WorldWideScience

Sample records for procedural method based

  1. Methods for testing the logical structure of plant procedure documents

    International Nuclear Information System (INIS)

    Horne, C.P.; Colley, R.; Fahley, J.M.

    1990-01-01

    This paper describes an ongoing EPRI project to investigate computer-based methods to improve the development, maintenance, and verification of plant operating procedures. The project began as an evaluation of the applicability of structured software analysis methods to operating procedures. It was found that these methods offer benefits if procedures are transformed into a structured representation that makes them amenable to computer analysis. The next task was to investigate methods to transform procedures into such a structured representation. The use of natural language techniques to read and compile the procedure documents appears viable for this purpose and supports conformity to guidelines. The final task was to consider possibilities for automated verification of procedures. Methods to help verify procedures were defined and their information requirements specified. These methods take the structured representation of procedures as input. The software system being constructed in this project is called PASS, standing for Procedures Analysis Software System.
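The transformation described above, from free-text procedure steps to a structured representation, can be illustrated with a toy parser. This is a minimal sketch assuming a hypothetical step grammar (`IF <condition>, <ACTION> <object>`); the grammar and field names are invented for illustration and are not taken from the PASS system.

```python
import re

# Hypothetical step grammar: an optional "IF <condition>, " clause followed
# by an action verb and its object. Real procedure language is far richer;
# this only illustrates what a structured step record could look like.
STEP_PATTERN = re.compile(
    r"^(?:IF (?P<condition>.+?), )?(?P<action>\w+) (?P<object>.+)$"
)

def parse_step(text):
    """Return a structured record for one procedure step, or None."""
    m = STEP_PATTERN.match(text.strip())
    if m is None:
        return None
    return {
        "condition": m.group("condition"),  # None for unconditional steps
        "action": m.group("action").lower(),
        "object": m.group("object"),
    }

step = parse_step("IF reactor pressure exceeds 1050 psig, OPEN relief valve RV-2")
```

Once steps are records rather than prose, checks such as "every conditional step names a measurable plant parameter" become simple queries over the structured data.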

  2. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    International Nuclear Information System (INIS)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-01-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that further enhances safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs the ability to perform logical operations in order to adjust to inputs received either from users or from real-time plant status databases; without that ability, the procedure is just an electronic copy of the paper-based procedure. To provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver the data and instructions that create the steps. The procedure should be broken down into basic elements and formatted in a standard way for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, in which basic elements build each step of the smart procedure. The attributes of each step determine the type of functionality that the system will generate for that step. The CBPS provides the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the procedure writer having to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the
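The step-as-element idea above can be made concrete with a small sketch. The element names, attributes, and behaviour table below are invented for illustration; they are not the INL schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical procedure fragment: each step is a basic XML element, and the
# "type" attribute tells the CBPS what behaviour to generate for that step.
PROCEDURE_XML = """
<procedure id="OP-123">
  <step id="1" type="instruction">Verify pump P-1A is running.</step>
  <step id="2" type="decision">Is discharge pressure above 120 psig?</step>
  <step id="3" type="input" unit="psig">Record discharge pressure.</step>
</procedure>
"""

def render_steps(xml_text):
    """Yield (step id, behaviour) pairs a CBPS could generate per step."""
    behaviours = {
        "instruction": "display text and wait for acknowledgement",
        "decision": "display question and branch on yes/no answer",
        "input": "display prompt and accept a typed value",
    }
    root = ET.fromstring(xml_text)
    for step in root.findall("step"):
        yield step.get("id"), behaviours[step.get("type")]

plan = dict(render_steps(PROCEDURE_XML))
```

The point of the design is visible here: adding a new step to the procedure means editing data, not reprogramming the CBPS.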

  3. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-02-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that further enhances safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs the ability to perform logical operations in order to adjust to inputs received either from users or from real-time plant status databases; without that ability, the procedure is just an electronic copy of the paper-based procedure. To provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver the data and instructions that create the steps. The procedure should be broken down into basic elements and formatted in a standard way for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, in which basic elements build each step of the smart procedure. The attributes of each step determine the type of functionality that the system will generate for that step. The CBPS provides the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the procedure writer having to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the

  4. Determining procedures for simulation-based training in radiology

    DEFF Research Database (Denmark)

    Nayahangan, Leizl Joy; Nielsen, Kristina Rue; Albrecht-Beste, Elisabeth

    2018-01-01

    OBJECTIVES: New training modalities such as simulation are widely accepted in radiology; however, development of effective simulation-based training programs is challenging. They are often unstructured and based on convenience or coincidence. The study objective was to perform a nationwide needs assessment to identify and prioritize technical procedures that should be included in a simulation-based curriculum. METHODS: A needs assessment using the Delphi method was completed among 91 key leaders in radiology. Round 1 identified technical procedures that radiologists should learn. Round 2 explored... ...and basic abdominal ultrasound. CONCLUSION: A needs assessment identified and prioritized 13 technical procedures to include in a simulation-based curriculum. The list may be used as a guide for development of training programs. KEY POINTS: • Simulation-based training can supplement training on patients...

  5. An electrochemical procedure coupled with a Schiff base method; application to electroorganic synthesis of new nitrogen-containing heterocycles

    International Nuclear Information System (INIS)

    Dowlati, Bahram; Othman, Mohamed Rozali

    2013-01-01

    Nitrogen-containing heterocycles have been synthesized using chemical and electrochemical methods. The direct chemical synthesis of the nucleophiles proceeds through the Schiff base reaction, in which dicarbonyl compounds react with diamines to form the products. The results indicate that the Schiff base chemical method synthesized the product in excellent overall yield. In the electrochemical step, a series of nitrogen-containing compounds was electrosynthesized. Various parameters, such as the applied potential, the pH of the electrolytic solution, the cell configuration, and the purification techniques, were investigated to optimize the yields of the corresponding products. New nitrogen-containing heterocycle derivatives were thus synthesized using an electrochemical procedure coupled with a Schiff base reaction as a facile, efficient, and practical method. The products were characterized after purification by IR, 1H NMR, 13C NMR and ESI-MS2.

  6. Completely automated modal analysis procedure based on the combination of different OMA methods

    Science.gov (United States)

    Ripamonti, Francesco; Bussini, Alberto; Resta, Ferruccio

    2018-03-01

    In this work, a completely automated output-only modal analysis procedure is presented and its benefits are listed. Based on the merging of different Operational Modal Analysis (OMA) methods and a statistical approach, the identification process has been made more robust, yielding only the true natural frequencies, damping ratios, and mode shapes of the system. The effect of temperature can be taken into account as well, leading to a better tool for automated Structural Health Monitoring. The algorithm has been developed and tested on a numerical model of a scaled three-story steel building located in the laboratories of Politecnico di Milano.
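The merging idea can be sketched as a consensus rule: pole (natural-frequency) estimates from several OMA methods are pooled, and only frequencies that at least two methods agree on, within a tolerance, are accepted as physical modes. The numbers, the 1% tolerance, and the voting rule below are illustrative assumptions, not the paper's algorithm.

```python
def consensus_modes(estimates, rel_tol=0.01, min_votes=2):
    """estimates: dict of method name -> list of identified frequencies [Hz].

    Cluster pooled frequencies and keep clusters supported by >= min_votes
    estimates; spurious poles found by a single method are discarded.
    """
    pooled = sorted(f for freqs in estimates.values() for f in freqs)
    accepted, cluster = [], [pooled[0]]
    for f in pooled[1:]:
        if f - cluster[-1] <= rel_tol * cluster[-1]:
            cluster.append(f)          # close to previous estimate: same mode
        else:
            if len(cluster) >= min_votes:
                accepted.append(sum(cluster) / len(cluster))
            cluster = [f]
    if len(cluster) >= min_votes:
        accepted.append(sum(cluster) / len(cluster))
    return accepted

modes = consensus_modes({
    "SSI":   [1.52, 4.71, 8.90, 12.3],   # 12.3 Hz: a spurious pole
    "FDD":   [1.51, 4.73, 8.93],
    "pLSCF": [1.53, 4.70, 8.88],
})
```

The spurious 12.3 Hz pole is rejected because only one method reports it, which is the sense in which combining methods makes the identification more robust.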

  7. Modal-pushover-based ground-motion scaling procedure

    Science.gov (United States)

    Kalkan, Erol; Chopra, Anil K.

    2011-01-01

    Earthquake engineering is increasingly using nonlinear response history analysis (RHA) to demonstrate the performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. This paper presents a modal-pushover-based scaling (MPS) procedure to scale ground motions for use in nonlinear RHA of buildings. In the MPS method, the ground motions are scaled to match, to a specified tolerance, a target value of the inelastic deformation of the first-mode inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-mode pushover analysis. Appropriate for first-mode dominated structures, this approach is extended to structures with significant higher-mode contributions by considering the elastic deformation of second-mode SDF systems in selecting a subset of the scaled ground motions. Based on results presented for three actual buildings (4, 6, and 13 stories), the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.
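The core computation behind such scaling can be sketched for the simpler linear case: integrate a first-mode SDF system's response to a record and find the factor that makes the peak deformation hit a target. For a linear SDF the factor is just target/peak; the actual MPS procedure targets the inelastic SDF, where this step must be iterated. The record, period, and 5 cm target below are illustrative.

```python
import math

def peak_sdf_deformation(accel, dt, period, zeta=0.05):
    """Peak deformation of a linear SDF system (central-difference scheme).

    accel: ground acceleration samples [m/s^2], dt: time step [s].
    """
    wn = 2 * math.pi / period
    a = 1 / dt**2 + zeta * wn / dt
    u_prev = u = peak = 0.0
    for ag in accel:
        # u'' + 2*zeta*wn*u' + wn^2*u = -ag, discretized centrally
        u_next = (-ag - wn**2 * u + (2 * u - u_prev) / dt**2
                  + zeta * wn * u_prev / dt) / a
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return peak

# A crude synthetic "record": a few seconds of a decaying 2 Hz sine.
dt = 0.01
record = [3.0 * math.exp(-0.5 * t) * math.sin(2 * math.pi * 2.0 * t)
          for t in (i * dt for i in range(1000))]

peak = peak_sdf_deformation(record, dt, period=1.0)
scale = 0.05 / peak   # factor that makes the peak deformation 0.05 m
```

Because the system here is linear, doubling the record exactly doubles the peak; the inelastic case loses that proportionality, which is why MPS iterates on the scale factor.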

  8. Office-Based Procedures for the Diagnosis and Treatment of Laryngeal Pathology.

    Science.gov (United States)

    Wellenstein, David J; Schutte, Henrieke W; Takes, Robert P; Honings, Jimmie; Marres, Henri A M; Burns, James A; van den Broek, Guido B

    2017-09-18

    Since the development of distal chip endoscopes with a working channel, diagnostic and therapeutic possibilities in the outpatient clinic in the management of laryngeal pathology have increased. Which of these office-based procedures are currently available, and their clinical indications and possible advantages, remains unclear. Review of literature on office-based procedures in laryngology and head and neck oncology. Flexible endoscopic biopsy (FEB), vocal cord injection, and laser surgery are well-established office-based procedures that can be performed under topical anesthesia. These procedures demonstrate good patient tolerability and multiple advantages. Office-based procedures under topical anesthesia are currently an established method in the management of laryngeal pathology. These procedures offer medical and economic advantages compared with operating room-performed procedures. Furthermore, office-based procedures enhance the speed and timing of the diagnostic and therapeutic process.

  9. Comparison of extraction chromatography and a procedure based on the molecular recognition method as separation methods in the determination of neptunium and plutonium radionuclides

    International Nuclear Information System (INIS)

    Strisovska, Jana; Galanda, Dusan; Drabova, Veronika; Kuruc, Jozef

    2012-01-01

    The potential of various types of sorbents for the separation of plutonium and neptunium radionuclides was examined. Extraction chromatography and a procedure based on the molecular recognition method were used for the separation. The suitability of the various sorbent types and brands for this purpose was determined. (orig.)

  10. 40 CFR 63.1365 - Test methods and initial compliance procedures.

    Science.gov (United States)

    2010-07-01

    ... permit limit applicable to the process vent. (D) Design analysis based on accepted chemical engineering... concentration, temperature, and the reaction kinetics of the constituents with the scrubbing liquid. The design... procedures specified in Method 8260 or 8270 in “Test Methods for Evaluating Solid Waste, Physical/Chemical...

  11. An Efficient Upscaling Procedure Based on Stokes-Brinkman Model and Discrete Fracture Network Method for Naturally Fractured Carbonate Karst Reservoirs

    KAUST Repository

    Qin, Guan; Bi, Linfeng; Popov, Peter; Efendiev, Yalchin; Espedal, Magne

    2010-01-01

    ...fractures and their interconnectivities in coarse-scale simulation models. In this paper, we present a procedure based on our previously proposed Stokes-Brinkman model (SPE 125593) and the discrete fracture network method for accurate and efficient upscaling...

  12. Usability test of the ImPRO, computer-based procedure system

    International Nuclear Information System (INIS)

    Jung, Y.; Lee, J.

    2006-01-01

    ImPRO is a computer-based procedure system that presents procedures as both flowcharts and success logic trees. It was evaluated against computer-based procedure guidelines and satisfies most requirements for presentation and functionality. In addition, an SGTR scenario was performed with ImPRO to evaluate reading comprehension and situation awareness. ImPRO is a software engine that interprets a procedure script language, so it is reliable by nature and has been verified with formal methods. One bug, however, remained hidden for a year after release, but it was fixed. Finally, backup paper procedures can be prepared in the same format as the VDU display in case of ImPRO failure. (authors)

  13. Risk Control Through the Use of Procedures - A Method for Evaluating the Change in Risk

    Science.gov (United States)

    Praino, Gregory; Sharit, Joseph

    2010-01-01

    Organizations use procedures to influence or control the behavior of their workers, but often have no basis for determining whether an additional rule or procedural control will be beneficial. This paper outlines a proposed method for determining whether the addition or removal of procedural controls will impact the occurrence of critical consequences. The proposed method focuses on two aspects: how valuable the procedural control is, based on the inevitability of the consequence and the opportunity to intervene; and how likely the control is to fail, based on five procedural design elements that address how well the rule or control has been Defined, Assigned, Trained, Organized, and Monitored, referred to as the DATOM elements.
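The two-part evaluation described above can be sketched as a scoring function: a control's value derived from consequence inevitability and the opportunity to intervene, versus its likelihood of failure derived from the five DATOM elements. The 0-to-1 scales and the way scores are combined below are our illustrative assumptions, not the authors' scoring rules.

```python
DATOM = ("defined", "assigned", "trained", "organized", "monitored")

def control_assessment(inevitability, intervention_opportunity, datom_scores):
    """Return (value, failure_likelihood) for one procedural control.

    inevitability, intervention_opportunity: 0..1.
    datom_scores: dict element -> 0..1 design-quality score (1 = well designed).
    """
    # A control is valuable when the consequence is avoidable and there is
    # a real chance to intervene before it occurs.
    value = (1 - inevitability) * intervention_opportunity
    # Treat the control as only as strong as its weakest design element.
    weakest = min(datom_scores[e] for e in DATOM)
    return value, 1 - weakest

value, p_fail = control_assessment(
    inevitability=0.2, intervention_opportunity=0.9,
    datom_scores={"defined": 0.9, "assigned": 0.8, "trained": 0.6,
                  "organized": 0.9, "monitored": 0.7},
)
```

Here the hypothetical control is valuable (0.72) but the weak "trained" element dominates its failure likelihood, pointing at where to improve the control rather than whether to add another rule.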

  14. What Do You Think You Are Measuring? A Mixed-Methods Procedure for Assessing the Content Validity of Test Items and Theory-Based Scaling

    Science.gov (United States)

    Koller, Ingrid; Levenson, Michael R.; Glück, Judith

    2017-01-01

    The valid measurement of latent constructs is crucial for psychological research. Here, we present a mixed-methods procedure for improving the precision of construct definitions, determining the content validity of items, evaluating the representativeness of items for the target construct, generating test items, and analyzing items on a theoretical basis. To illustrate the mixed-methods content-scaling-structure (CSS) procedure, we analyze the Adult Self-Transcendence Inventory, a self-report measure of wisdom (ASTI, Levenson et al., 2005). A content-validity analysis of the ASTI items was used as the basis of psychometric analyses using multidimensional item response models (N = 1215). We found that the new procedure produced important suggestions concerning five subdimensions of the ASTI that were not identifiable using exploratory methods. The study shows that the application of the suggested procedure leads to a deeper understanding of latent constructs. It also demonstrates the advantages of theory-based item analysis. PMID:28270777

  15. Mixture-based gatekeeping procedures in adaptive clinical trials.

    Science.gov (United States)

    Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji

    2018-01-01

    Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
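The gatekeeping idea, independent of the mixture machinery, can be shown with the simplest case: a secondary family of hypotheses is tested only if the primary (gatekeeper) family is fully rejected. This is a plain serial gatekeeper with a Bonferroni split, not the paper's mixture method, and the p-values are invented for illustration.

```python
def serial_gatekeeper(primary_p, secondary_p, alpha=0.05):
    """Return rejection lists for two ordered hypothesis families.

    The secondary family is only tested when every primary null is
    rejected (the "gate" opens); each family uses a Bonferroni split.
    """
    reject_primary = [p <= alpha / len(primary_p) for p in primary_p]
    gate_open = all(reject_primary)
    reject_secondary = [gate_open and p <= alpha / len(secondary_p)
                        for p in secondary_p]
    return reject_primary, reject_secondary

prim, sec = serial_gatekeeper([0.003, 0.012], [0.010, 0.200])
```

If either primary p-value had failed its threshold, the gate would stay closed and no secondary hypothesis could be rejected, however small its p-value; that logical dependence is what gatekeeping procedures formalize.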

  16. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    Energy Technology Data Exchange (ETDEWEB)

    Throneburg, E. B.; Jones, J. M. [AREVA NP Inc., 7207 IBM Drive, Charlotte, NC 28262 (United States)

    2006-07-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  17. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    International Nuclear Information System (INIS)

    Throneburg, E. B.; Jones, J. M.

    2006-01-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  18. 40 CFR 76.15 - Test methods and procedures.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Test methods and procedures. 76.15 Section 76.15 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.15 Test methods and procedures. (a) The...

  19. Antimicrobial Testing Methods & Procedures Developed by EPA's Microbiology Laboratory

    Science.gov (United States)

    We develop antimicrobial testing methods and standard operating procedures to measure the effectiveness of hard surface disinfectants against a variety of microorganisms. Find methods and procedures for antimicrobial testing.

  20. Anatomical and procedural determinants of catheter-based renal denervation

    NARCIS (Netherlands)

    Ewen, Sebastian; Ukena, Christian; Lüscher, Thomas Felix; Bergmann, Martin; Blankestijn, Peter J; Blessing, Erwin; Cremers, Bodo; Dörr, Oliver; Hering, Dagmara; Kaiser, Lukas; Nef, Holger; Noory, Elias; Schlaich, Markus; Sharif, Faisal; Sudano, Isabella; Vogel, Britta; Voskuil, Michiel; Zeller, Thomas; Tzafriri, Abraham R; Edelman, Elazer R; Lauder, Lucas; Scheller, Bruno; Böhm, Michael; Mahfoud, Felix

    2016-01-01

    BACKGROUND/PURPOSE: Catheter-based renal sympathetic denervation (RDN) can reduce blood pressure (BP) and sympathetic activity in certain patients with uncontrolled hypertension. Less is known about the impact of renal anatomy and procedural parameters on subsequent BP response. METHODS/MATERIALS: A

  1. Determining procedures for simulation-based training in radiology: a nationwide needs assessment.

    Science.gov (United States)

    Nayahangan, Leizl Joy; Nielsen, Kristina Rue; Albrecht-Beste, Elisabeth; Bachmann Nielsen, Michael; Paltved, Charlotte; Lindorff-Larsen, Karen Gilboe; Nielsen, Bjørn Ulrik; Konge, Lars

    2018-01-09

    New training modalities such as simulation are widely accepted in radiology; however, development of effective simulation-based training programs is challenging. They are often unstructured and based on convenience or coincidence. The study objective was to perform a nationwide needs assessment to identify and prioritize technical procedures that should be included in a simulation-based curriculum. A needs assessment using the Delphi method was completed among 91 key leaders in radiology. Round 1 identified technical procedures that radiologists should learn. Round 2 explored frequency of procedure, number of radiologists performing the procedure, risk and/or discomfort for patients, and feasibility for simulation. Round 3 was elimination and prioritization of procedures. Response rates were 67 %, 70 % and 66 %, respectively. In Round 1, 22 technical procedures were included. Round 2 resulted in pre-prioritization of procedures. In Round 3, 13 procedures were included in the final prioritized list. The three highly prioritized procedures were ultrasound-guided (US) histological biopsy and fine-needle aspiration, US-guided needle puncture and catheter drainage, and basic abdominal ultrasound. A needs assessment identified and prioritized 13 technical procedures to include in a simulation-based curriculum. The list may be used as a guide for development of training programs. • Simulation-based training can supplement training on patients in radiology. • Development of simulation-based training should follow a structured approach. • The CAMES Needs Assessment Formula explores needs for simulation training. • A national Delphi study identified and prioritized procedures suitable for simulation training. • The prioritized list serves as guide for development of courses in radiology.

  2. Development and verification of symptom based emergency procedure support system

    International Nuclear Information System (INIS)

    Saijou, Nobuyuki; Sakuma, Akira; Takizawa, Yoji; Tamagawa, Naoko; Kubota, Ryuji; Satou, Hiroyuki; Ikeda, Koji; Taminami, Tatsuya

    1998-01-01

    A Computerized Emergency Procedure Guideline (EPG) Support System has been developed for BWRs and evaluated using a training simulator. It aims to enhance the effective utilization of the EPG. The system automatically identifies suitable symptom-based operating procedures for the present plant status. It has two functions: a plant status identification function and a man-machine interface function. For the former, a method has been developed that identifies and prioritizes suitable symptom-based operating procedures against the present plant status. As the man-machine interface, an operation flow chart display has been developed that expresses the flow of the identified operating procedures graphically. For easy understanding, important information such as plant status changes, the priority of operating procedures, and the completion status of each operation is shown on the operation flow display in different colors. As an evaluation test, the response of the system to design basis accidents was evaluated by actual plant operators using the training simulator at the BWR Training Center. Analysis of interviews with and questionnaires given to the operators showed that the system is effective and can be utilized in a real plant. (author)
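The plant status identification function described above is, at its core, a matching and ranking step: compare observed symptoms against each symptom-based procedure's entry conditions and order the applicable procedures by priority. The symptom names, procedures, and priorities below are invented for illustration.

```python
# Hypothetical symptom-based procedure table (names and priorities invented).
PROCEDURES = [
    {"name": "RC level control",    "symptoms": {"rpv_level_low"},        "priority": 1},
    {"name": "Containment control", "symptoms": {"drywell_pressure_high"}, "priority": 2},
    {"name": "RC pressure control", "symptoms": {"rpv_pressure_high"},     "priority": 3},
]

def applicable_procedures(observed_symptoms):
    """Return names of applicable procedures, highest priority first."""
    # A procedure applies when all of its entry symptoms are observed.
    hits = [p for p in PROCEDURES if p["symptoms"] <= observed_symptoms]
    return [p["name"] for p in sorted(hits, key=lambda p: p["priority"])]

ranked = applicable_procedures({"rpv_level_low", "rpv_pressure_high"})
```

The ranked list is exactly what an operation flow display would render, with priority encoded in ordering or color.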

  3. Veterinary Students' Recollection Methods for Surgical Procedures

    DEFF Research Database (Denmark)

    Langebaek, Rikke; Tanggaard, Lene; Berendt, Mette

    2016-01-01

    When veterinary students face their first live animal surgeries, their level of anxiety is generally high and this can affect their ability to recall the procedure they are about to undertake. Multimodal teaching methods have previously been shown to enhance learning and facilitate recall; however, student preferences for recollection methods when translating theory into practice have not been documented. The aim of this study was to investigate veterinary students' experience with recollection of a surgical procedure they were about to perform after using multiple methods for preparation. From a group of 171 veterinary students enrolled in a basic surgery course, 26 students were randomly selected to participate in semi-structured interviews. Results showed that 58% of the students used a visual, dynamic method of recollection, mentally visualizing the video they had watched as part...

  4. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room, mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e., what features from the use of paper-based procedures should not be incorporated (Stop), what should be kept (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue items was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  5. International Conference on Robust Rank-Based and Nonparametric Methods

    CERN Document Server

    McKean, Joseph

    2016-01-01

    The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small-sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, ranging from robust nonparametric rank-based procedures to Bayesian and big-data rank-based analyses. Areas of application include biostatistics and spatial statistics. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice, including linear, generalized linear, mixed, and nonlinear models. Settings are both multivariate and univariate. With the development of R packages in these areas, computation of these procedures is easily shared with r...

  6. State Token Petri Net modeling method for formal verification of computerized procedure including operator's interruptions of procedure execution flow

    International Nuclear Information System (INIS)

    Kim, Yun Goo; Seong, Poong Hyun

    2012-01-01

    The Computerized Procedure System (CPS) is one of the primary operating support systems in the digital Main Control Room. The CPS displays procedure on the computer screen in the form of a flow chart, and displays plant operating information along with procedure instructions. It also supports operator decision making by providing a system decision. A procedure flow should be correct and reliable, as an error would lead to operator misjudgement and inadequate control. In this paper we present a modeling for the CPS that enables formal verification based on Petri nets. The proposed State Token Petri Nets (STPN) also support modeling of a procedure flow that has various interruptions by the operator, according to the plant condition. STPN modeling is compared with Coloured Petri net when they are applied to Emergency Operating Computerized Procedure. A converting program for Computerized Procedure (CP) to STPN has been also developed. The formal verification and validation methods of CP with STPN increase the safety of a nuclear power plant and provide digital quality assurance means that are needed when the role and function of the CPS is increasing.
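The Petri-net modelling idea can be made concrete with a minimal token-firing sketch: a transition fires only when every input place holds a token, moving tokens to its output places. The paper's State Token Petri Nets extend this with state-dependent tokens to model operator interruptions; the toy net below omits that and uses invented place names.

```python
def fire(marking, transition):
    """Fire one transition {'in': [...], 'out': [...]} if enabled.

    Returns the new marking; a disabled transition leaves it unchanged.
    """
    if any(marking.get(p, 0) < 1 for p in transition["in"]):
        return marking
    new = dict(marking)
    for p in transition["in"]:
        new[p] -= 1
    for p in transition["out"]:
        new[p] = new.get(p, 0) + 1
    return new

# Two sequential procedure steps modelled as transitions; step 2 also
# requires an operator acknowledgement token before it can fire.
t_step1 = {"in": ["step1_ready"], "out": ["step1_done", "step2_ready"]}
t_step2 = {"in": ["step2_ready", "operator_ack"], "out": ["step2_done"]}

m = {"step1_ready": 1, "operator_ack": 1}
m = fire(m, t_step1)
m = fire(m, t_step2)
```

Formal verification then amounts to exploring the reachable markings of such a net and checking, for example, that every execution path can reach the final "done" place.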

  7. A Tuning Procedure for ARX-based MPC of Multivariate Processes

    DEFF Research Database (Denmark)

    Olesen, Daniel; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp

    2013-01-01

    We present an optimization-based tuning procedure with certain robustness properties for an offset-free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The stochastic part of the ARX model identified from input-output data is modified with an ARMA model, designed as part of the MPC design procedure, to ensure offset-free control. The MPC is designed and implemented based on a state space model in innovation form. Expressions for the closed-loop dynamics of the unconstrained system are used to derive the sensitivity function ... to a constraint on the maximum of the sensitivity function. The latter constraint provides a robustness measure that is essential for the procedure. The method is demonstrated for two simulated examples: a Wood-Berry distillation column example and a cement mill example.
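The first step of such a design, identifying an ARX model from input-output data, can be sketched for the simplest case: a first-order, single-input model y[k] = -a·y[k-1] + b·u[k-1] + e[k], fitted by least squares via the normal equations. The data and true parameters are synthetic; the ARMA modification, MPC design, and sensitivity-function tuning are beyond this sketch.

```python
import random

random.seed(0)
a_true, b_true = -0.8, 0.5   # y[k] = 0.8*y[k-1] + 0.5*u[k-1] + noise
u = [random.uniform(-1, 1) for _ in range(500)]
y = [0.0]
for k in range(1, 500):
    y.append(-a_true * y[k - 1] + b_true * u[k - 1] + random.gauss(0, 0.01))

# Least squares for theta = (a_hat, b_hat) with regressors (-y[k-1], u[k-1]):
# accumulate the 2x2 normal equations and solve them in closed form.
s11 = s12 = s22 = r1 = r2 = 0.0
for k in range(1, 500):
    phi1, phi2 = -y[k - 1], u[k - 1]
    s11 += phi1 * phi1
    s12 += phi1 * phi2
    s22 += phi2 * phi2
    r1 += phi1 * y[k]
    r2 += phi2 * y[k]
det = s11 * s22 - s12 * s12
a_hat = (s22 * r1 - s12 * r2) / det
b_hat = (s11 * r2 - s12 * r1) / det
```

With a persistently exciting input (here, uniform random u) the estimates converge close to the true parameters, which is the property the MPC design relies on when it treats the identified ARX model as the plant.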

  8. 40 CFR 63.645 - Test methods and procedures for miscellaneous process vents.

    Science.gov (United States)

    2010-07-01

    ... analysis based on accepted chemical engineering principles, measurable process parameters, or physical or... for a new source, Method 18 may be used to determine any non-VOC hydrocarbons that may be deducted to calculate the TOC (minus non-VOC hydrocarbons) concentration and mass flow rate. The following procedures...

  9. Soil Conservation Service Curve Number method: How to mend a wrong soil moisture accounting procedure?

    Science.gov (United States)

    Michel, Claude; Andréassian, Vazken; Perrin, Charles

    2005-02-01

    This paper unveils major inconsistencies in the age-old and yet efficient Soil Conservation Service Curve Number (SCS-CN) procedure. Our findings are based on an analysis of the continuous soil moisture accounting procedure implied by the SCS-CN equation. It is shown that several flaws plague the original SCS-CN procedure, the most important one being a confusion between intrinsic parameter and initial condition. A change of parameterization and a more complete assessment of the initial condition lead to a renewed SCS-CN procedure, while keeping the acknowledged efficiency of the original method.
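    The classical SCS-CN relation whose soil moisture accounting the paper critiques is well documented; a sketch of the original form, including the conventional initial-abstraction ratio of 0.2 that is among the parameterization choices the paper questions, is:

```python
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Direct runoff Q (mm) from event rainfall P (mm) by the classical
    SCS-CN equation. S is the potential maximum retention, Ia = lam * S the
    initial abstraction (lam = 0.2 in the original method)."""
    s = 25400.0 / cn - 254.0          # retention S in mm for curve number CN
    ia = lam * s                      # initial abstraction in mm
    if p_mm <= ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(round(scs_cn_runoff(100.0, 80), 1))  # → 50.5
```

    The paper's renewed procedure keeps this event equation's efficiency but changes the parameterization and the treatment of the initial condition when the method is run continuously.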

  10. Phenomenological Research Method, Design and Procedure: A ...

    African Journals Online (AJOL)

    Phenomenological Research Method, Design and Procedure: A Phenomenological Investigation of the Phenomenon of Being-in-Community as Experienced by Two Individuals Who Have Participated in a Community Building Workshop.

  11. Office-based deep sedation for pediatric ophthalmologic procedures using a sedation service model.

    Science.gov (United States)

    Lalwani, Kirk; Tomlinson, Matthew; Koh, Jeffrey; Wheeler, David

    2012-01-01

    Aims. (1) To assess the efficacy and safety of pediatric office-based sedation for ophthalmologic procedures using a pediatric sedation service model. (2) To assess the reduction in hospital charges of this model of care delivery compared to the operating room (OR) setting for similar procedures. Background. Sedation is used to facilitate pediatric procedures and to immobilize patients for imaging and examination. We believe that the pediatric sedation service model can be used to facilitate office-based deep sedation for brief ophthalmologic procedures and examinations. Methods. After IRB approval, all children who underwent office-based ophthalmologic procedures at our institution between January 1, 2000 and July 31, 2008 were identified using the sedation service database and the electronic health record. A comparison of hospital charges between similar procedures in the operating room was performed. Results. A total of 855 procedures were reviewed. Procedure completion rate was 100% (C.I. 99.62-100). There were no serious complications or unanticipated admissions. Our analysis showed a significant reduction in hospital charges (average of $1287 per patient) as a result of absent OR and recovery unit charges. Conclusions. Pediatric ophthalmologic minor procedures can be performed using a sedation service model with significant reductions in hospital charges.

  13. A conceptual application for computer-based procedures for handheld devices

    Energy Technology Data Exchange (ETDEWEB)

    Sofie, Lunde-Hanssen Linda [Industrial Psychology, Institute for Energy Technology, Halden (Norway)

    2014-08-15

    This paper describes the concepts and proposed design principles for an application for computer-based procedures (CBPs) for field operators in the nuclear domain (so-called handheld procedures). The concept is focused on the field operators' work with procedures and the communication and coordination between field operators and control room operators. The goal is to overcome challenges with shared situation awareness (SA) in a distributed team by providing effective and usable information design. An iterative design method and user-centred design is used for tailoring the concept to the context of field operations. The resulting concept supports the execution of procedures where close collaboration is needed between control room and field operations, e.g. where particular procedure steps are executed from remote control points and others from the control room. The resulting conceptual application for CBPs on handheld devices is developed for mitigating the SA challenges and designing for usability and ease of use.

  15. Comparison of a novel spray congealing procedure with emulsion-based methods for the micro-encapsulation of water-soluble drugs in low melting point triglycerides.

    Science.gov (United States)

    McCarron, Paul A; Donnelly, Ryan F; Al-Kassas, Rasil

    2008-09-01

    The particle size characteristics and encapsulation efficiency of microparticles prepared using triglyceride materials and loaded with two model water-soluble drugs were evaluated. Two emulsification procedures based on o/w and w/o/w methodologies were compared to a novel spray congealing procedure. After extensive modification of both emulsification methods, encapsulation efficiencies of 13.04% tetracycline HCl and 11.27% lidocaine HCl were achievable in a Witepsol-based microparticle. This compares to much improved encapsulation efficiencies close to 100% for the spray congealing method, which was shown to produce spherical particles of approximately 58 µm. Drug release studies from a Witepsol formulation loaded with lidocaine HCl showed a temperature-dependent release mechanism, which displayed diffusion-controlled kinetics at temperatures of approximately 25 °C, but exhibited almost immediate release when triggered using temperatures close to that of skin. Therefore, such a system may find application in topical semi-solid formulations, where a temperature-induced burst release is preferred.
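    Encapsulation efficiency figures such as the 13.04% and close-to-100% values above follow from the usual textbook definition, actual drug recovered over drug added (the paper's specific assay protocol is not reproduced here):

```python
def encapsulation_efficiency(actual_drug_mg, theoretical_drug_mg):
    """Encapsulation efficiency in percent: drug actually recovered in the
    microparticles relative to the amount added during preparation."""
    return 100.0 * actual_drug_mg / theoretical_drug_mg

# Hypothetical batch: 100 mg of drug added, 13.04 mg recovered.
print(round(encapsulation_efficiency(13.04, 100.0), 2))  # → 13.04
```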

  16. Methods for implementing revisions to emergency operating procedures. Final report

    International Nuclear Information System (INIS)

    Myers, L.B.; Bell, A.J.

    1984-05-01

    In response to the Three Mile Island (TMI) accident, the US Nuclear Regulatory Commission (NRC) has published the TMI Action Plan. The TMI Action Plan Item I.C.1 called for the upgrading of Emergency Operating Procedures (EOPs) at nuclear power plants. The program developed from this Action Plan item has resulted in utility efforts to: (1) revise EOPs; (2) train personnel in the use of the EOPs; and (3) implement the revised EOPs. The NRC supported the study presented in this report to identify factors which influence the effectiveness of training and implementation of revised EOPs. The NRC's major concern was the possible effects of negative transfer of training. The report includes a summary of existing methods for implementing revisions to procedures based on interviews of plant personnel, a review of the training literature applicable to the effect of previously learned procedures on the learning of and performance with revised procedures (i.e., negative transfer) and recommendations of methods and schedules for implementing revised EOPs. While the study found that the concern over negative transfer of training was not as great as anticipated, several recommendations were made. These include: (1) overtraining of operators to reduce the effect of observed negative transfer; and (2) implementation of the revised EOPs as soon as possible after training to minimize the time operators must rely upon the old EOPs after having been trained on the revised EOPs. The results of the study should be useful both to the utilities and the NRC in the development and review of EOP implementation programs

  17. Possible overestimation of surface disinfection efficiency by assessment methods based on liquid sampling procedures as demonstrated by in situ quantification of spore viability.

    Science.gov (United States)

    Grand, I; Bellon-Fontaine, M-N; Herry, J-M; Hilaire, D; Moriconi, F-X; Naïtali, M

    2011-09-01

    The standard test methods used to assess the efficiency of a disinfectant applied to surfaces are often based on counting the microbial survivors sampled in a liquid, but total cell removal from surfaces is seldom achieved. One might therefore wonder whether evaluations of microbial survivors in liquid-sampled cells are representative of the levels of survivors in whole populations. The present study was thus designed to determine the "damaged/undamaged" status induced by a peracetic acid disinfection for Bacillus atrophaeus spores deposited on glass coupons directly on this substrate and to compare it to the status of spores collected in liquid by a sampling procedure. The method utilized to assess the viability of both surface-associated and liquid-sampled spores included fluorescence labeling with a combination of Syto 61 and Chemchrome V6 dyes and quantifications by analyzing the images acquired by confocal laser scanning microscopy. The principal result of the study was that the viability of spores sampled in the liquid was found to be poorer than that of surface-associated spores. For example, after 2 min of peracetic acid disinfection, less than 17% ± 5% of viable cells were detected among liquid-sampled cells compared to 79% ± 5% or 47% ± 4%, respectively, when the viability was evaluated on the surface after or without the sampling procedure. Moreover, assessments of the survivors collected in the liquid phase, evaluated using the microscopic method and standard plate counts, were well correlated. Evaluations based on the determination of survivors among the liquid-sampled cells can thus overestimate the efficiency of surface disinfection procedures.

  18. Proposal for an Evaluation Method for the Performance of Work Procedures.

    Science.gov (United States)

    Mohammed, Mouda; Mébarek, Djebabra; Wafa, Boulagouas; Makhlouf, Chati

    2016-12-01

    Noncompliance of operators with work procedures is a recurrent problem. This behavior is known to be situational and has been studied by many different approaches (ergonomic and others), which take the noncompliance itself as given and seek to analyze its causes and consequences. The proposed method instead addresses the problem by focusing on the performance of the work procedures themselves and improving that performance on a continuous basis. This study has multiple results: (1) assessment of work procedures' performance by a multicriteria approach; (2) use of a continuous improvement approach as a framework to sustain the assessment method; and (3) adaptation of the Stop-Card as a facilitating support for continuous improvement of work procedures. In contrast to conventional approaches, which accept noncompliance as evident and analyze its cause-effect relationships, the proposed method emphasizes the value of continuously improving the work procedures themselves, especially in strategic industries.

  19. Assessment of creep-fatigue damage using the UK strain based procedure

    International Nuclear Information System (INIS)

    Bate, S.K.

    1997-01-01

    The UK strain based procedures have been developed for the evaluation of damage in structures arising from fatigue cycles and creep processes. The fatigue damage is assessed on the basis of modelling crack growth from about one grain depth to an allowable limit which represents an engineering definition of crack formation. Creep damage is based upon the exhaustion of available ductility by creep strain accumulation. The procedures are applicable only when level A and B service conditions apply, as defined in RCC-MR or ASME Code Case N47. The procedures require the components of strain to be evaluated separately, so they may be used with either full inelastic analysis or simplified methods. To support the development of the UK strain based creep-fatigue procedures, an experimental program was undertaken by NNC to study creep-fatigue interaction of structures operating at high temperature. These tests, collectively known as the SALTBATH tests, considered solid cylinder and tube-plate specimens manufactured from Type 316 stainless steel. These specimens were subjected to thermal cycles between 250 deg. C and 600 deg. C. In all cases the thermal cycle produces tensile residual stresses during dwells at 600 deg. C. One of the tube-plate specimens was used as a benchmark for validating the strain based creep-fatigue procedures and subsequently as part of a CEC co-operative study. This benchmark work is described in this paper. A thermal and inelastic stress analysis was carried out using the finite element code ABAQUS. The inelastic behaviour of the material was described using the ORNL constitutive equations. A creep-fatigue assessment using the strain based procedures has been compared with an assessment using the RCC-MR inelastic rules. The analyses indicated that both the UK strain based procedures and the RCC-MR rules were conservative, but the conservatism was greater for the RCC-MR rules. (author). 8 refs, 8 figs, 4 tabs
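    The two damage ingredients named above, a fatigue usage fraction and creep ductility exhaustion, can be sketched as simple summations (illustrative numbers only; the actual UK strain based procedure involves many further rules and material data):

```python
def fatigue_damage(cycles_applied, cycles_to_crack_formation):
    """Miner-type fatigue usage: applied cycles over allowable cycles to
    engineering crack formation."""
    return cycles_applied / cycles_to_crack_formation

def creep_damage(creep_strain_increments, creep_ductility):
    """Ductility exhaustion: accumulated creep strain over the available
    creep ductility."""
    return sum(creep_strain_increments) / creep_ductility

# Hypothetical assessment: 500 cycles, each accumulating 1e-4 creep strain
# during the 600 deg. C dwell, against 2000 allowable cycles and 10% ductility.
d_total = fatigue_damage(500, 2000) + creep_damage([1e-4] * 500, 0.10)
print(round(d_total, 2))  # → 0.75 (acceptable if below a damage limit of 1)
```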

  20. An Efficient Upscaling Procedure Based on Stokes-Brinkman Model and Discrete Fracture Network Method for Naturally Fractured Carbonate Karst Reservoirs

    KAUST Repository

    Qin, Guan

    2010-01-01

    Naturally-fractured carbonate karst reservoirs are characterized by various-sized solution caves that are connected via fracture networks at multiple scales. These complex geologic features cannot be fully resolved in reservoir simulations due to the underlying uncertainty in geologic models and the large computational resource requirement. They also bring in multiple flow physics, which adds to the modeling difficulties. It is thus necessary to develop a method to accurately represent the effect of caves, fractures and their interconnectivities in coarse-scale simulation models. In this paper, we present a procedure based on our previously proposed Stokes-Brinkman model (SPE 125593) and the discrete fracture network method for accurate and efficient upscaling of naturally fractured carbonate karst reservoirs.
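    For reference, the Stokes-Brinkman model that this upscaling procedure builds on is commonly written as a single momentum equation (a standard textbook form; the exact formulation in the cited SPE 125593 paper may differ in detail):

\[
\nabla p + \mu K^{-1}\mathbf{u} - \tilde{\mu}\,\nabla^{2}\mathbf{u} = \mathbf{f}, \qquad \nabla \cdot \mathbf{u} = 0,
\]

    where \(\mathbf{u}\) is velocity, \(p\) pressure, \(K\) permeability, \(\mu\) the fluid viscosity and \(\tilde{\mu}\) an effective viscosity. In free-flow caves \(K \to \infty\) suppresses the Darcy term and the equation reduces to Stokes flow, while in the porous matrix the Brinkman term \(\tilde{\mu}\nabla^{2}\mathbf{u}\) is negligible and Darcy's law is recovered, which is why one equation can span both flow regimes during upscaling.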

  1. Variational method for inverting the Kohn-Sham procedure

    International Nuclear Information System (INIS)

    Kadantsev, Eugene S.; Stott, M.J.

    2004-01-01

    A procedure based on a variational principle is developed for determining the local Kohn-Sham (KS) potential corresponding to a given ground-state electron density. This procedure is applied to calculate the exchange-correlation part of the effective Kohn-Sham (KS) potential for the neon atom and the methane molecule

  2. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  3. A National Needs Assessment to Identify Technical Procedures in Vascular Surgery for Simulation Based Training

    DEFF Research Database (Denmark)

    Nayahangan, L J; Konge, L; Schroeder, T V

    2017-01-01

    Objectives and background Practical skills training in vascular surgery is facing challenges because of an increased number of endovascular procedures and fewer open procedures, as well as a move away from the traditional principle of “learning by doing.” This change has established simulation...... to identify technical procedures that vascular surgeons should learn. Round 2 was a survey that used a needs assessment formula to explore the frequency of procedures, the number of surgeons performing each procedure, risk and/or discomfort, and feasibility for simulation based training. Round 3 involved...... eliminated, resulting in a final prioritised list of 19 technical procedures. Conclusion A national needs assessment using a standardised Delphi method identified a list of procedures that are highly suitable and may provide the basis for future simulation based training programs for vascular surgeons......

  4. Applying computer-based procedures in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Mauro V. de; Carvalho, Paulo V.R. de; Santos, Isaac J.A.L. dos; Grecco, Claudio H.S. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Div. de Instrumentacao e Confiabilidade Humana], e-mail: mvitor@ien.gov.br, e-mail: paulov@ien.gov.br, e-mail: luquetti@ien.gov.br, e-mail: grecco@ien.gov.br; Bruno, Diego S. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Escola Politecnica. Curso de Engenharia de Controle e Automacao], e-mail: diegosalomonebruno@gmail.com

    2009-07-01

    Plant operation procedures are used to guide operators in coping with normal, abnormal or emergency situations in a process control system. Historically, plant procedures have been paper-based (PBPs); with the digitalisation trend in these complex systems, computer-based procedures (CBPs) are being developed to support procedure use. This work briefly presents the research on CBPs at the Human-System Interface Laboratory (LABIHS). The emergency operating procedure EOP-0 of the LABIHS NPP simulator was implemented in the ImPRO CBP system, which was chosen for the test because it is freely available for download on the Internet. A preliminary operation test using the implemented procedure in the CBP system was carried out, and the results were compared to operation using PBPs. (author)

  5. Office-based procedures for diagnosis and treatment of esophageal pathology.

    Science.gov (United States)

    Wellenstein, David J; Schutte, Henrieke W; Marres, Henri A M; Honings, Jimmie; Belafsky, Peter C; Postma, Gregory N; Takes, Robert P; van den Broek, Guido B

    2017-09-01

    Diagnostic and therapeutic office-based procedures under topical anesthesia are emerging in the daily practice of laryngologists and head and neck surgeons. Since the introduction of the transnasal esophagoscope, office-based procedures for the esophagus are increasingly performed. We conducted a systematic review of literature on office-based procedures under topical anesthesia for the esophagus. Transnasal esophagoscopy is an extensively investigated office-based procedure. This procedure shows better patient tolerability and equivalent accuracy compared to conventional transoral esophagoscopy, as well as time and cost savings. Secondary tracheoesophageal puncture, esophageal dilatation, esophageal sphincter injection, and foreign body removal are less investigated, but show promising results. With the introduction of the transnasal esophagoscope, an increasing number of diagnostic and therapeutic office-based procedures for the esophagus are possible, with multiple advantages. Further investigation must prove the clinical feasibility and effectiveness of the therapeutic office-based procedures. © 2017 Wiley Periodicals, Inc.

  6. Evaluation of Revised Computer-Based Procedure System Prototype

    Energy Technology Data Exchange (ETDEWEB)

    Katya Le Blanc; Johanna Oxstrand; Cheradan Fikstad

    2013-01-01

    The nuclear power industry is very procedure driven, i.e. almost all activities that take place at a nuclear power plant are conducted by following procedures. The paper-based procedures (PBPs) currently used by the industry do a good job of keeping the industry safe. However, these procedures are most often paired with methods and tools put in place to anticipate, prevent, and catch errors related to hands-on work. These tools are commonly called human performance tools. The drawback of the current implementation of these tools is that the task of performing one procedure becomes time and labor intensive. For example, concurrent and independent verification of procedure steps is required at times, which essentially means that at least two people have to be actively involved in the task. Even though the current use of PBPs and human performance tools is keeping the industry safe, there is room for improvement. The industry could potentially increase its efficiency and safety by replacing its existing PBPs with CBPs. If implemented correctly, a CBP system could reduce the time and focus spent on using the human performance tools. Some of the tools, such as procedure use and adherence, placekeeping, and peer checks, can be incorporated in the CBP system so seamlessly that the performer is not even aware they are being used. Other tools, such as concurrent and independent verification, can be partly integrated in a fashion that reduces the time and labor they require. The incorporation of advanced technology, such as CBP systems, may help to manage the effects of aging systems, structures, and components. The introduction of advanced technology may also make the existing LWR fleet more attractive to the future workforce, which will be important when that workforce chooses between the existing fleet and newly built nuclear power plants.

  7. THE EFFECT OF INQUIRY BASED LEARNING ON THE PROCEDURAL KNOWLEDGE DIMENSION ABOUT ELECTRIC AND MAGNET CONCEPT

    Directory of Open Access Journals (Sweden)

    Y. Yusrizal

    2017-11-01

    Full Text Available The purpose of this study was to determine the effect of inquiry-based learning on the procedural knowledge dimension for electricity and magnetism material. The study used a quasi-experimental research method with a non-equivalent control group design; the sample was selected by random sampling. The experimental group was taught by the inquiry-based learning method and the control group was taught by conventional methods. Data were collected using a multiple-choice test instrument developed through this research, whose validity was rated valid, reliability reliable, index of discrimination low, and level of difficulty medium. Analysis of the data using the N-Gain formula and t-tests showed a significant increase in the procedural knowledge dimension for the experimental class and a less significant increase for the control class. Based on these results, teachers are advised to use the inquiry learning method to increase the procedural knowledge dimension, especially for topics related to experimental physics.
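    The N-Gain statistic named above is conventionally Hake's normalized gain; a sketch with illustrative scores (not the study's data) is:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain <g> = (post - pre) / (100 - pre), with scores
    in percent: the fraction of the possible improvement actually achieved."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Hypothetical class averages: 40% on the pretest, 70% on the posttest.
print(round(normalized_gain(40.0, 70.0), 2))  # → 0.5, a "medium" gain
```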

  8. 20 CFR 361.13 - Procedures for salary offset: Methods of collection.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Procedures for salary offset: Methods of collection. 361.13 Section 361.13 Employees' Benefits RAILROAD RETIREMENT BOARD INTERNAL ADMINISTRATION... § 361.13 Procedures for salary offset: Methods of collection. (a) General. A debt will be collected by...

  9. 7 CFR 400.138 - Procedures for salary offset; methods of collection.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 6 2010-01-01 2010-01-01 false Procedures for salary offset; methods of collection. 400.138 Section 400.138 Agriculture Regulations of the Department of Agriculture (Continued) FEDERAL... Management-Regulations for the 1986 and Succeeding Crop Years § 400.138 Procedures for salary offset; methods...

  10. 24 CFR 17.136 - Procedures for salary offset: methods of collection.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Procedures for salary offset: methods of collection. 17.136 Section 17.136 Housing and Urban Development Office of the Secretary... the Government Salary Offset Provisions § 17.136 Procedures for salary offset: methods of collection...

  11. A Method of Separation Assurance for Instrument Flight Procedures at Non-Radar Airports

    Science.gov (United States)

    Conway, Sheila R.; Consiglio, Maria

    2002-01-01

    A method to provide automated air traffic separation assurance services during approach to or departure from a non-radar, non-towered airport environment is described. The method is constrained by provision of these services without radical changes or ambitious investments in current ground-based technologies. The proposed procedures are designed to grant access to a large number of airfields that currently have no or very limited access under Instrument Flight Rules (IFR), thus increasing mobility with minimal infrastructure investment. This paper primarily addresses a low-cost option for airport and instrument approach infrastructure, but is designed to be an architecture from which a more efficient, albeit more complex, system may be developed. A functional description of the capabilities in the current NAS infrastructure is provided. Automated terminal operations and procedures are introduced. Rules of engagement and the operations are defined. Results of preliminary simulation testing are presented. Finally, application of the method to more terminal-like operations, and major research areas, including necessary piloted studies, are discussed.

  13. 7 CFR 3.83 - Procedures for salary offset: methods of collection.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false Procedures for salary offset: methods of collection. 3.83 Section 3.83 Agriculture Office of the Secretary of Agriculture DEBT MANAGEMENT Federal Salary Offset § 3.83 Procedures for salary offset: methods of collection. (a) General. A debt will be collected...

  14. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Katya Le Blanc; Johanna Oxstrand

    2012-04-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room, mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures: field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

  15. New Procedure for Compacting Prismatic Specimens of Cement-Treated Base Materials

    Directory of Open Access Journals (Sweden)

    Alaitz Linares-Unamunzaga

    2018-06-01

    Full Text Available Understanding the long-term behaviour of cement-treated base materials is a key factor in improving their design and obtaining environmentally friendly pavement base materials. Their characterization requires manufacturing prismatic specimens. However, various authors highlight the absence of standardized test methods for fabricating beams in the field and laboratory, which is not an easy task because it depends on the qualification and experience of the testing team. The aim of this paper is to present a new device and procedure for compacting prismatic specimens of cement-treated base materials. In this research, it was used for compacting soil-cement to simulate its performance as a road base material. The device employs elements that are generally available in a concrete testing laboratory, such as a vibrating table and prismatic moulds. Once the procedure was established, and in order to verify its suitability, flexural and compressive strength tests were carried out. The results showed that the values obtained were consistent with this material and that, despite the heterogeneity of the material, specimens from the same batch provided similar results, thus validating the compaction process. This new compacting procedure can improve understanding of the long-term performance of cement-treated materials from flexural and fatigue tests.

  16. THE PROCEDURE OF REALIZATION OF THE DIDACTIC PRINCIPLE OF VISUAL METHOD IN AN EDUCATIONAL LABORATORY

    Directory of Open Access Journals (Sweden)

    Anatolii H. Protasov

    2010-08-01

    Full Text Available This paper is devoted to the procedure for realizing the main didactic principle, the use of visual methods, which is an essential factor in students' perception of educational material. The procedure is realized through a series of laboratory works based on the principle "device-computer-software". Transducers that convert a physical magnitude into an electrical signal are used in the laboratory works. The combination of these transducers and a computer forms a device that can measure a physical magnitude. The software reconstructs a virtual field distribution of this magnitude over an area and lets students observe its history. MATLAB is used as the software and provides computation of different physical processes. The proposed procedure supports both direct and indirect visual methods, which promotes the formation of future specialists' professional competence.

  17. Procedure Redesign Methods : E3-Control: a redesign methodology for control procedures

    NARCIS (Netherlands)

    Liu, J.; Hofman, W.J.; Tan, Y.H.

    2011-01-01

    This chapter highlights the core research methodology, e3-control, that is applied throughout the ITAIDE project for the purpose of control procedure redesign. We present the key concept of the e3-control methodology and its technical guidelines. Based on the output of this chapter, domain experts

  18. Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses

    Science.gov (United States)

    Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo

    2018-01-01

    Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…

  19. Risk assessment for pipelines with active defects based on artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Anghel, Calin I. [Department of Chemical Engineering, Faculty of Chemistry and Chemical Engineering, University ' Babes-Bolyai' , Cluj-Napoca (Romania)], E-mail: canghel@chem.ubbcluj.ro

    2009-07-15

    The paper provides another insight into pipeline risk assessment for in-service pressure piping containing defects. Besides the traditional analytical approximation methods and sampling-based methods, the safety index and failure probability of pressure piping containing defects are obtained with a novel type of support vector machine developed in a minimax manner. The safety index or failure probability is computed through a binary classification approach. The procedure, named the classification reliability procedure, which links artificial intelligence and reliability methods, was developed as a user-friendly computer program in the MATLAB language. To demonstrate the capacity of the proposed procedure, two comparative numerical examples, replicating a previous related work and predicting the failure probabilities of pressurized pipelines with defects, are presented.

  20. Risk assessment for pipelines with active defects based on artificial intelligence methods

    International Nuclear Information System (INIS)

    Anghel, Calin I.

    2009-01-01

    The paper provides another insight into pipeline risk assessment for in-service pressure piping containing defects. Besides the traditional analytical approximation methods and sampling-based methods, the safety index and failure probability of pressure piping containing defects are obtained with a novel type of support vector machine developed in a minimax manner. The safety index or failure probability is computed through a binary classification approach. The procedure, named the classification reliability procedure, which links artificial intelligence and reliability methods, was developed as a user-friendly computer program in the MATLAB language. To demonstrate the capacity of the proposed procedure, two comparative numerical examples, replicating a previous related work and predicting the failure probabilities of pressurized pipelines with defects, are presented.
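
    The abstract above contrasts the proposed classifier with traditional sampling-based reliability methods. As a minimal sketch of that sampling baseline (not the authors' support vector machine), the following estimates a failure probability and safety index by Monte Carlo simulation of a generic limit state g = R - S; the distributions and their parameters are purely illustrative assumptions, not values from the paper.

```python
import random
from statistics import NormalDist

def failure_probability(n_samples=200_000, seed=42):
    """Crude Monte Carlo estimate of a failure probability.

    Limit state g = R - S: failure occurs when the (uncertain) burst
    resistance R of the defective pipe falls below the operating
    stress demand S. Both distributions are hypothetical.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        r = rng.gauss(450.0, 45.0)   # resistance, e.g. MPa (hypothetical)
        s = rng.gauss(300.0, 30.0)   # demand, e.g. MPa (hypothetical)
        if r - s < 0.0:
            failures += 1
    pf = failures / n_samples
    # Safety index via the standard-normal inverse CDF.
    beta = -NormalDist().inv_cdf(pf) if 0 < pf < 1 else float("inf")
    return pf, beta

pf, beta = failure_probability()
print(f"Pf ~ {pf:.4f}, safety index beta ~ {beta:.2f}")
```

    The classification approach in the paper replaces this brute-force sampling with a trained decision boundary between safe and failed states, which is cheaper when each limit-state evaluation is expensive.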

  1. A National Needs Assessment to Identify Technical Procedures in Vascular Surgery for Simulation Based Training.

    Science.gov (United States)

    Nayahangan, L J; Konge, L; Schroeder, T V; Paltved, C; Lindorff-Larsen, K G; Nielsen, B U; Eiberg, J P

    2017-04-01

    Practical skills training in vascular surgery is facing challenges because of an increased number of endovascular procedures and fewer open procedures, as well as a move away from the traditional principle of "learning by doing." This change has established simulation as a cornerstone in providing trainees with the necessary skills and competences. However, the development of simulation based programs often evolves based on available resources and equipment, reflecting convenience rather than a systematic educational plan. The objective of the present study was to perform a national needs assessment to identify the technical procedures that should be integrated in a simulation based curriculum. A national needs assessment using a Delphi process was initiated by engaging 33 predefined key persons in vascular surgery. Round 1 was a brainstorming phase to identify technical procedures that vascular surgeons should learn. Round 2 was a survey that used a needs assessment formula to explore the frequency of procedures, the number of surgeons performing each procedure, risk and/or discomfort, and feasibility for simulation based training. Round 3 involved elimination and ranking of procedures. The response rate for round 1 was 70%, with 36 procedures identified. Round 2 had a 76% response rate and resulted in a preliminary prioritised list after exploring the need for simulation based training. Round 3 had an 85% response rate; 17 procedures were eliminated, resulting in a final prioritised list of 19 technical procedures. A national needs assessment using a standardised Delphi method identified a list of procedures that are highly suitable and may provide the basis for future simulation based training programs for vascular surgeons in training. Copyright © 2017 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
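
    The abstract above mentions a "needs assessment formula" combining procedure frequency, the number of surgeons performing each procedure, risk/discomfort, and feasibility for simulation, but does not publish its form. The sketch below assumes a simple multiplicative score over survey ratings purely for illustration; the procedure names and ratings are hypothetical.

```python
def priority_score(freq, n_surgeons, risk, feasibility):
    """Hypothetical multiplicative needs-assessment score.

    Each factor is a 1-5 survey rating: how often the procedure is
    performed, how many surgeons perform it, patient risk/discomfort
    when performed by an inexperienced surgeon, and suitability for
    simulation. The multiplicative form is an assumption.
    """
    return freq * n_surgeons * risk * feasibility

# Hypothetical ratings (frequency, surgeons, risk, feasibility).
procedures = {
    "open aneurysm repair": (2, 3, 5, 4),
    "basic endovascular skills": (5, 5, 4, 5),
    "carotid endarterectomy": (3, 3, 5, 3),
}
ranked = sorted(procedures, key=lambda p: priority_score(*procedures[p]),
                reverse=True)
print(ranked)
```

    In the Delphi process such scores would only produce the preliminary prioritised list of round 2; round 3 still eliminates and re-ranks procedures by expert consensus.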

  2. Human Reliability Analysis For Computerized Procedures

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Gertman, David I.; Le Blanc, Katya

    2011-01-01

    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  3. Beam-Based Procedures for RF Guns

    CERN Document Server

    Krasilnikov, Mikhail; Grabosch, H J; Hartrott, Michael; Hui Han, Jang; Miltchev, Velizar; Oppelt, Anne; Petrosyan, Bagrat; Staykov, Lazar; Stephan, Frank

    2005-01-01

    A wide range of rf photo injector parameters has to be optimized in order to achieve the electron source performance required for linac based high gain FELs. Some of the machine parameters cannot be precisely controlled by direct measurements, whereas the tolerances on them are extremely tight; therefore, they must be set with beam-based techniques. Procedures for beam-based alignment (BBA) of the laser on the photo cathode as well as solenoid alignment have been developed. They were applied at the Photo Injector Test facility at DESY Zeuthen (PITZ) and at the photo injector of the VUV-FEL at DESY Hamburg. The field balance of the accelerating mode in the 1 ½ cell gun cavity is one of the key beam dynamics issues of the rf gun. Since no direct field measurement in the half and full cell of the cavity is available for the PITZ gun, a beam-based technique to determine the field balance has been proposed. A beam-based rf phase monitoring procedure has been developed as well.

  4. Methods and procedures of succession of generations

    International Nuclear Information System (INIS)

    Homann, A.; Bendzko, R.

    2001-01-01

    The presentation describes the methods and procedures of the succession of generations in the nuclear industry. Industrial development requires specialised knowledge and creativity at a changing level. The relationship between knowledge transfer and the transfer of responsibility must be taken into account. Knowledge transfer has to be planned as an investment. (authors)

  5. The Effect of the Extinction Procedure in Function-Based Intervention

    Science.gov (United States)

    Janney, Donna M.; Umbreit, John; Ferro, Jolenea B.; Liaupsin, Carl J.; Lane, Kathleen L.

    2013-01-01

    In this study, we examined the contribution of the extinction procedure in function-based interventions implemented in the general education classrooms of three at-risk elementary-aged students. Function-based interventions included antecedent adjustments, reinforcement procedures, and function-matched extinction procedures. Using a combined ABC…

  6. Estimating the operator's performance time of emergency procedural tasks based on a task complexity measure

    International Nuclear Information System (INIS)

    Jung, Won Dae; Park, Jink Yun

    2012-01-01

    It is important to understand the amount of time required to execute an emergency procedural task in a high-stress situation for managing human performance under emergencies in a nuclear power plant. However, the time to execute an emergency procedural task is highly dependent upon expert judgment due to the lack of actual data. This paper proposes an analytical method to estimate the operator's performance time (OPT) of a procedural task, which is based on a measure of the task complexity (TACOM). The proposed method for estimating an OPT is an equation that uses the TACOM as a variable, and the OPT of a procedural task can be calculated if its relevant TACOM score is available. The validity of the proposed equation is demonstrated by comparing the estimated OPTs with the observed OPTs for emergency procedural tasks in a steam generator tube rupture scenario.
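
    The abstract above states that the operator's performance time (OPT) is estimated from an equation with the TACOM score as its variable, but the abstract does not give the equation. The sketch below assumes a log-linear regression form with illustrative coefficients; both the functional form and the numbers A, B are hypothetical stand-ins for the paper's fitted equation.

```python
import math

# Hypothetical log-linear fit: ln(OPT) = A * TACOM + B.
# The paper derives a regression of this general kind from observed
# performance times; these coefficients are illustrative only.
A, B = 1.0, 0.5

def estimate_opt(tacom_score):
    """Estimated operator performance time (seconds) for an emergency
    procedural task with the given TACOM complexity score."""
    return math.exp(A * tacom_score + B)

for score in (3.0, 4.0, 5.0):
    print(f"TACOM {score:.1f} -> OPT ~ {estimate_opt(score):.0f} s")
```

    The practical point of the method survives the hedging: once a task's TACOM score is computed from the procedure text, a single closed-form evaluation replaces expert judgment of its execution time.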

  7. qPR: An adaptive partial-report procedure based on Bayesian inference.

    Science.gov (United States)

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-08-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6-8 cue delays or 600-800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations.
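
    The loop the abstract describes (pick the most informative cue delay, observe a response, update the posterior by Bayes' rule, repeat) can be sketched as a grid-based simulation. The exponential-decay parameterization, the parameter grid, and the simulated observer below are illustrative assumptions, not the paper's exact implementation.

```python
import math, random

def pc(t, a0, a1, tau):
    """Probability correct at cue delay t (ms) under an exponential
    iconic-memory decay: initial level a1 decaying to asymptote a0
    with time constant tau. An illustrative stand-in for the model."""
    return a0 + (a1 - a0) * math.exp(-t / tau)

# Coarse joint parameter grid with a uniform prior over it.
grid = [(a0, a1, tau)
        for a0 in (0.3, 0.4, 0.5)
        for a1 in (0.8, 0.9, 0.95)
        for tau in (100.0, 200.0, 400.0)]
post = {g: 1.0 / len(grid) for g in grid}
delays = (0.0, 50.0, 100.0, 200.0, 400.0, 800.0)

def entropy(p):
    return -sum(w * math.log(w) for w in p.values() if w > 0)

def expected_entropy(post, t):
    """Expected posterior entropy after one trial at delay t."""
    total = 0.0
    for correct in (True, False):
        upd = {g: w * (pc(t, *g) if correct else 1 - pc(t, *g))
               for g, w in post.items()}
        z = sum(upd.values())
        if z > 0:
            total += z * entropy({g: w / z for g, w in upd.items()})
    return total

rng = random.Random(1)
truth = (0.4, 0.9, 200.0)           # simulated observer's parameters
for _ in range(100):
    # Greedy delay choice: minimize expected posterior entropy,
    # i.e. maximize expected information gain.
    t = min(delays, key=lambda d: expected_entropy(post, d))
    correct = rng.random() < pc(t, *truth)
    for g in post:                  # Bayes update on the response
        post[g] *= pc(t, *g) if correct else 1 - pc(t, *g)
    z = sum(post.values())
    post = {g: w / z for g, w in post.items()}

best = max(post, key=post.get)
print("MAP estimate after 100 trials:", best)
```

    The paper's stopping rule (trial count or parameter precision) and its continuous parameter space are richer than this toy grid, but the information-gain selection step is the same idea.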

  8. Practical procedure for method validation in INAA- A tutorial

    International Nuclear Information System (INIS)

    Petroni, Robson; Moreira, Edson G.

    2015-01-01

    This paper describes the procedure employed by the Neutron Activation Laboratory at the Nuclear and Energy Research Institute (LAN, IPEN - CNEN/SP) for validation of Instrumental Neutron Activation Analysis (INAA) methods. Following the recommendations of ISO/IEC 17025, the method performance characteristics (limit of detection, limit of quantification, trueness, repeatability, intermediate precision, reproducibility, selectivity, linearity and uncertainty budget) were outlined in an easy, fast and convenient way. The paper presents, step by step, how to calculate the required method performance characteristics in a process of method validation, together with the procedures, adopted strategies and acceptance criteria for the results; that is, how to perform a method validation in INAA. In order to exemplify the methodology applied, results for the method validation of the mass fraction determination of Co, Cr, Fe, Rb, Se and Zn in biological matrix samples, using an internal reference material of mussel tissue, are presented. It was concluded that the methodology applied for validation of INAA methods is suitable, meeting all the requirements of ISO/IEC 17025 and thereby generating satisfactory results for the studies carried out at LAN, IPEN - CNEN/SP. (author)

  9. Practical procedure for method validation in INAA- A tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Robson; Moreira, Edson G., E-mail: robsonpetroni@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    This paper describes the procedure employed by the Neutron Activation Laboratory at the Nuclear and Energy Research Institute (LAN, IPEN - CNEN/SP) for validation of Instrumental Neutron Activation Analysis (INAA) methods. Following the recommendations of ISO/IEC 17025, the method performance characteristics (limit of detection, limit of quantification, trueness, repeatability, intermediate precision, reproducibility, selectivity, linearity and uncertainty budget) were outlined in an easy, fast and convenient way. The paper presents, step by step, how to calculate the required method performance characteristics in a process of method validation, together with the procedures, adopted strategies and acceptance criteria for the results; that is, how to perform a method validation in INAA. In order to exemplify the methodology applied, results for the method validation of the mass fraction determination of Co, Cr, Fe, Rb, Se and Zn in biological matrix samples, using an internal reference material of mussel tissue, are presented. It was concluded that the methodology applied for validation of INAA methods is suitable, meeting all the requirements of ISO/IEC 17025 and thereby generating satisfactory results for the studies carried out at LAN, IPEN - CNEN/SP. (author)
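
    A few of the performance characteristics listed in the abstract can be computed directly from replicate measurements of a reference material. The sketch below shows relative bias (trueness), repeatability (relative standard deviation), and an En-style score; the replicate values and certified value are invented for illustration, and acceptance criteria remain laboratory-specific.

```python
from statistics import mean, stdev

def validation_metrics(replicates, certified, u_certified):
    """Relative bias (%), repeatability RSD (%), and an En-style score
    from replicate measurements against a certified reference value
    with standard uncertainty u_certified. Illustrative only."""
    m = mean(replicates)
    s = stdev(replicates)
    rel_bias = 100.0 * (m - certified) / certified
    rsd = 100.0 * s / m
    u_measured = s / len(replicates) ** 0.5   # standard error of the mean
    en = (m - certified) / (u_measured**2 + u_certified**2) ** 0.5
    return rel_bias, rsd, en

# Hypothetical Zn mass fractions (mg/kg) in a mussel-tissue reference
# material; certified value and uncertainty are also hypothetical.
reps = [91.2, 93.8, 92.5, 90.9, 94.1, 92.0]
bias, rsd, en = validation_metrics(reps, certified=92.0, u_certified=1.5)
print(f"bias {bias:+.2f}%  RSD {rsd:.2f}%  En {en:+.2f}")
```

    An |En| below 1 is the usual indication that the result agrees with the certified value within the combined uncertainties.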

  10. A Survey of Procedural Methods for Terrain Modelling

    NARCIS (Netherlands)

    Smelik, R.M.; Kraker, J.K. de; Groenewegen, S.A.; Tutenel, T.; Bidarra, R.

    2009-01-01

    Procedural methods are a promising but underused alternative to manual content creation. Commonly heard drawbacks are the randomness of and the lack of control over the output and the absence of integrated solutions, although more recent publications increasingly address these issues. This paper

  11. Measuring fuel moisture content in Alaska: standard methods and procedures.

    Science.gov (United States)

    Rodney A. Norum; Melanie. Miller

    1984-01-01

    Methods and procedures are given for collecting and processing living and dead plant materials for the purpose of determining their water content. Wild-land fuels in Alaska are emphasized, but the methodology is applicable elsewhere. Guides are given for determining the number of samples needed to attain a chosen precision. Detailed procedures are presented for...
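
    The two computations the abstract mentions, water content of a fuel sample and the number of samples needed for a chosen precision, have standard textbook forms, sketched below. The pilot weights are invented; the dry-weight-basis moisture definition and the n = (t*s/E)^2 sample-size rule are standard, though the report's exact tables may differ.

```python
import math
from statistics import stdev

def moisture_content(wet_g, dry_g):
    """Gravimetric fuel moisture content, dry-weight basis (percent):
    the standard definition used for wildland fuels."""
    return 100.0 * (wet_g - dry_g) / dry_g

def samples_needed(pilot_values, allowable_error, t_value=2.0):
    """Rough sample size to reach the chosen precision: n = (t*s/E)^2,
    with s estimated from a pilot set and E the allowable error in
    percent moisture. t_value ~ 2 approximates the 95% level."""
    s = stdev(pilot_values)
    return math.ceil((t_value * s / allowable_error) ** 2)

# Hypothetical (wet, dry) weights in grams for four pilot samples.
pilot = [moisture_content(w, d) for w, d in
         [(12.4, 10.1), (11.9, 9.8), (13.0, 10.5), (12.2, 10.0)]]
print([round(m, 1) for m in pilot], "-> n =", samples_needed(pilot, 2.0))
```
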

  12. Design Guidance for Computer-Based Procedures for Field Workers

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna [Idaho National Lab. (INL), Idaho Falls, ID (United States); Le Blanc, Katya [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bly, Aaron [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    Nearly all activities that involve human interaction with nuclear power plant systems are guided by procedures, instructions, or checklists. Paper-based procedures (PBPs) currently used by most utilities have a demonstrated history of ensuring safety; however, improving procedure use could yield significant savings in increased efficiency, as well as improved safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease human error rates, especially human error rates associated with procedure use. As a step toward the goal of improving field workers’ procedure use and adherence, and hence improving human performance and overall system reliability, the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with computer-based procedures (CBPs). PBPs have ensured safe operation of plants for decades, but limitations in paper-based systems do not allow them to reach the full potential for procedures to prevent human errors. The environment in a nuclear power plant is constantly changing, depending on current plant status and operating mode. PBPs, which are static by nature, are being applied to a constantly changing context. This constraint often results in PBPs that are written in a manner intended to cover many potential operating scenarios. Hence, the procedure layout forces the operator to search through a large amount of irrelevant information to locate the pieces of information relevant for the task and situation at hand, which can take up valuable time when operators must respond to the situation and can lead operators down an incorrect response path. Other challenges related to the use of PBPs are management of multiple procedures, place-keeping, finding the correct procedure for a task, and relying

  13. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan; Lau, Cheryl; Mü ller, Pascal; Wonka, Peter; Pauly, Mark

    2017-01-01

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  14. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan

    2017-05-24

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.
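
    To make the idea of rule switching concrete, here is a toy sketch, far simpler than the shape grammars in the paper: two small derivation grammars for a facade "design", where each nonterminal expansion may borrow its production from either grammar, mixing the two designs. The grammars, symbols, and switching policy are all invented for illustration.

```python
import random

# Two toy grammars, each mapping a nonterminal to one production.
grammar_a = {"Facade": ["Floor", "Floor", "Roof"],
             "Floor": ["win", "door", "win"],
             "Roof": ["gable"]}
grammar_b = {"Facade": ["Floor", "Roof"],
             "Floor": ["win", "win", "win", "win"],
             "Roof": ["flat"]}

def derive(symbol, grammars, rng):
    """Expand a symbol, choosing per expansion which grammar's rule to
    apply -- a crude form of rule switching that blends the designs."""
    if all(symbol not in g for g in grammars):
        return [symbol]  # terminal symbol
    g = rng.choice([g for g in grammars if symbol in g])
    out = []
    for child in g[symbol]:
        out.extend(derive(child, grammars, rng))
    return out

rng = random.Random(7)
print(derive("Facade", (grammar_a, grammar_b), rng))
```

    The paper's rule merging and grammar co-derivation are much more structured than this per-expansion coin flip, but the sketch shows why combining two grammars yields a shape space larger than either grammar alone.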

  15. Procedural Media Representation

    OpenAIRE

    Henrysson, Anders

    2002-01-01

    We present a concept for using procedural techniques to represent media. Procedural methods allow us to represent digital media (2D images, 3D environments, etc.) with very little information and to render it photorealistically. Since not all kinds of content can be created procedurally, traditional media representations (bitmaps, polygons, etc.) must be used as well. We have adopted an object-based media representation where an object can be represented either with a procedure or with its trad...

  16. Parallel iterative procedures for approximate solutions of wave propagation by finite element and finite difference methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. [Purdue Univ., West Lafayette, IN (United States)

    1994-12-31

    Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way of choosing the algorithm parameter, as well as the convergence of the algorithm, is indicated. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.
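
    As a minimal sketch of the domain decomposition idea (on a static model problem rather than the paper's wave equation), the following solves u'' = 2 on (0, 1) with zero boundary values by alternating Schwarz iteration over two overlapping subdomains, each solved exactly with the Thomas algorithm. The grid, subdomain split, and iteration count are illustrative choices.

```python
def solve_tridiag(d, left, right):
    """Solve u[i-1] - 2*u[i] + u[i+1] = d[i] on len(d) interior points
    with Dirichlet boundary values left/right (Thomas algorithm)."""
    n = len(d)
    d = d[:]
    d[0] -= left
    d[-1] -= right
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = -0.5, d[0] / -2.0
    for i in range(1, n):          # forward elimination
        denom = -2.0 - cp[i - 1]
        cp[i] = 1.0 / denom
        dp[i] = (d[i] - dp[i - 1]) / denom
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# Model problem: u'' = 2 on (0, 1), u(0) = u(1) = 0, exact u = x^2 - x.
N, f = 19, 2.0                      # interior grid points, constant source
h = 1.0 / (N + 1)
u = [0.0] * (N + 2)                 # global solution, u[0] = u[20] = 0
sub1 = list(range(1, 13))           # subdomain 1: interior indices 1..12
sub2 = list(range(8, 20))           # subdomain 2: indices 8..19 (overlap 8..12)
for _ in range(30):                 # alternating Schwarz sweeps
    # Solve each subdomain with the latest values at the artificial
    # interface points as Dirichlet data.
    v = solve_tridiag([h * h * f] * len(sub1), u[0], u[13])
    for k, i in enumerate(sub1):
        u[i] = v[k]
    v = solve_tridiag([h * h * f] * len(sub2), u[7], u[20])
    for k, i in enumerate(sub2):
        u[i] = v[k]
print(f"u(0.5) ~ {u[10]:.6f} (exact -0.25)")
```

    In the parallel versions analyzed in the paper, all subdomains are solved concurrently per sweep instead of alternately, and the algorithm parameter governs the interface transmission conditions.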

  17. Issues involved in a knowledge-based approach to procedure synthesis

    International Nuclear Information System (INIS)

    Hajek, B.K.; Khartabil, L.F.; Miller, D.W.

    1992-01-01

    Many knowledge-based systems (KBSs) have been built to assist human operators in managing nuclear power plant operating functions, such as monitoring, fault diagnosis, alarm filtering, and procedure management. For procedure management, KBSs have been built to display and track existing written procedures or to dynamically follow procedure execution by monitoring plant data and action execution and suggesting recovery steps. More recent works build KBSs able to synthesize procedures. This paper addresses and examines the main issues related to the implementation of on-line procedure synthesis using KBSs. A KBS for procedure synthesis can provide a more robust and effective procedural plan during accidents. Currently existing procedures for abnormal plant conditions, written as precompiled step sets based on the event and symptom approaches, are inherently not robust because anticipation of all potential plant states and associated plant responses is not possible. Thus, their failure recovery capability is limited to the precompiled set. Procedure synthesis has the potential to overcome these two problems because it does not require such precompilation of large sets of plant states and associated recovery procedures. Other benefits obtained from a complete procedure synthesis system are providing (a) a methodology for off-line procedure verification and (b) a methodology for the eventual automation of plant operations

  18. An Innovative Adaptive Pushover Procedure Based on Storey Shear

    International Nuclear Information System (INIS)

    Shakeri, Kazem; Shayanfar, Mohsen A.

    2008-01-01

    Since conventional pushover analyses are unable to consider the effect of the higher modes and the progressive variation in dynamic properties, recent years have witnessed the development of advanced adaptive pushover methods. However, in these methods, using quadratic combination rules to combine the modal forces results in positive load-pattern values at all storeys, and the sign reversals of the higher modes are removed; consequently, these methods do not have a major advantage over their non-adaptive counterparts. Herein, an innovative adaptive pushover method based on storey shear is proposed which can take the sign reversals of the higher modes into account. In each storey the applied load pattern is derived from the storey shear profile; consequently, the sign of the applied loads in consecutive steps can change. The accuracy of the proposed procedure is examined by applying it to a 20-storey steel building, for which it gives a good estimate of the peak response in the inelastic phase.
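
    A sketch of the core idea may help: compute each mode's storey shear profile, combine the shears (not the forces) by SRSS, then recover the applied load at each storey as the difference of consecutive combined shears. Because the differencing happens after combination, the resulting loads can change sign, preserving higher-mode reversals. The two modal force profiles below are hypothetical, and this static sketch omits the step-by-step adaptive updating of the paper.

```python
import math

def storey_shear_load_pattern(modal_forces):
    """Load pattern from combined storey shears.

    modal_forces[j][i] = lateral force of mode j at storey i
    (storey 1 first, roof last)."""
    n = len(modal_forces[0])
    # Modal storey shears: shear at storey i = sum of that mode's
    # forces at storey i and above.
    shears = [[sum(f[i:]) for i in range(n)] for f in modal_forces]
    # SRSS combination applied to the storey *shears*.
    combined = [math.sqrt(sum(v[i] ** 2 for v in shears))
                for i in range(n)]
    # Applied load = difference of consecutive combined shears.
    return [combined[i] - (combined[i + 1] if i + 1 < n else 0.0)
            for i in range(n)]

# Two modes for a 4-storey frame (hypothetical force profiles; the
# second mode changes sign over the height).
mode1 = [0.4, 0.7, 0.9, 1.0]
mode2 = [0.5, -0.1, -1.6, 1.8]
pattern = storey_shear_load_pattern([mode1, mode2])
print([round(p, 3) for p in pattern])
```

    Note two properties: the load at storey 3 comes out negative here (a higher-mode reversal a quadratic combination of forces would erase), and the loads telescope back to the SRSS base shear, since summing the pattern recovers combined[0].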

  19. Method Validation Procedure in Gamma Spectroscopy Laboratory

    International Nuclear Information System (INIS)

    El Samad, O.; Baydoun, R.

    2008-01-01

    The present work describes the methodology followed for the application of the ISO 17025 standard in the gamma spectroscopy laboratory at the Lebanese Atomic Energy Commission, including the management and technical requirements. A set of documents, written procedures and records was prepared to achieve the management part. For the technical requirements, internal method validation was applied through the estimation of trueness, repeatability, minimum detectable activity and combined uncertainty. Participation in IAEA proficiency tests assures the external method validation, especially since the gamma spectroscopy laboratory is a member of the ALMERA network (Analytical Laboratories for the Measurement of Environmental Radioactivity). Some of these results are presented in this paper. (author)
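
    Of the characteristics listed above, the minimum detectable activity has a well-known closed form. The sketch below uses the standard Currie-style expression; the detector efficiency, emission probability, background counts, and count time are hypothetical, and the laboratory's exact recipe may differ in detail.

```python
import math

def currie_mda(bkg_counts, efficiency, emission_prob, live_time_s):
    """Currie-style minimum detectable activity (Bq) for a gamma line:
    L_D = 2.71 + 4.65*sqrt(B) counts, converted to activity with the
    full-energy-peak efficiency, gamma emission probability, and
    counting live time. A standard textbook form."""
    ld_counts = 2.71 + 4.65 * math.sqrt(bkg_counts)
    return ld_counts / (efficiency * emission_prob * live_time_s)

# Hypothetical Cs-137 661.7 keV line: 400 background counts under the
# peak region, 2% efficiency, P_gamma = 0.851, 10,000 s count.
print(f"MDA ~ {currie_mda(400, 0.02, 0.851, 10_000):.3f} Bq")
```
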

  20. An automatic optimum number of well-distributed ground control lines selection procedure based on genetic algorithm

    Science.gov (United States)

    Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram

    2018-05-01

    The procedure for selecting an optimum number and best distribution of ground control information is important in order to reach accurate and robust registration results. This paper proposes a new general procedure based on a Genetic Algorithm (GA) which is applicable to all kinds of features (point, line, and areal features); however, linear features, due to their unique characteristics, are of particular interest in this investigation. The method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. Ones indicate the presence of a pair of conjugate lines as a GCL and zeros specify their absence. The chromosome length is set equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (the ones in that chromosome). A limited number of Check Points (CPs) are then used to evaluate the Root Mean Square Error (RMSE) of each chromosome as its fitness value. The procedure continues until a stopping criterion is reached. The number and positions of the ones in the best chromosome indicate the GCLs selected from among all conjugate lines. To evaluate the proposed method, a GeoEye and an Ikonos image are used over different areas of Iran. Comparing the results obtained by the proposed method in a traditional RFM with those of conventional methods that use all conjugate lines as GCLs shows a fivefold accuracy improvement (pixel-level accuracy) as well as the strength of the proposed method. To prevent an over-parametrization error in a traditional RFM due to the selection of a high number of improper correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of the combination of the proposed OWDIS method with an optimized line-based RFM in terms of increasing the accuracy to better than 0.7 pixel, reliability, and reducing systematic
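
    The GA loop the abstract describes, binary chromosomes selecting a subset of control observations, fitness scored as RMSE on independent check points, can be sketched on a toy problem. Here a line fit with synthetic noisy controls (some deliberate outliers) stands in for the paper's rational function model with conjugate ground control lines; all data and GA settings are invented.

```python
import random

rng = random.Random(3)
TRUE_M, TRUE_B = 2.0, -1.0
# 20 control observations of y = 2x - 1; every 4th one is a gross outlier.
controls = []
for i in range(20):
    x = rng.uniform(0, 10)
    noise = rng.gauss(0, 0.05) if i % 4 else rng.gauss(0, 2.0)
    controls.append((x, TRUE_M * x + TRUE_B + noise))
checks = [(x, TRUE_M * x + TRUE_B) for x in (1.5, 4.0, 7.5)]  # check points

def fit(points):
    """Least-squares line through the selected points."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points); sxy = sum(p[0] * p[1] for p in points)
    d = n * sxx - sx * sx
    if n < 2 or abs(d) < 1e-12:
        return None
    m = (n * sxy - sx * sy) / d
    return m, (sy - m * sx) / n

def rmse(chrom):
    """Fitness: RMSE on check points of the model fitted to the
    controls selected by the chromosome's ones."""
    sel = [c for c, bit in zip(controls, chrom) if bit]
    model = fit(sel)
    if model is None:
        return float("inf")
    m, b = model
    return (sum((m * x + b - y) ** 2 for x, y in checks) / len(checks)) ** 0.5

def evolve(pop_size=40, gens=60):
    pop = [[rng.randint(0, 1) for _ in controls] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=rmse)
        next_pop = pop[:4]                    # elitism
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:20], 2)  # truncation selection
            cut = rng.randrange(1, len(controls))
            child = p1[:cut] + p2[cut:]       # one-point crossover
            if rng.random() < 0.3:            # bit-flip mutation
                j = rng.randrange(len(child)); child[j] ^= 1
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=rmse)

best = evolve()
print(f"selected {sum(best)}/20 controls, check RMSE = {rmse(best):.4f}")
```

    The evolved chromosome tends to drop the outlier controls, which is the same mechanism by which OWDIS discards poorly matched conjugate lines.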

  1. Identifying complications of interventional procedures from UK routine healthcare databases: a systematic search for methods using clinical codes.

    Science.gov (United States)

    Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew

    2014-11-28

    Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. From 3688 papers identified from the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme which is the methodology recommended by NHS Classification Service, and 4) conducting manual clinical
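
    The first of the four identified methods, searching the index admission for specified clinical codes, is easy to illustrate. The records and the complication code set below are invented; they are not a validated ICD-10 complication list.

```python
# Method 1 from the review: scan only the *index admission* for
# specified complication codes. Codes and records are hypothetical.
COMPLICATION_CODES = {"T81.0", "T81.4", "J95.8"}

admissions = [
    {"patient": 1, "index": True,  "codes": {"I25.1", "T81.4"}},
    {"patient": 2, "index": True,  "codes": {"I25.1"}},
    {"patient": 2, "index": False, "codes": {"J95.8"}},  # later admission
]

def complications_index_admission(records):
    """Patients with a complication code on their index admission.

    Note the limitation this method carries: patient 2's complication,
    coded on a later admission, is missed, which is why the review's
    method 2 searches a *sequence* of admissions instead."""
    return {r["patient"] for r in records
            if r["index"] and r["codes"] & COMPLICATION_CODES}

print(complications_index_admission(admissions))
```
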

  2. Novel purification procedure and derivatization method of single-walled carbon nanotubes (SWNTs)

    International Nuclear Information System (INIS)

    Holzinger, Michael; Hirsch, Andreas; Bernier, Patrick; Duesberg, Georg S.; Burghard, Marko

    2000-01-01

    A new purification procedure is introduced that combines the advantages of both column chromatography and vacuum filtration. Potassium polyacrylate was used as the stationary phase. The method is based on the idea that the size of the cavities present in the polymer increases during a swelling process in distilled water. The cavities are big enough to entrap nanoparticles, but allow free movement of nanotubes and bundles. The procedure starts with an oxidation step to remove part of the catalyst and nanoparticles. In this step a chemical modification of the SWNTs occurs, namely the oxidation of cage carbon atoms to carboxylic groups as well as to hydroxyl and carbonyl groups. In contrast to Haddon, we use an alternative derivatization of the carboxylic acid groups by forming amides in water. AFM images of the reaction products clearly show that the SWNTs have also been oxidized on their sidewalls.

  3. Selecting a Risk-Based SQC Procedure for a HbA1c Total QC Plan.

    Science.gov (United States)

    Westgard, Sten A; Bayat, Hassan; Westgard, James O

    2017-09-01

    Recent US practice guidelines and laboratory regulations for quality control (QC) emphasize the development of QC plans and the application of risk management principles. The US Clinical Laboratory Improvement Amendments (CLIA) now includes an option to comply with QC regulations by developing an individualized QC plan (IQCP) based on a risk assessment of the total testing process. The Clinical and Laboratory Standards Institute (CLSI) has provided new practice guidelines for application of risk management to QC plans and statistical QC (SQC). We describe an alternative approach for developing a total QC plan (TQCP) that includes a risk-based SQC procedure. CLIA compliance is maintained by analyzing at least 2 levels of controls per day. A Sigma-Metric SQC Run Size nomogram provides a graphical tool to simplify the selection of risk-based SQC procedures. Current HbA1c method performance, as demonstrated by published method validation studies, is estimated to be 4-Sigma quality at best. Optimal SQC strategies require more QC than the CLIA minimum requirement of 2 levels per day. More complex control algorithms, more control measurements, and a bracketed mode of operation are needed to assure the intended quality of results. A total QC plan with a risk-based SQC procedure provides a simpler alternative to an individualized QC plan. A Sigma-Metric SQC Run Size nomogram provides a practical tool for selecting appropriate control rules, numbers of control measurements, and run size (or frequency of SQC). Applications demonstrate the need for continued improvement of analytical performance of HbA1c laboratory methods.
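    The selection described in the nomogram is driven by the sigma-metric, Sigma = (TEa − |bias|) / CV, which is standard in the Westgard literature. A minimal sketch of how the metric maps to SQC effort; the TEa/bias/CV figures and the strategy thresholds below are illustrative assumptions, not values from the article:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma-metric = (allowable total error - |bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def qc_strategy(sigma: float) -> str:
    """Coarse mapping from sigma quality to SQC effort (simplified rendering
    of run-size nomogram logic; thresholds are illustrative)."""
    if sigma >= 6:
        return "simple rules, few controls, long runs"
    if sigma >= 5:
        return "single rule, moderate run size"
    if sigma >= 4:
        return "multirule, more controls, shorter runs"
    return "maximum QC: multirule, many controls, bracketed runs"

# Illustrative HbA1c figures: TEa of 6%, 1% bias, 1.2% CV -> about 4-Sigma,
# consistent with the abstract's point that 4-Sigma methods need more than
# the CLIA minimum of 2 control levels per day.
sigma = sigma_metric(6.0, 1.0, 1.2)
print(round(sigma, 2), qc_strategy(sigma))
```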

  4. Control of Risks Through the Use of Procedures: A Method for Evaluating the Change in Risk

    Science.gov (United States)

    Praino, Gregory T.; Sharit, Joseph

    2010-01-01

    This paper considers how procedures can be used to control risks faced by an organization and proposes a means of recognizing whether a particular procedure reduces risk or contributes to the organization's exposure. The proposed method was developed out of the review of work documents and the governing procedures performed in the wake of the Columbia accident by NASA and the Space Shuttle prime contractor, United Space Alliance, LLC. A technique was needed to understand the rules, or procedural controls, in place at the time in the context of how important the role of each rule was. The proposed method assesses procedural risks, the residual risk associated with a hazard after a procedure's influence is accounted for, by considering each clause of a procedure as a unique procedural control that may be beneficial or harmful. For procedural risks with consequences severe enough to threaten the survival of the organization, the method measures the characteristics of each risk on a scale that is an alternative to the traditional consequence/likelihood couple. The dual benefits of the substitute scales are that they eliminate both the need to quantify a relationship between different consequence types and the need for the extensive history a probabilistic risk assessment would require. Control Value is used as an analog for the consequence, where the value of a rule is based on how well the control reduces the severity of the consequence when operating successfully. This value is composed of two parts: the inevitability of the consequence in the absence of the control, and the opportunity to intervene before the consequence is realized. High-value controls will be ones where there is minimal need for intervention but maximum opportunity to actively prevent the outcome. Failure Likelihood is used as the substitute for the conventional likelihood of the outcome. For procedural controls, a failure is considered to be any non-malicious violation of the rule, whether intended or

  5. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote-sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote-sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be displayed dynamically according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. By using this system in pilots' preflight preparation, the aircrew can obtain more vivid information about the flight destination approach area. This system can improve an aviator's self-confidence before carrying out the flight mission and accordingly improves flight safety. The system is also useful for validating visual flight procedure designs, and it thus aids flight procedure design.

  6. An assessment of methods for monitoring entrance surface dose in fluoroscopically guided interventional procedures

    International Nuclear Information System (INIS)

    Waite, J.C.; Fitzgerald, M.

    2001-01-01

    In the light of a growing awareness of the risks of inducing skin injuries as a consequence of fluoroscopically guided interventional procedures (FGIPs), this paper compares three methods of monitoring entrance surface dose (ESD). It also reports measurements of ESDs made during the period August 1998 to June 1999 on 137 patients undergoing cardiac, neurological and general FGIPs. Although the sample is small, the results reinforce the need for routine assessments to be made of ESDs in FGIPs. At present, the most reliable and accurate form of ESD measurement would seem to be arrays of TLDs. However, transducer based methods, although likely to be less accurate, have considerable advantages in relation to a continuous monitoring programme. It is also suggested that there may be the potential locally for threshold dose area product (DAP) values to be set for specific procedures. These could be used to provide early warning of the potential for skin injuries. (author)

  7. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology

    International Nuclear Information System (INIS)

    Han, Jubong; Lee, K.B.; Lee, Jong-Man; Park, Tae Soon; Oh, J.S.; Oh, Pil-Jei

    2016-01-01

    We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, called nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in basically the same way as the least-squares function is used in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained from our procedure with those from conventional methods. - Highlights: • A new method is proposed to incorporate Type B uncertainty into the least-squares method. • The method is constructed from the likelihood function and the PDFs of the Type B uncertainty. • A case study was performed to compare results from the new and the conventional method. • Fitted parameters are consistent but have larger uncertainties in the new method.
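    For Gaussian Type B PDFs, the extended-likelihood construction is equivalent to augmenting the least-squares system with a pseudo-observation that constrains the nuisance parameter, after which profiling it out inflates the parameter uncertainties. A minimal sketch of that idea; the data, uncertainties, and the choice of a single additive nuisance parameter are all invented for illustration and need not match the paper's case study:

```python
import numpy as np

# Illustrative data: y = a + b*x with statistical (Type A) noise u_A, plus a
# common additive correction (nuisance parameter) theta with Type B
# uncertainty u_B, described by the PDF theta ~ N(0, u_B^2).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])
u_A, u_B = 0.1, 0.3

# Extended chi-square: sum((y - a - b*x - theta)^2 / u_A^2) + theta^2 / u_B^2.
# Encode it as a weighted linear system in (a, b, theta): the last row is the
# pseudo-observation expressing the Type B PDF.
A = np.column_stack([np.ones_like(x), x, np.ones_like(x)]) / u_A
b_vec = y / u_A
A = np.vstack([A, [0.0, 0.0, 1.0 / u_B]])
b_vec = np.append(b_vec, 0.0)

params, *_ = np.linalg.lstsq(A, b_vec, rcond=None)
cov = np.linalg.inv(A.T @ A)          # parameter covariance
a_fit, b_fit, theta_fit = params
# Profiling out theta leaves a and b at their conventional values but with a
# larger uncertainty on the intercept (var(a) gains roughly u_B^2).
print(a_fit, b_fit, np.sqrt(np.diag(cov)))
```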

  8. The use of "mixing" procedure of mixed methods in health services research.

    Science.gov (United States)

    Zhang, Wanqing; Creswell, John

    2013-08-01

    Mixed methods research has emerged alongside qualitative and quantitative approaches as an important tool for health services researchers. Despite growing interest among health services researchers in using mixed methods designs, little has been done to identify the procedural aspects of doing so. Our objective was to describe how mixed methods researchers mix the qualitative and quantitative aspects of their studies in health services research. We searched PubMed for articles using mixed methods in health services research published between January 1, 2006 and December 30, 2010. We identified and reviewed 30 published health services research articles on studies in which mixed methods had been used. We selected 3 articles as illustrations to help health services researchers conceptualize the type of mixing procedures that they were using. Three main "mixing" procedures were applied within these studies: (1) the researchers analyzed the 2 types of data at the same time but separately and integrated the results during interpretation; (2) the researchers connected the qualitative and quantitative portions in phases in such a way that 1 approach was built upon the findings of the other; and (3) the researchers mixed the 2 data types by embedding the analysis of 1 data type within the other. "Mixing" in mixed methods is more than just the combination of 2 independent components of quantitative and qualitative data. The use of "mixing" procedures in health services research involves the integration, connection, and embedding of these 2 data components.

  9. Developing site-specific interactive environmental management tools: An exciting method of communicating training, procedures, and other information

    Energy Technology Data Exchange (ETDEWEB)

    Jaeckels, J.M.

    1999-07-01

    Environmental managers are faced with numerous programs that must be communicated throughout their organizations. Among these are regulatory training programs, internal environmental policy, regulatory guidance/procedures and internal guidance/procedures. Traditional methods of delivering this type of information are typically confined to written materials and classroom training. Environmental managers face many challenges with these traditional approaches, including: determining whether recipients of written plans or procedures are reading and comprehending the information; scheduling training sessions to reach all affected people across multiple schedules/shifts; and maintaining adequate training records. In addition, current trends toward performance-based or competency-based training require a more consistent method of measuring and documenting performance. The use of interactive computer applications to present training or procedural information is a new and exciting tool for delivering environmental information to employees. Site-specific pictures, text, sound, and even video can be combined with multimedia software to create informative and highly interactive applications. Some of the applications that can be produced include integrated environmental training, educational pieces, and interactive environmental procedures. They can be executed from a CD-ROM, hard drive, network, or company Intranet. Collectively, the authors refer to these as interactive environmental management tools (IEMTs). This paper focuses on site-specific, interactive training as an example of an IEMT. Interactive training not only delivers a highly effective message, but can also be designed to focus on site-specific environmental issues that are unique to each company. Interactive training also lends itself well to automated record-keeping functions and to reaching all affected employees.

  10. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES, PART TWO: APPLICABILITY OF CURRENT METHODS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; David I. Gertman

    2012-10-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no U.S. nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  11. [Costing nuclear medicine diagnostic procedures].

    Science.gov (United States)

    Markou, Pavlos

    2005-01-01

    To the Editor: Referring to a recent special report about the cost analysis of twenty-nine nuclear medicine procedures, I would like to clarify some basic aspects of determining the costs of nuclear medicine procedures with various costing methodologies. The Activity Based Costing (ABC) method is a new approach to costing imaging services that can provide the most accurate cost data, but it is difficult to apply to nuclear medicine diagnostic procedures. That is because ABC requires determining and analyzing all direct and indirect costs of each procedure, according to all its activities. Traditional costing methods, like those estimating income and expenses per procedure or fixed and variable costs per procedure, which are widely used in break-even-point analysis, and the method of ratio of costs to charges per procedure, may easily be performed in nuclear medicine departments to evaluate the variability of and differences between costs and reimbursement (charges).

  12. Methods and procedures for shielding analyses for the SNS

    International Nuclear Information System (INIS)

    Popova, I.; Ferguson, F.; Gallmeier, F.X.; Iverson, E.; Lu, Wei

    2011-01-01

    In order to provide radiologically safe Spallation Neutron Source operation, shielding analyses are performed according to Oak Ridge National Laboratory internal regulations and to comply with the Code of Federal Regulations. An overview of the ongoing shielding work for the accelerator facility and neutron beam lines is presented, along with the methods used to perform the analyses and the associated procedures and regulations. (author)

  13. Using Learner-Centered, Simulation-Based Training to Improve Medical Students’ Procedural Skills

    Directory of Open Access Journals (Sweden)

    Serkan Toy

    2017-03-01

    Purpose: To evaluate the effectiveness of a learner-centered, simulation-based training developed to help medical students improve their procedural skills in intubation, arterial line placement, lumbar puncture, and central line insertion. Method: The study participants were second- and third-year medical students. Anesthesiology residents provided the training and evaluated students' procedural skills. Two residents were present at each station to train the medical students, who rotated through all 4 stations. Pre/post-training assessment of confidence, knowledge, and procedural skills was done using a survey, a multiple-choice test, and procedural checklists, respectively. Results: In total, 24 students were trained in six 4-hour sessions. Students reported feeling significantly more confident, after training, in performing all 4 procedures on a real patient (P < .001). Paired-samples t tests indicated statistically significant improvement in knowledge scores for intubation, t(23) = −2.92, P < .001, and arterial line placement, t(23) = −2.75, P < .001. Procedural performance scores for intubation (t(23) = −17.29, P < .001), arterial line placement (t(23) = −19.75, P < .001), lumbar puncture (t(23) = −16.27, P < .001), and central line placement (t(23) = −17.25, P < .001) showed significant improvement. Intraclass correlation coefficients indicated high reliability in checklist scores for all procedures. Conclusions: The simulation sessions allowed each medical student to receive individual attention from 2 residents for each procedure. Students' written comments indicated that this training modality was well received. Results showed that medical students improved their self-confidence, knowledge, and skills in the aforementioned procedures.
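    The paired-samples t statistic reported above is t = d̄ / (s_d / √n), computed over each student's pre/post difference. A small self-contained sketch with hypothetical scores (not the study's data):

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d is the per-subject post-minus-pre difference."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((di - mean_d) ** 2 for di in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical checklist scores for 5 students (illustrative only)
pre  = [40, 55, 38, 60, 47]
post = [70, 85, 72, 88, 80]
print(round(paired_t(pre, post), 2))
```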

  14. An angularly refineable phase space finite element method with approximate sweeping procedure

    International Nuclear Information System (INIS)

    Kophazi, J.; Lathouwers, D.

    2013-01-01

    An angularly refineable phase space finite element method is proposed to solve the neutron transport equation. The method combines the advantages of two recently published schemes. The angular domain is discretized into small patches and patch-wise discontinuous angular basis functions are restricted to these patches, i.e. there is no overlap between basis functions corresponding to different patches. This approach yields block diagonal Jacobians with small block size and retains the possibility for Sn-like approximate sweeping of the spatially discontinuous elements in order to provide efficient preconditioners for the solution procedure. On the other hand, the preservation of the full FEM framework (as opposed to collocation into a high-order Sn scheme) retains the possibility of the Galerkin interpolated connection between phase space elements at arbitrary levels of discretization. Since the basis vectors are not orthonormal, a generalization of the Riemann procedure is introduced to separate the incoming and outgoing contributions in case of unstructured meshes. However, due to the properties of the angular discretization, the Riemann procedure can be avoided at a large fraction of the faces and this fraction rapidly increases as the level of refinement increases, contributing to the computational efficiency. In this paper the properties of the discretization scheme are studied with uniform refinement using an iterative solver based on the S2 sweep order of the spatial elements. The fourth order convergence of the scalar flux is shown as anticipated from earlier schemes and the rapidly decreasing fraction of required Riemann faces is illustrated. (authors)

  15. A single-photon ecat reconstruction procedure based on a PSF model

    International Nuclear Information System (INIS)

    Ying-Lie, O.

    1984-01-01

    Emission Computed Axial Tomography (ECAT) has been applied in nuclear medicine for the past few years. Owing to attenuation and scatter along the ray path, adequate correction methods are required. In this thesis, a correction method for attenuation, detector response and Compton scatter is proposed. The method developed is based on a PSF model. The parameters of the models were derived by fitting experimental and simulation data. Because of its flexibility, a Monte Carlo simulation method was employed. Using the PSF models, it was found that the ECAT problem can be described by the added modified equation. Application of the reconstruction procedure to simulation data yields satisfactory results. The algorithm tends to amplify noise and distortion in the data, however, so the applicability of the method to patient studies remains to be seen. (Auth.)

  16. A method for risk informing procedures at operating nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, P. F.; Martin del Campo, C., E-mail: pnelson_007@yahoo.com [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Paseo Cuauhnahuac No. 8532, Col. Progreso, 62550 Jiutepec, Morelos (Mexico)

    2012-10-15

    The technical approach presented establishes a framework intended to provide the necessary elements for a deployable human performance monitoring program that incorporates insights from plant-specific probabilistic risk assessments and human reliability analysis, as well as the development of plant-specific human failure data. A human performance monitoring program of this structure would be used to provide the ability to risk inform procedures (e.g., operations or maintenance): to determine the operational risk significance of procedural performance (i.e., precautions, prerequisites, procedure steps), the likelihood of consequential human error during the performance of the procedure, and the identification of procedure-specific barriers to reduce or eliminate consequential human errors. The program would provide the means to assess procedures prior to execution and the means to record and trend human failure events, leading to a plant-specific human failure database for human activities characterized as pre-initiators. The technical methods and data processing for each of these areas are developed and presented, as well as an example application of an operational procedure error leading to a plant-level event (i.e., plant trip). (Author)

  17. Reliability assessment of a manual-based procedure towards learning curve modeling and fmea analysis

    Directory of Open Access Journals (Sweden)

    Gustavo Rech

    2013-03-01

    Separation procedures in drug distribution centers (DCs) are manual activities prone to failures such as shipping exchanged, expired or broken drugs to the customer. Two interventions seem promising for improving the reliability of the separation procedure: (i) selection and allocation of appropriate operators to the procedure, and (ii) analysis of potential failure modes incurred by the selected operators. This article integrates Learning Curves (LC) and FMEA (Failure Mode and Effect Analysis) with the aim of reducing the occurrence of failures in the manual separation of a drug DC. LC parameters enable generating an index that identifies the operators recommended to perform the procedures. FMEA is then applied to the separation procedure carried out by the selected operators in order to identify failure modes. The method also deploys the traditional FMEA severity index into two sub-indexes, related to financial issues and damage to the company's image, in order to characterize failure severity. When applied to a drug DC, the proposed method significantly reduced the frequency and severity of failures in the separation procedure.
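    The two ingredients the abstract combines can be sketched briefly: a learning-curve model to rank operators, and an FMEA risk priority number whose severity is deployed into financial and image sub-indexes. Everything below is illustrative: the use of Wright's curve, the max-of-sub-indexes aggregation, and all ratings are assumptions, not the paper's actual parameters:

```python
import math

# (i) Learning curve: predicted separation time per order after x repetitions,
# using Wright's model t(x) = t1 * x**b with b = log2(learning rate).
def predicted_time(t1, learning_rate, x):
    b = math.log2(learning_rate)
    return t1 * x ** b

# Rank operators by predicted steady-state speed (hypothetical data:
# initial time in seconds and learning rate per operator).
operators = {"A": (120, 0.85), "B": (100, 0.95)}
steady = {op: predicted_time(t1, r, 100) for op, (t1, r) in operators.items()}

# (ii) FMEA with severity split into financial and image sub-indexes; here
# overall severity is the larger of the two (one simple aggregation choice).
def rpn(occurrence, detection, sev_financial, sev_image):
    return occurrence * detection * max(sev_financial, sev_image)

failure_modes = {
    "expired drug shipped": rpn(3, 4, 7, 9),
    "broken drug shipped":  rpn(5, 2, 6, 4),
    "wrong drug shipped":   rpn(2, 5, 9, 10),
}
worst = max(failure_modes, key=failure_modes.get)
print(steady, worst)
```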

  18. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximum degree. (author). 1 ref

  19. Evaluation of Computer-Based Procedure System Prototype

    Energy Technology Data Exchange (ETDEWEB)

    Johanna Oxstrand; Katya Le Blanc; Seth Hays

    2012-09-01

    This research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which is a research and development (R&D) program sponsored by Department of Energy (DOE), performed in close collaboration with industry R&D programs, to provide the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. The LWRS program serves to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. The introduction of advanced technology in existing nuclear power plants may help to manage the effects of aging systems, structures, and components. In addition, the incorporation of advanced technology in the existing LWR fleet may entice the future workforce, who will be familiar with advanced technology, to work for these utilities rather than more newly built nuclear power plants. Advantages are being sought by developing and deploying technologies that will increase safety and efficiency. One significant opportunity for existing plants to increase efficiency is to phase out the paper-based procedures (PBPs) currently used at most nuclear power plants and replace them, where feasible, with computer-based procedures (CBPs). PBPs have ensured safe operation of plants for decades, but limitations in paper-based systems do not allow them to reach the full potential for procedures to prevent human errors. The environment in a nuclear power plant is constantly changing depending on current plant status and operating mode. PBPs, which are static by nature, are being applied to a constantly changing context. This constraint often results in PBPs that are written in a manner that is intended to cover many potential operating scenarios. Hence, the procedure layout forces the operator to search through a large amount of irrelevant information to locate the pieces of information

  20. Investigation of a reinforcement-based toilet training procedure for children with autism.

    Science.gov (United States)

    Cicero, Frank R; Pfadt, Al

    2002-01-01

    Independent toileting is an important developmental skill which individuals with developmental disabilities often find a challenge to master. Effective toilet training interventions have been designed which rely on a combination of basic operant principles of positive reinforcement and punishment. In the present study, the effectiveness of a reinforcement-based toilet training intervention was investigated with three children with a diagnosis of autism. Procedures included a combination of positive reinforcement, graduated guidance, scheduled practice trials and forward prompting. All procedures were implemented in response to urination accidents. All three participants reduced urination accidents to zero and learned to spontaneously request use of the bathroom within 7-11 days of training. Gains were maintained over 6-month and 1-year follow-ups. Findings suggest that the proposed procedure is an effective and rapid method of toilet training, which can be implemented within a structured school setting with generalization to the home environment.

  1. Procedures for identifying evidence-based psychological treatments for older adults.

    Science.gov (United States)

    Yon, Adriana; Scogin, Forrest

    2007-03-01

    The authors describe the methods used to identify evidence-based psychological treatments for older adults in this contribution to the special section. Coding teams were assembled to review the literature on several problems relevant to mental health and aging. These teams used the manual developed by the Committee on Science and Practice of the Society for Clinical Psychology (Division 12) of the American Psychological Association that provided definitions of key constructs used in coding. The authors provide an overview of the process followed by the review teams and of some of the issues that emerged to illustrate the steps involved in the coding procedure. Identifying evidence-based treatments is a fundamental aspect of promoting evidence-based practice with older adults; such practice is advocated by most health care disciplines, including psychology. ((c) 2007 APA, all rights reserved).

  2. Guideline for Bayesian Net based Software Fault Estimation Method for Reactor Protection System

    International Nuclear Information System (INIS)

    Eom, Heung Seop; Park, Gee Yong; Jang, Seung Cheol

    2011-01-01

    The purpose of this paper is to provide a preliminary guideline for the estimation of software faults in safety-critical software, for example, a reactor protection system's software. As the fault estimation method is based on a Bayesian Net, which intensively uses subjective probability and informal data, it is necessary to define a formal procedure for the method to minimize the variability of the results. The guideline describes the assumptions, limitations and uncertainties, and the product of the fault estimation method. The procedure for conducting a software fault estimation is then outlined, highlighting the major tasks involved. The contents of the guideline are based on our own experience and a review of research guidelines developed for a PSA.

  3. Price adjustment for traditional Chinese medicine procedures: Based on a standardized value parity model.

    Science.gov (United States)

    Wang, Haiyin; Jin, Chunlin; Jiang, Qingwu

    2017-11-20

    Traditional Chinese medicine (TCM) is an important part of China's medical system. Due to the prolonged low price of TCM procedures and the lack of an effective mechanism for dynamic price adjustment, the development of TCM has markedly lagged behind Western medicine. The World Health Organization (WHO) has emphasized the need to enhance the development of alternative and traditional medicine when creating national health care systems. The establishment of scientific and appropriate mechanisms to adjust the price of medical procedures in TCM is crucial to promoting the development of TCM. This study examined incorporating value indicators and data on basic manpower expended, time spent, technical difficulty, and degree of risk into the latest standards for the price of medical procedures in China, and it offers a price adjustment model with the relative price ratio as a key index. This study examined 144 TCM procedures and found that the prices of TCM procedures were mainly based on the value of the medical care provided; on average, medical care provided accounted for 89% of the price. Current price levels were generally low, and the current price accounted for 56% of the standardized value of a procedure, on average. Current price levels accounted for a markedly lower share of the standardized value of acupuncture, moxibustion, special treatment with TCM, and comprehensive TCM procedures. This study selected a total of 79 procedures and adjusted them by priority. The relationship between the price of TCM procedures and the suggested price was significantly optimized. Price adjustment based on a standardized value parity model is a scientific and suitable method that can serve as a reference for other provinces and municipalities in China and for other countries and regions whose medical care is mainly fee-for-service (FFS).
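    The parity idea can be sketched as: score each procedure's standardized value from the four inputs the study names, take the relative price ratio (current price over standardized value), and prioritize the procedures whose ratio is lowest. The weights and figures below are hypothetical placeholders, not the study's coefficients:

```python
def standardized_value(manpower, time_spent, difficulty, risk,
                       weights=(0.4, 0.3, 0.2, 0.1)):
    """Weighted score of the four value inputs (weights are hypothetical)."""
    parts = (manpower, time_spent, difficulty, risk)
    return sum(w * p for w, p in zip(weights, parts))

# Hypothetical procedures: value inputs plus current price
procedures = {
    "acupuncture": {"inputs": (60, 30, 7, 4), "price": 15.0},
    "moxibustion": {"inputs": (50, 25, 5, 3), "price": 20.0},
}

# Relative price ratio = current price / standardized value; a lower ratio
# means the price falls further below the value, so it is adjusted first.
ratios = {name: p["price"] / standardized_value(*p["inputs"])
          for name, p in procedures.items()}
priority = sorted(ratios, key=ratios.get)
print(ratios, priority)
```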

  4. Drawing-Based Procedural Modeling of Chinese Architectures.

    Science.gov (United States)

    Fei Hou; Yue Qi; Hong Qin

    2012-01-01

    This paper presents a novel modeling framework to build 3D models of Chinese architectures from elevation drawings. Our algorithm integrates the capability of automatic drawing recognition with powerful procedural modeling to extract production rules from elevation drawings. First, different from previous symbol-based floor plan recognition, small horizontal repetitive regions of the elevation drawing are clustered in a bottom-up manner, based on the novel concept of repetitive pattern trees, to form architectural components with maximum repetition, which collectively serve as building blocks for 3D model generation. Second, to discover the global architectural structure and its components' interdependencies, the components are structured into a shape tree in a top-down subdivision manner and recognized hierarchically at each level of the shape tree based on Markov Random Fields (MRFs). Third, shape grammar rules can be derived to construct a 3D semantic model and its possible variations with the help of a 3D component repository. The salient contribution lies in the novel integration of procedural modeling with elevation drawings, with a unique application to Chinese architectures.
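
The bottom-up clustering of horizontal repetitive regions can be illustrated, in a heavily simplified form, as a search for tandem repeats over a row of facade "column signatures". The token names and the brute-force search below are illustrative only, not the paper's repetitive-pattern-tree algorithm.

```python
def tandem_repeats(cols, min_count=2):
    """Return (start, period, count) for maximal consecutive block repeats."""
    n = len(cols)
    found = []
    for period in range(1, n // 2 + 1):
        start = 0
        while start + 2 * period <= n:
            count = 1
            # Extend the repeat as long as the next block matches the first.
            while (start + (count + 1) * period <= n and
                   cols[start + count * period:start + (count + 1) * period]
                   == cols[start:start + period]):
                count += 1
            if count >= min_count:
                found.append((start, period, count))
                start += count * period
            else:
                start += 1
    return found

facade = ["wall", "window", "wall", "window", "wall", "window", "door"]
# The bay ("wall", "window") repeats three times starting at column 0,
# and would become one architectural component in a bottom-up grouping.
repeats = tandem_repeats(facade)
```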

  5. Using mixed methods effectively in prevention science: designs, procedures, and examples.

    Science.gov (United States)

    Zhang, Wanqing; Watanabe-Galloway, Shinobu

    2014-10-01

    There is growing interest in using a combination of quantitative and qualitative methods to generate evidence about the effectiveness of health prevention, services, and intervention programs. With the emerging importance of mixed methods research across the social and health sciences, there has been an increased recognition of the value of using mixed methods for addressing research questions in different disciplines. We illustrate the mixed methods approach in prevention research, showing design procedures used in several published research articles. In this paper, we focused on two commonly used mixed methods designs: concurrent and sequential mixed methods designs. We discuss the types of mixed methods designs, the reasons for, and advantages of using a particular type of design, and the procedures of qualitative and quantitative data collection and integration. The studies reviewed in this paper show that the essence of qualitative research is to explore complex dynamic phenomena in prevention science, and the advantage of using mixed methods is that quantitative data can yield generalizable results and qualitative data can provide extensive insights. However, the emphasis of methodological rigor in a mixed methods application also requires considerable expertise in both qualitative and quantitative methods. Besides the necessary skills and effective interdisciplinary collaboration, this combined approach also requires an open-mindedness and reflection from the involved researchers.

  6. The use of Brainsuite iCT for frame-based stereotactic procedures

    DEFF Research Database (Denmark)

    Skjøth-Rasmussen, Jane; Jespersen, Bo; Brennum, Jannick

    2015-01-01

    BACKGROUND: Frame-based stereotactic procedures are the gold standard because of their superior stereotactic accuracy. The procedure formerly involved multiple steps and was especially cumbersome and hazardous in intubated patients. A single-step procedure using intraoperative CT was created...

  7. Look-ahead procedures for Lanczos-type product methods based on three-term recurrences

    Energy Technology Data Exchange (ETDEWEB)

    Gutknecht, M.H.; Ressel, K.J. [Swiss Center for Scientific Computing, Zuerich (Switzerland)

    1996-12-31

    Lanczos-type product methods for the solution of large sparse non-Hermitian linear systems either square the Lanczos process or combine it with a local minimization of the residual. They inherit from the underlying Lanczos process the danger of breakdown. For various Lanczos-type product methods that are based on the Lanczos three-term recurrence, look-ahead versions are presented, which avoid such breakdowns or near breakdowns with a small computational overhead. Different look-ahead strategies are discussed and their efficiency is demonstrated in several numerical examples.
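
For readers unfamiliar with the underlying recurrence, the sketch below shows the basic symmetric Lanczos three-term recurrence; the paper treats the non-Hermitian case and adds look-ahead, neither of which is shown here. The matrix and starting vector are toy values, and the `beta` check illustrates the exact-breakdown case the look-ahead versions guard against.

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def lanczos(A, v0, steps):
    """Three-term Lanczos recurrence: returns tridiagonal (alphas, betas)."""
    norm = math.sqrt(sum(x * x for x in v0))
    v = [x / norm for x in v0]
    v_prev = [0.0] * len(v0)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(steps):
        w = matvec(A, v)
        alpha = sum(wi * vi for wi, vi in zip(w, v))
        # w := A v - alpha v - beta v_prev  (the three-term recurrence)
        w = [wi - alpha * vi - beta * pi for wi, vi, pi in zip(w, v, v_prev)]
        beta = math.sqrt(sum(x * x for x in w))
        alphas.append(alpha)
        if beta < 1e-12:          # breakdown: Krylov space exhausted
            break
        betas.append(beta)
        v_prev, v = v, [x / beta for x in w]
    return alphas, betas

A = [[2.0, 1.0], [1.0, 2.0]]
alphas, betas = lanczos(A, [1.0, 0.0], steps=2)
```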

  8. A Simple Method for Identifying the Acromioclavicular Joint During Arthroscopic Procedures

    OpenAIRE

    Javed, Saqib; Heasley, Richard; Ravenscroft, Matt

    2013-01-01

    Arthroscopic acromioclavicular joint excision is performed via an anterior portal and is technically demanding. We present a simple method for identifying the acromioclavicular joint during arthroscopic procedures.

  9. Network-Based Method for Identifying Co-Regeneration Genes in Bone, Dentin, Nerve and Vessel Tissues.

    Science.gov (United States)

    Chen, Lei; Pan, Hongying; Zhang, Yu-Hang; Feng, Kaiyan; Kong, XiangYin; Huang, Tao; Cai, Yu-Dong

    2017-10-02

    Bone and dental diseases are serious public health problems. Most current clinical treatments for these diseases can produce side effects. Regeneration is a promising therapy for bone and dental diseases, yielding natural tissue recovery with few side effects. Because soft tissues inside the bone and dentin are densely populated with nerves and vessels, the study of bone and dentin regeneration should also consider the co-regeneration of nerves and vessels. In this study, a network-based method to identify co-regeneration genes for bone, dentin, nerve and vessel was constructed based on an extensive network of protein-protein interactions. Three procedures were applied in the network-based method. The first procedure, searching, sought the shortest paths connecting regeneration genes of one tissue type with regeneration genes of other tissues, thereby extracting possible co-regeneration genes. The second procedure, testing, employed a permutation test to evaluate whether possible genes were false discoveries; these genes were excluded by the testing procedure. The last procedure, screening, employed two rules, the betweenness ratio rule and interaction score rule, to select the most essential genes. A total of seventeen genes were inferred by the method, which were deemed to contribute to co-regeneration of at least two tissues. All these seventeen genes were extensively discussed to validate the utility of the method.
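
The searching procedure reduces, at its core, to shortest paths in a protein-protein interaction network. A minimal breadth-first sketch over a made-up graph (the gene names and edges are illustrative, not from the study's PPI network):

```python
from collections import deque

def shortest_path(graph, src, dst):
    """Breadth-first shortest path; returns the node list or None."""
    prev, seen = {}, {src}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [node]
            while node != src:      # walk predecessors back to the source
                node = prev[node]
                path.append(node)
            return path[::-1]
        for nb in graph.get(node, ()):
            if nb not in seen:
                seen.add(nb)
                prev[nb] = node
                queue.append(nb)
    return None

ppi = {
    "BMP2": ["SMAD1"], "SMAD1": ["BMP2", "RUNX2"],
    "RUNX2": ["SMAD1", "VEGFA"], "VEGFA": ["RUNX2"],
}
# Intermediate nodes on paths between tissue-specific regeneration genes
# are the candidate co-regeneration genes the testing step then filters.
path = shortest_path(ppi, "BMP2", "VEGFA")
```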

  10. Method and procedure of fatigue analysis for nuclear equipment

    International Nuclear Information System (INIS)

    Wen Jing; Fang Yonggang; Lu Yan; Zhang Yue; Sun Zaozhan; Zou Mingzhong

    2014-01-01

    As an example, the fatigue analysis for the upper head of the pressurizer in one NPP was carried out using ANSYS, a finite element analysis software package. According to the RCC-M code, only two kinds of typical transients, temperature and pressure, were considered in the fatigue analysis. Meanwhile, the influence of earthquake loading was taken into account. The method and procedure of fatigue analysis for nuclear safety equipment are described in detail. This paper provides a reference for fatigue analysis and assessment of nuclear safety grade equipment and piping. (authors)

  11. A library based fitting method for visual reflectance spectroscopy of human skin

    International Nuclear Information System (INIS)

    Verkruysse, Wim; Zhang Rong; Choi, Bernard; Lucassen, Gerald; Svaasand, Lars O; Nelson, J Stuart

    2005-01-01

    The diffuse reflectance spectrum of human skin in the visible region (400-800 nm) contains information on the concentrations of chromophores such as melanin and haemoglobin. This information may be extracted by fitting the reflectance spectrum with an optical diffusion based analytical expression applied to a layered skin model. With the use of the analytical expression, it is assumed that light transport is dominated by scattering. For port wine stain (PWS) and highly pigmented human skin, however, this assumption may not be valid resulting in a potentially large error in visual reflectance spectroscopy (VRS). Monte Carlo based techniques can overcome this problem but are currently too computationally intensive to be combined with previously used fitting procedures. The fitting procedure presented herein is based on a library search which enables the use of accurate reflectance spectra based on forward Monte Carlo simulations or diffusion theory. This allows for accurate VRS to characterize chromophore concentrations in PWS and highly pigmented human skin. The method is demonstrated using both simulated and measured reflectance spectra. An additional advantage of the method is that the fitting procedure is very fast.
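
The library search itself is simple to sketch: precompute spectra for a grid of chromophore concentrations (in the paper, via forward Monte Carlo or diffusion theory; faked here with toy numbers), then pick the entry closest to the measured spectrum in a least-squares sense. All values below are invented for illustration.

```python
def sse(a, b):
    """Sum of squared differences between two sampled spectra."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def library_fit(measured, library):
    """Return the parameter set whose precomputed spectrum fits best."""
    return min(library, key=lambda params: sse(measured, library[params]))

# Keys: (melanin fraction, blood fraction); values: toy 4-point spectra.
library = {
    (0.01, 0.02): [0.60, 0.55, 0.50, 0.48],
    (0.05, 0.02): [0.40, 0.38, 0.35, 0.33],
    (0.01, 0.10): [0.55, 0.30, 0.45, 0.44],
}
best = library_fit([0.56, 0.31, 0.44, 0.45], library)
```

Because the expensive simulations happen once, offline, the per-spectrum fit is just a lookup, which is why the procedure is fast.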

  12. A library based fitting method for visual reflectance spectroscopy of human skin

    Energy Technology Data Exchange (ETDEWEB)

    Verkruysse, Wim [Beckman Laser Institute, University of California, Irvine, CA 92612 (United States); Zhang Rong [Beckman Laser Institute, University of California, Irvine, CA 92612 (United States); Choi, Bernard [Beckman Laser Institute, University of California, Irvine, CA 92612 (United States); Lucassen, Gerald [Personal Care Institute, Philips Research, Prof Holstlaan 4, Eindhoven (Netherlands); Svaasand, Lars O [Department of Physical Electronics Norwegian University of Science and Technology, N-7491 Trondheim (Norway); Nelson, J Stuart [Beckman Laser Institute, University of California, Irvine, CA 92612 (United States)

    2005-01-07

    The diffuse reflectance spectrum of human skin in the visible region (400-800 nm) contains information on the concentrations of chromophores such as melanin and haemoglobin. This information may be extracted by fitting the reflectance spectrum with an optical diffusion based analytical expression applied to a layered skin model. With the use of the analytical expression, it is assumed that light transport is dominated by scattering. For port wine stain (PWS) and highly pigmented human skin, however, this assumption may not be valid resulting in a potentially large error in visual reflectance spectroscopy (VRS). Monte Carlo based techniques can overcome this problem but are currently too computationally intensive to be combined with previously used fitting procedures. The fitting procedure presented herein is based on a library search which enables the use of accurate reflectance spectra based on forward Monte Carlo simulations or diffusion theory. This allows for accurate VRS to characterize chromophore concentrations in PWS and highly pigmented human skin. The method is demonstrated using both simulated and measured reflectance spectra. An additional advantage of the method is that the fitting procedure is very fast.

  13. A library based fitting method for visual reflectance spectroscopy of human skin

    Science.gov (United States)

    Verkruysse, Wim; Zhang, Rong; Choi, Bernard; Lucassen, Gerald; Svaasand, Lars O.; Nelson, J. Stuart

    2005-01-01

    The diffuse reflectance spectrum of human skin in the visible region (400-800 nm) contains information on the concentrations of chromophores such as melanin and haemoglobin. This information may be extracted by fitting the reflectance spectrum with an optical diffusion based analytical expression applied to a layered skin model. With the use of the analytical expression, it is assumed that light transport is dominated by scattering. For port wine stain (PWS) and highly pigmented human skin, however, this assumption may not be valid resulting in a potentially large error in visual reflectance spectroscopy (VRS). Monte Carlo based techniques can overcome this problem but are currently too computationally intensive to be combined with previously used fitting procedures. The fitting procedure presented herein is based on a library search which enables the use of accurate reflectance spectra based on forward Monte Carlo simulations or diffusion theory. This allows for accurate VRS to characterize chromophore concentrations in PWS and highly pigmented human skin. The method is demonstrated using both simulated and measured reflectance spectra. An additional advantage of the method is that the fitting procedure is very fast.

  14. Three-dimensional Cross-Platform Planning for Complex Spinal Procedures: A New Method Adaptive to Different Navigation Systems.

    Science.gov (United States)

    Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven R; Conrad, Jens; Nimer Amr, Amr; Gawehn, Joachim; Giese, Alf

    2017-08-01

    A feasibility study. To develop a method based on the DICOM standard which transfers complex 3-dimensional (3D) trajectories and objects from external planning software to any navigation system for planning and intraoperative guidance of complex spinal procedures. There have been many reports about navigation systems with embedded planning solutions, but only a few on how to transfer planning data generated in external software. Patients' computerized tomography and/or magnetic resonance volume data sets of the affected spinal segments were imported into Amira software, reconstructed into 3D images, and fused with magnetic resonance data for soft-tissue visualization, resulting in a virtual patient model. Objects needed for surgical plans or procedures, such as trajectories, implants, or surgical instruments, were either digitally constructed or computerized-tomography scanned and virtually positioned within the 3D model as required. As the crucial step of this method, these objects were fused with the patient's original diagnostic image data, resulting in a single DICOM sequence containing all preplanned information necessary for the operation. By this step it was possible to import complex surgical plans into any navigation system. We applied this method not only to intraoperatively adjustable implants and objects under experimental settings, but also planned and successfully performed surgical procedures, such as the percutaneous lateral approach to the lumbar spine following preplanned trajectories and a thoracic tumor resection including intervertebral body replacement, using an optical navigation system. To demonstrate the versatility and compatibility of the method with an entirely different navigation system, virtually preplanned lumbar transpedicular screw placement was performed with a robotic guidance system. The presented method not only allows virtual planning of complex surgical procedures, but to export objects and surgical plans to any navigation or

  15. A new modification of summary-based analysis method for large software system testing

    Directory of Open Access Journals (Sweden)

    A. V. Sidorin

    2015-01-01

    As automated testing tools become common practice, thorough computer-aided testing of large software systems, including system inter-component interfaces, is required. To achieve good coverage, one must overcome the scalability problems of different methods of analysis, which arise from the impossibility of analyzing all execution paths. The objective of this research is to build a method for inter-procedural analysis whose efficiency enables us to analyse large software systems (such as the Android OS codebase) as a whole in a reasonable time (no more than 4 hours). This article reviews existing methods of software analysis for detecting potential defects. It focuses on the symbolic execution method, since it is widely used both in static analysis of source code and in hybrid analysis of object files and intermediate representation (concolic testing). The method of symbolic execution involves separating the set of input data values into equivalence classes while choosing an execution path. The paper also considers the advantages of this method and its shortcomings. One of the main scalability problems is related to inter-procedural analysis: analysis time grows rapidly if an inlining method is used. This work therefore proposes a summary-based analysis method to solve the scalability problems. Clang Static Analyzer, an open-source static analyzer (a part of the LLVM project), has been chosen as a target system, as it allows us to compare the performance of inlining and summary-based inter-procedural analysis. A mathematical model for preliminary estimations is described in order to identify possible factors of performance improvement.
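
The cost argument for summaries over inlining can be illustrated with a toy memoized traversal of a call graph: each callee is analyzed once and its cached summary is reused at every call site. The call graph, cost numbers, and "summary" semantics below are invented for illustration.

```python
from functools import lru_cache

# Hypothetical call graph: function -> list of callees.
calls = {"main": ["parse", "render"], "parse": ["lex"],
         "render": ["lex"], "lex": []}
# Local cost of analyzing each function body in isolation.
local_cost = {"main": 5, "parse": 3, "render": 4, "lex": 10}

analyzed = []

@lru_cache(maxsize=None)
def summary(fn):
    """Total analysis cost of fn, reusing memoized callee summaries."""
    analyzed.append(fn)               # each body is analyzed exactly once
    return local_cost[fn] + sum(summary(c) for c in calls[fn])

total = summary("main")
# With inlining, "lex" would be re-analyzed at both call sites;
# with summaries it is analyzed once and its result reused.
```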

  16. Implementing Computer-Based Procedures: Thinking Outside the Paper Margins

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna; Bly, Aaron

    2017-06-01

    In the past year there has been increased interest from the nuclear industry in adopting the use of electronic work packages and computer-based procedures (CBPs) in the field. The goal is to incorporate the use of technology in order to meet the Nuclear Promise requirements of reducing costs, improving efficiency, and decreasing the human error rate of plant operations. Researchers, together with the nuclear industry, have been investigating the benefits an electronic work package system, and specifically CBPs, would have over current paper-based procedure practices. There are several classifications of CBPs, ranging from a straight copy of the paper-based procedure in PDF format to a more intelligent dynamic CBP. A CBP system offers a vast variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping and correct component verification), and dynamic step presentation. The latter means that the CBP system displays only the steps relevant to the operating mode, plant status, and the task at hand. These improvements can reduce the worker's workload and human error by allowing the worker to focus more on the task at hand. A team of human factors researchers at the Idaho National Laboratory studied and developed design concepts for CBPs for field workers between 2012 and 2016. The focus of the research was to present information in a procedure in a manner that leveraged the dynamic and computational capabilities of a handheld device, allowing the worker to focus more on the task at hand than on the administrative processes currently applied when conducting work in the plant. As a part of the research the team identified types of work, instructions, and scenarios where the transition to a dynamic CBP system might not be as beneficial as it would be for other types of work in the plant. In most cases the decision to use a dynamic CBP system and utilize the dynamic capabilities gained will be beneficial to the worker

  17. Assessment of Calculation Procedures for Piles in Clay Based on Static Loading Tests

    DEFF Research Database (Denmark)

    Augustesen, Anders; Andersen, Lars

    2008-01-01

    Numerous methods are available for the prediction of the axial capacity of piles in clay. In this paper, two well-known models are considered, namely the current API-RP2A (1987 to present) and the recently developed ICP method. The latter is developed by Jardine and his co-workers at Imperial College in London. The calculation procedures are assessed based on an established database of static loading tests. To make a consistent evaluation of the design methods, corrections related to undrained shear strength and time between pile driving and testing have been employed. The study indicates that the interpretation of the field tests is of paramount importance, both with regard to the soil profile and the loading conditions. Based on analyses of 253 static pile loading tests distributed on 111 sites, API-RP2A provides the better description of the data. However, it should be emphasised that some input...

  18. Procedures, analysis, and comparison of groundwater velocity measurement methods for unconfined aquifers

    International Nuclear Information System (INIS)

    Kearl, P.M.; Dexter, J.J.; Price, J.E.

    1988-09-01

    Six methods for determining the average linear velocity of groundwater were tested at two separate field sites. The methods tested include bail tests, pumping tests, wave propagation, tracer tests, the Geoflo Meter®, and borehole dilution. This report presents procedures for performing field tests and compares the results of each method on the basis of application, cost, and accuracy. Comparisons of methods to determine the groundwater velocity at two field sites show certain methods yield similar results while other methods measure significantly different values. The literature clearly supports the reliability of pumping tests for determining hydraulic conductivity. Results of this investigation support this finding. Pumping tests, however, are limited because they measure an average hydraulic conductivity which is only representative of the aquifer within the radius of influence. Bail tests are easy and inexpensive to perform. If the tests are conducted on the majority of wells at a hazardous waste site, then the heterogeneity of the site aquifer can be assessed. However, comparisons of bail-test results with pumping-test and tracer-test results indicate that the accuracy of the method is questionable. Consequently, the principal recommendation of this investigation, based on cost and reliability of the groundwater velocity measurement methods, is that bail tests should be performed on all or a majority of monitoring wells at a site to determine the "relative" hydraulic conductivities.
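
All six methods target the same quantity, the average linear (seepage) velocity, which for a porous medium follows from Darcy's law as v = K·i/n_e, with hydraulic conductivity K, hydraulic gradient i, and effective porosity n_e. The values below are illustrative only.

```python
def average_linear_velocity(K, gradient, effective_porosity):
    """Average linear (seepage) velocity v = K * i / n_e.

    K is the hydraulic conductivity (e.g. m/s), gradient the dimensionless
    hydraulic gradient, effective_porosity the dimensionless n_e.
    """
    return K * gradient / effective_porosity

# Illustrative values: K = 1e-4 m/s, i = 0.005, n_e = 0.25.
v = average_linear_velocity(1e-4, 0.005, 0.25)   # m/s
```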

  19. Automatic first-arrival picking based on extended super-virtual interferometry with quality control procedure

    Science.gov (United States)

    An, Shengpei; Hu, Tianyue; Liu, Yimou; Peng, Gengxin; Liang, Xianghao

    2017-12-01

    Static correction is a crucial step of seismic data processing for onshore plays, which frequently have complex near-surface conditions. The effectiveness of the static correction depends on an accurate determination of first-arrival traveltimes. However, it is difficult to accurately auto-pick the first arrivals for data with low signal-to-noise ratios (SNR), especially for those measured in areas with a complex near-surface. The technique of super-virtual interferometry (SVI) has the potential to enhance the SNR of first arrivals. In this paper, we develop the extended SVI with (1) the application of reverse correlation to improve the capability of SNR enhancement at near offsets, and (2) the usage of the multi-domain method to partially overcome the limitation of the current method given insufficient available source-receiver combinations. Compared to the standard SVI, the SNR enhancement of the extended SVI can be up to 40%. In addition, we propose a quality control procedure based on the statistical characteristics of multichannel recordings of first arrivals. It can auto-correct mispicks, which might be spurious events generated by the SVI. This procedure is very robust, highly automatic, and able to accommodate large data sets in batches. Finally, we develop an automatic first-arrival picking method that combines the extended SVI and the quality control procedure. Both the synthetic and the field data examples demonstrate that the proposed method is able to accurately auto-pick first arrivals in seismic traces with low SNR. The quality of the stacked seismic sections obtained from this method is much better than that obtained from an auto-picking method commonly employed by commercial software.
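
A generic statistics-based quality control pass over picks (not the authors' exact procedure) might snap any pick that deviates too far from the median of the other traces, assuming moveout has already been removed so the arrivals should be roughly flat. The picks and tolerance below are invented.

```python
from statistics import median

def qc_picks(picks, tolerance=0.1):
    """Replace picks far from the median of the other traces (seconds)."""
    fixed = list(picks)
    for i, t in enumerate(picks):
        others = picks[:i] + picks[i + 1:]
        m = median(others)
        if abs(t - m) > tolerance:   # treat as a mispick (spurious event)
            fixed[i] = m
    return fixed

# Trace 2 carries a spurious event picked roughly 0.3 s early.
picks = [1.00, 1.02, 0.74, 1.05, 1.07]
cleaned = qc_picks(picks)
```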

  20. Selective Sequential Zero-Base Budgeting Procedures Based on Total Factor Productivity Indicators

    OpenAIRE

    A. Ishikawa; E. F. Sudit

    1981-01-01

    The authors' purpose in this paper is to develop productivity-based sequential budgeting procedures designed to expedite identification of major problem areas in budgetary performance, as well as to reduce the costs associated with comprehensive zero-base analyses. The concept of total factor productivity is reviewed and its relations to ordinary and zero-based budgeting are discussed in detail. An outline for a selective sequential analysis based on monitoring of three key indicators of (a) i...
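
A minimal sketch of a total factor productivity indicator, output over a cost-weighted aggregate of inputs; all figures and weights are made up, and the screening rule is only illustrative of how such an indicator could prioritize zero-base reviews.

```python
def total_factor_productivity(output, inputs, weights):
    """TFP = output / sum_i (w_i * x_i), with cost-share weights w_i."""
    aggregate = sum(w * x for w, x in zip(weights, inputs))
    return output / aggregate

# Inputs: labour hours, capital, materials; weights reflect unit costs.
tfp_before = total_factor_productivity(1200.0, [100, 50, 25], [4.0, 8.0, 16.0])
tfp_after = total_factor_productivity(1200.0, [90, 50, 25], [4.0, 8.0, 16.0])
# A falling TFP would flag the unit for a deeper zero-base analysis;
# here productivity improved, so the unit could be reviewed later.
improved = tfp_after > tfp_before
```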

  1. Constructing an exposure chart: step by step (based on standard procedures)

    International Nuclear Information System (INIS)

    David, Jocelyn L; Cansino, Percedita T.; Taguibao, Angileo P.

    2000-01-01

    An exposure chart is very important in conducting radiographic inspection of materials. By using an accurate exposure chart, an inspector can avoid trial-and-error determination of the correct exposure time for a specimen, thereby producing a radiograph with an acceptable density based on a standard. The chart gives the following information: x-ray machine model and brand, distance of the x-ray tube from the film, type and thickness of intensifying screens, film type, radiograph density, and film processing conditions. Methods for preparing an exposure chart are available in existing radiographic testing manuals. These methods are presented here as step-by-step procedures covering the actual laboratory set-up, data gathering, computations, and transformation of the derived data into a characteristic curve and exposure chart.
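
One relation used when constructing and applying such charts is the inverse square law: at fixed film density, the required exposure scales with the square of the source-to-film distance. This is a standard radiographic relation; the numbers below are illustrative.

```python
def adjusted_exposure(exposure_mas, old_distance, new_distance):
    """Rescale exposure (mA*s) for a new source-to-film distance.

    Inverse square law: E2 = E1 * (d2 / d1) ** 2, distances in any
    consistent unit.
    """
    return exposure_mas * (new_distance / old_distance) ** 2

# 10 mA*s at 700 mm becomes about 20.4 mA*s at 1000 mm.
new_mas = adjusted_exposure(10.0, 700.0, 1000.0)
```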

  2. 40 CFR 60.50Da - Compliance determination procedures and methods.

    Science.gov (United States)

    2010-07-01

    ... probe and filter holder heating system in the sampling train may be set to provide an average gas... correction factor, integrated or grab sampling and analysis procedures of Method 3B of appendix A of this... fuel oil, etc.), coal pulverizers, and bottom and fly ash interactions. This determination is optional...

  3. Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform

    International Nuclear Information System (INIS)

    Brau-Avila, A; Valenzuela-Galvan, M; Herrera-Jimenez, V M; Santolaria, J; Aguilar, J J; Acero, R

    2017-01-01

    The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones still persists. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has considerably increased in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs. (paper)

  4. Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform

    Science.gov (United States)

    Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.

    2017-03-01

    The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones still persists. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has considerably increased in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.

  5. Coordinate transformation based cryo-correlative methods for electron tomography and focused ion beam milling

    International Nuclear Information System (INIS)

    Fukuda, Yoshiyuki; Schrod, Nikolas; Schaffer, Miroslava; Feng, Li Rebekah; Baumeister, Wolfgang; Lucic, Vladan

    2014-01-01

    Correlative microscopy allows imaging of the same feature over multiple length scales, combining light microscopy with high resolution information provided by electron microscopy. We demonstrate two procedures for coordinate transformation based correlative microscopy of vitrified biological samples applicable to different imaging modes. The first procedure aims at navigating cryo-electron tomography to cellular regions identified by fluorescent labels. The second procedure, allowing navigation of focused ion beam milling to fluorescently labeled molecules, is based on the introduction of an intermediate scanning electron microscopy imaging step to overcome the large difference between cryo-light microscopy and focused ion beam imaging modes. These methods make it possible to image fluorescently labeled macromolecular complexes in their natural environments by cryo-electron tomography, while minimizing exposure to the electron beam during the search for features of interest. - Highlights: • Correlative light microscopy and focused ion beam milling of vitrified samples. • Coordinate transformation based cryo-correlative method. • Improved correlative light microscopy and cryo-electron tomography
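
The coordinate-transformation step can be sketched as estimating a 2D similarity transform (rotation, uniform scale, translation) from fiducial markers visible in both imaging modes, then mapping a fluorescently labeled spot into the other mode's frame. Real workflows use more markers and a least-squares fit; the coordinates here are invented.

```python
def similarity_from_pairs(src, dst):
    """Solve w = a*z + b in complex form from two (x, y) point pairs."""
    z1, z2 = complex(*src[0]), complex(*src[1])
    w1, w2 = complex(*dst[0]), complex(*dst[1])
    a = (w2 - w1) / (z2 - z1)      # encodes rotation and uniform scale
    b = w1 - a * z1                # translation
    def apply(p):
        w = a * complex(*p) + b
        return (w.real, w.imag)
    return apply

# Fiducials located in the light-microscopy frame (src) and the
# FIB/SEM stage frame (dst): a 90-degree rotation plus a shift.
transform = similarity_from_pairs(src=[(0, 0), (10, 0)],
                                  dst=[(5, 5), (5, 15)])
spot = transform((5, 0))           # a labeled spot between the fiducials
```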

  6. Evaluation of Flight Deck-Based Interval Management Crew Procedure Feasibility

    Science.gov (United States)

    Wilson, Sara R.; Murdoch, Jennifer L.; Hubbs, Clay E.; Swieringa, Kurt A.

    2013-01-01

    Air traffic demand is predicted to increase over the next 20 years, creating a need for new technologies and procedures to support this growth in a safe and efficient manner. The National Aeronautics and Space Administration's (NASA) Air Traffic Management Technology Demonstration - 1 (ATD-1) will operationally demonstrate the feasibility of efficient arrival operations combining ground-based and airborne NASA technologies. The integration of these technologies will increase throughput, reduce delay, conserve fuel, and minimize environmental impacts. The ground-based tools include Traffic Management Advisor with Terminal Metering for precise time-based scheduling and Controller Managed Spacing decision support tools for better managing aircraft delay with speed control. The core airborne technology in ATD-1 is Flight deck-based Interval Management (FIM). FIM tools provide pilots with speed commands calculated using information from Automatic Dependent Surveillance - Broadcast. The precise merging and spacing enabled by FIM avionics and flight crew procedures will reduce excess spacing buffers and result in higher terminal throughput. This paper describes a human-in-the-loop experiment designed to assess the acceptability and feasibility of the ATD-1 procedures used in a voice communications environment. This experiment utilized the ATD-1 integrated system of ground-based and airborne technologies. Pilot participants flew a high-fidelity fixed base simulator equipped with an airborne spacing algorithm and a FIM crew interface. Experiment scenarios involved multiple air traffic flows into the Dallas-Fort Worth Terminal Radar Control airspace. Results indicate that the proposed procedures were feasible for use by flight crews in a voice communications environment. The delivery accuracy at the achieve-by point was within +/- five seconds and the delivery precision was less than five seconds. Furthermore, FIM speed commands occurred at a rate of less than one per minute.

  7. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    Science.gov (United States)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

    This article proposes a metamodel-based inverse method for material parameter identification and applies it to an elastic-plastic damage model. The elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed to overcome the high computational cost of the conventional inverse method: a Kriging metamodel is constructed from an experimental design to model the relationship between the material parameters and the objective-function values of the inverse problem, and the optimization procedure is then executed on the metamodel. Application of the presented material model and the proposed identification method to the standard A 2017-T4 tensile test shows that the elastic-plastic damage model adequately describes the material's mechanical behaviour and that the metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
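The workflow described above (design of experiments, surrogate fit, optimization on the surrogate) can be sketched in a few lines. The sketch below is illustrative only: a simple least-squares quadratic surrogate stands in for the Kriging metamodel, and a cheap one-parameter analytic function stands in for the expensive finite-element simulation; all names and numbers are assumptions, not the authors' code.

```python
# Metamodel-based inverse identification, minimal sketch.

def expensive_simulation(theta):
    """Stand-in for a costly simulation returning a predicted response."""
    return 2.0 * theta + 0.5 * theta ** 2

TRUE_THETA = 1.3
measured = expensive_simulation(TRUE_THETA)  # pretend experimental data

def objective(theta):
    """Discrepancy between simulated and measured response."""
    return (expensive_simulation(theta) - measured) ** 2

def fit_quadratic(xs, ys):
    """Least-squares fit y ~ a + b*x + c*x^2 via the normal equations,
    solved by 3x3 Gaussian elimination with partial pivoting."""
    s = [sum(x ** k for x in xs) for k in range(5)]
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coeffs[r] = (m[r][3] - sum(m[r][c] * coeffs[c]
                                   for c in range(r + 1, 3))) / m[r][r]
    return coeffs  # a, b, c

# 1) Design of experiments: evaluate the objective at a few design points.
design = [0.0, 0.5, 1.0, 1.5, 2.0]
values = [objective(x) for x in design]

# 2) Build the surrogate of the objective function.
a, b, c = fit_quadratic(design, values)

# 3) Optimize on the surrogate (closed form for a convex quadratic).
theta_star = -b / (2.0 * c)
```

In a real application the surrogate would be refined iteratively by adding new simulation runs near the current optimum; the sketch stops after one pass.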

  8. Hybrid method based on embedded coupled simulation of vortex particles in grid based solution

    Science.gov (United States)

    Kornev, Nikolai

    2017-09-01

    The paper presents a novel hybrid approach developed to improve the resolution of concentrated vortices in computational fluid mechanics. The method combines a grid-based method with the grid-free computational vortex method (CVM). Large-scale flow structures are simulated on the grid, whereas concentrated structures are modelled using CVM. This combination strengthens the advantages of both methods while diminishing their disadvantages. The procedure for separating small concentrated vortices from the large-scale ones is based on the LES filtering idea. The flow dynamics is governed by two coupled transport equations that take the two-way interaction between large and fine structures into account. The fine structures are mapped back to the grid if their size grows due to diffusion. Algorithmic aspects of the hybrid method are discussed, and the advantages of the new approach are illustrated on simple two-dimensional canonical flows containing concentrated vortices.

  9. A symptom based decision tree approach to boiling water reactor emergency operating procedures

    International Nuclear Information System (INIS)

    Knobel, R.C.

    1984-01-01

    This paper describes a decision tree approach to the development of BWR Emergency Operating Procedures for use by operators during emergencies. The approach utilizes the symptom-based Emergency Procedure Guidelines approved for implementation by the USNRC. The paper includes a discussion of the relative merits of the event-based Emergency Operating Procedures currently in use at US BWR plants. The body of the paper is devoted to the decision tree approach to Emergency Operating Procedures soon to be implemented at two United States boiling water reactor plants, why this approach solves many of the problems with procedures identified in the post-accident reviews of Three Mile Island procedures, and why only now is this approach both desirable and feasible. The paper discusses how nuclear plant simulators were involved in the development of the Emergency Operating Procedure decision trees, and in the verification and validation of these procedures. (orig./HP)

  10. Evaluation of the performance of MP4-based procedures for a wide range of thermochemical and kinetic properties

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Li-Juan; Wan, Wenchao; Karton, Amir, E-mail: amir.karton@uwa.edu.au

    2016-11-30

    We evaluate the performance of standard and modified MPn procedures for a wide set of thermochemical and kinetic properties, including atomization energies, structural isomerization energies, conformational energies, and reaction barrier heights. The reference data are obtained at the CCSD(T)/CBS level by means of the Wn thermochemical protocols. We find that none of the MPn-based procedures show acceptable performance for the challenging W4-11 and BH76 databases. For the other thermochemical/kinetic databases, the MP2.5 and MP3.5 procedures provide the most attractive accuracy-to-computational cost ratios. The MP2.5 procedure results in a weighted-total-root-mean-square deviation (WTRMSD) of 3.4 kJ/mol, whilst the computationally more expensive MP3.5 procedure results in a WTRMSD of 1.9 kJ/mol (the same WTRMSD obtained for the CCSD(T) method in conjunction with a triple-zeta basis set). We also assess the performance of the computationally economical CCSD(T)/CBS(MP2) method, which provides the best overall performance for all the considered databases, including W4-11 and BH76.
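For orientation, the "half-integer" MPn.5 procedures evaluated above are defined by averaging the total energies of two consecutive MPn orders (equivalently, scaling the highest-order correlation contribution by one half). A minimal sketch with made-up energies — the hartree values below are illustrative placeholders, not data from the paper:

```python
def mp_half_order(e_lower, e_higher):
    """MPn.5 energy as the average of two consecutive MPn total energies,
    e.g. E(MP2.5) = [E(MP2) + E(MP3)] / 2."""
    return 0.5 * (e_lower + e_higher)

# Illustrative (invented) total energies in hartree:
e_mp2, e_mp3, e_mp4 = -76.230, -76.238, -76.242

e_mp2_5 = mp_half_order(e_mp2, e_mp3)   # average of MP2 and MP3
e_mp3_5 = mp_half_order(e_mp3, e_mp4)   # average of MP3 and MP4
```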

  11. Application of Hplc-Pda Method Using Two Different Extraction Procedures for the Determination of Alkylresorcinols in Cereals

    Directory of Open Access Journals (Sweden)

    Gailāne Natālija

    2015-09-01

    Cereals, especially barley, are an important source of vitamins, minerals, dietary fibre and various phytochemicals, such as alkylresorcinols (ARs). Cereal ARs are a group of phenolic lipids located in the outer parts of the grain, particularly in rye and wheat, but not found in refined flour or refined cereal products. This study focuses on the comparison of different extraction procedures applied for the determination of the content of ARs (C15:0 - C23:0) in grain of Latvian barley genotypes. The content of ARs in 1 rye and 16 barley samples grown with different amounts of fertiliser was determined by an in-house high-performance liquid chromatography method with photodiode array detection (HPLC-PDA). Two extraction methods were compared: accelerated Soxhlet extraction and 24-hour extraction. Aside from validation of the extraction procedures, validation parameters for the HPLC-PDA quantitation method are provided. The coefficients of variation for repeatability and intermediate precision were < 9% and < 3%, respectively. The content of ARs determined with the HPLC-PDA method in conjunction with accelerated Soxhlet extraction was up to 1.5 times higher than with 24-hour extraction. AR content varied from 2.11 ± 0.04 to 3.80 ± 0.10 mg·100 g⁻¹ for 24-hour extraction and from 2.66 ± 0.06 to 5.70 ± 0.20 mg·100 g⁻¹ for accelerated Soxhlet extraction, indicating the increased efficiency of this procedure in the analysis of ARs.

  12. Stepwise Procedure for Development and Validation of a Multipesticide Method

    Energy Technology Data Exchange (ETDEWEB)

    Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)

    2009-07-15

    The stepwise procedure for the development and validation of so-called multi-pesticide methods is described. Principles, preliminary actions, criteria for the selection of chromatographic separation and detection, and performance verification of multi-pesticide methods are outlined. Long-term repeatability and reproducibility, as well as the necessity of documenting laboratory work, are also highlighted. Appendix I describes in detail the calculation of calibration parameters, whereas Appendix II focuses on calculating the significance of differences between concentrations obtained on two different separation columns. (author)

  13. Considering the normative, systemic and procedural dimensions in indicator-based sustainability assessments in agriculture

    International Nuclear Information System (INIS)

    Binder, Claudia R.; Feola, Giuseppe; Steinberger, Julia K.

    2010-01-01

    This paper develops a framework for evaluating sustainability assessment methods by separately analyzing their normative, systemic and procedural dimensions as suggested by Wiek and Binder [Wiek, A, Binder, C. Solution spaces for decision-making - a sustainability assessment tool for city-regions. Environ Impact Asses Rev 2005, 25: 589-608.]. The framework is then used to characterize indicator-based sustainability assessment methods in agriculture. For a long time, sustainability assessment in agriculture has focused mostly on environmental and technical issues, thus neglecting the economic and, above all, the social aspects of sustainability, the multi-functionality of agriculture and the applicability of the results. In response to these shortcomings, several integrative sustainability assessment methods have been developed for the agricultural sector. This paper reviews seven of these that represent the diversity of tools developed in this area. The reviewed assessment methods can be categorized into three types: (i) top-down farm assessment methods; (ii) top-down regional assessment methods with some stakeholder participation; (iii) bottom-up, integrated participatory or transdisciplinary methods with stakeholder participation throughout the process. The results readily show the trade-offs encountered when selecting an assessment method. A clear, standardized, top-down procedure allows for potentially benchmarking and comparing results across regions and sites. However, this comes at the cost of system specificity. As the top-down methods often have low stakeholder involvement, the application and implementation of the results might be difficult. Our analysis suggests that to include the aspects mentioned above in agricultural sustainability assessment, the bottom-up, integrated participatory or transdisciplinary methods are the most suitable ones.

  14. Are Self-study Procedural Teaching Methods Effective? A Pilot Study of a Family Medicine Residency Program.

    Science.gov (United States)

    Deffenbacher, Brandy; Langner, Shannon; Khodaee, Morteza

    2017-11-01

    A family medicine residency is a unique training environment in which residents are exposed to care in multiple settings and across all ages. Procedures are an integral part of family medicine practice, and family medicine residency (FMR) programs are tasked with teaching these skills at an intensity and frequency that allows residents to achieve competency. In an environment limited by work-hour restrictions, self-study teaching methods are one way to ensure all residents receive the fundamental knowledge of how to perform procedures. We developed and evaluated the efficacy of a self-study procedure teaching method and a procedure evaluation checklist. A self-study teaching intervention was created, consisting of instructional articles and videos on three procedures. To assess the efficacy of the intervention and the competency of the residents, pre- and postintervention procedure performance sessions were completed. These sessions were reviewed and scored using a standardized procedure performance checklist. All 24 residents participated in the study. Overall, resident procedure knowledge increased on two of the three procedures studied, and the ability to perform the procedures according to an expert-validated checklist improved significantly on all three. A self-study intervention is a simple but effective way to improve procedure training in a way that fits the complex scheduling needs of a residency training program. In addition, this study demonstrates that procedure performance checklists are a simple and reliable way to assess resident procedure performance skills in a residency setting.

  15. Yet Another Method for Image Segmentation based on Histograms and Heuristics

    Directory of Open Access Journals (Sweden)

    Horia-Nicolai L. Teodorescu

    2012-07-01

    We introduce a method for image segmentation that requires little computation, yet provides results comparable to those of other methods. While the proposed method resembles known histogram-based methods, it differs in its use of the gray-level distribution. When several heuristic rules are added to the basic procedure, the method produces results that, in some cases, may outperform those of the known methods. The paper reports preliminary results; more details on the method, improvements, and further results will be presented in a future paper.
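As a concrete example of segmentation driven purely by the gray-level histogram, the classic Otsu threshold (a well-known histogram method, not the authors' heuristic, which the abstract does not specify) can be computed as follows:

```python
def otsu_threshold(pixels, levels=256):
    """Histogram-based threshold maximizing between-class variance (Otsu).
    Class 0 holds gray levels <= t, class 1 holds the rest."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0 = sum(hist[: t + 1])          # pixel count in class 0
        w1 = total - w0                  # pixel count in class 1
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t + 1)) / w0
        mu1 = sum(i * hist[i] for i in range(t + 1, levels)) / w1
        between = (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# A strongly bimodal toy "image": dark and bright pixel populations.
image = [10] * 50 + [200] * 50
t = otsu_threshold(image)   # lands between the two modes
```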

  16. Optimization of instrumental neutron activation analysis method by means of 2^k experimental design technique aiming the validation of analytical procedures

    International Nuclear Information System (INIS)

    Petroni, Robson; Moreira, Edson G.

    2013-01-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials, with the aim of validating the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). A 2^k experimental design was applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample-to-detector distance. The multi-element comparator standard concentration, sample mass and irradiation time were kept constant. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods to be adopted in the validation procedure of INAA methods at the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
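A 2^k design evaluates every combination of k two-level factors and estimates each main effect and interaction from a contrast of the responses. The mechanics can be sketched generically (the toy response function below is invented for illustration; it is not the INAA data):

```python
import itertools

def full_factorial(k):
    """All 2^k runs of coded factor levels (-1, +1)."""
    return list(itertools.product((-1, +1), repeat=k))

def _prod(vals):
    out = 1
    for v in vals:
        out *= v
    return out

def effect(design, response, columns):
    """Effect estimate for a main effect (one column index) or an
    interaction (several columns): contrast divided by 2^(k-1)."""
    k = len(design[0])
    contrast = sum(y * _prod(run[c] for c in columns)
                   for run, y in zip(design, response))
    return contrast / (2 ** (k - 1))

# Toy response with known coefficients: y = 10 + 3A - 2B + 1.5AB (C inert).
design = full_factorial(3)
y = [10 + 3 * a - 2 * b + 1.5 * a * b for a, b, c in design]

main_a = effect(design, y, (0,))      # twice the A coefficient
main_b = effect(design, y, (1,))      # twice the B coefficient
main_c = effect(design, y, (2,))      # zero: C does not act
inter_ab = effect(design, y, (0, 1))  # twice the AB coefficient
```

With coded ±1 levels, an estimated effect equals twice the underlying regression coefficient, which the assertions below recover exactly for this noise-free toy model.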

  17. Development of a modified cortisol extraction procedure for intermediately sized fish not amenable to whole-body or plasma extraction methods.

    Science.gov (United States)

    Guest, Taylor W; Blaylock, Reginald B; Evans, Andrew N

    2016-02-01

    The corticosteroid hormone cortisol is the central mediator of the teleost stress response. Therefore, the accurate quantification of cortisol in teleost fishes is a vital tool for addressing fundamental questions about an animal's physiological response to environmental stressors. Conventional steroid extraction methods using plasma or whole-body homogenates, however, are inefficient within an intermediate size range of fish that are too small for phlebotomy and too large for whole-body steroid extractions. To assess the potential effects of hatchery-induced stress on survival of fingerling hatchery-reared Spotted Seatrout (Cynoscion nebulosus), we developed a novel extraction procedure for measuring cortisol in intermediately sized fish (50-100 mm in length) that are not amenable to standard cortisol extraction methods. By excising a standardized portion of the caudal peduncle, this tissue extraction procedure allows for a small portion of a larger fish to be sampled for cortisol, while minimizing the potential interference from lipids that may be extracted using whole-body homogenization procedures. Assay precision was comparable to published plasma and whole-body extraction procedures, and cortisol quantification over a wide range of sample dilutions displayed parallelism versus assay standards. Intra-assay %CV was 8.54%, and average recovery of spiked samples was 102%. Also, tissue cortisol levels quantified using this method increase 30 min after handling stress and are significantly correlated with blood values. We conclude that this modified cortisol extraction procedure provides an excellent alternative to plasma and whole-body extraction procedures for intermediately sized fish, and will facilitate the efficient assessment of cortisol in a variety of situations ranging from basic laboratory research to industrial and field-based environmental health applications.

  18. Review of cause-based decision tree approach for the development of domestic standard human reliability analysis procedure in low power/shutdown operation probabilistic safety assessment

    International Nuclear Information System (INIS)

    Kang, D. I.; Jung, W. D.

    2003-01-01

    We review the Cause-Based Decision Tree (CBDT) approach to decide whether to incorporate it into the development of a domestic standard Human Reliability Analysis (HRA) procedure for low power/shutdown operation Probabilistic Safety Assessment (PSA). In this paper, we introduce the CBDT approach, quantify human errors using it, and identify its merits and demerits in comparison with the previously used THERP. The review shows that it is difficult to incorporate the CBDT method into the development of the domestic standard HRA procedure for low power/shutdown PSA because the CBDT method, like THERP, requires subjective judgment by the HRA analyst. However, it is expected that incorporating the CBDT method only for comparison of quantitative HRA results will relieve the burden of developing a detailed HRA procedure and will help maintain consistent quantitative HRA results

  19. Evaluation of Patient Radiation Dose during Cardiac Interventional Procedures: What Is the Most Effective Method?

    International Nuclear Information System (INIS)

    Chida, K.; Saito, H.; Ishibashi, T.; Zuguchi, M.; Kagaya, Y.; Takahashi, S.

    2009-01-01

    Cardiac interventional radiology carries lower risks than surgical procedures, despite the fact that radiation doses from cardiac intervention procedures are the highest of any commonly performed general X-ray examination. Maximum radiation skin doses (MSDs) should be determined to avoid radiation-associated skin injuries in patients undergoing cardiac intervention procedures. However, real-time evaluation of MSD is unavailable for many cardiac intervention procedures. This review describes methods of determining MSD during cardiac intervention procedures. Currently, real-time measurement of MSD is not feasible in most cardiac intervention procedures. Thus, we recommend that physicians record the patient's total entrance skin dose, such as the dose at the interventional reference point when it can be monitored, in order to estimate MSD in intervention procedures

  20. New procedure for criticality search using coarse mesh nodal methods

    International Nuclear Information System (INIS)

    Pereira, Wanderson F.; Silva, Fernando C. da; Martinez, Aquilino S.

    2011-01-01

    The primary goal of coarse mesh nodal methods is to calculate the neutron flux inside the reactor core; many computer systems use a specific form of this calculation, called the nodal method. In classical computing systems, the criticality search is performed after complete convergence of the iterative process that calculates the neutron flux. In this paper, we propose a new method in which the criticality calculation is carried out during the iterative flux calculation itself. As a result, the processing time for calculating the neutron flux was reduced by half compared with the procedure developed by the Nuclear Engineering Program of COPPE/UFRJ (PEN/COPPE/UFRJ). (author)
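For readers unfamiliar with criticality searches: k-effective is the dominant eigenvalue of the balance equation A·φ = (1/k)·F·φ (neutron losses versus fission production), conventionally found by power iteration with k updated at every outer sweep of the flux calculation. A generic two-node, one-group sketch (illustrative cross-section numbers, not the authors' nodal scheme):

```python
def solve2(a, b):
    """Solve a 2x2 linear system a x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

# Loss operator A (absorption + leakage coupling) and fission yields nu*Sigma_f,
# one energy group, two spatial nodes; the numbers are invented.
A = [[0.12, -0.01],
     [-0.01, 0.10]]
nu_sigma_f = [0.11, 0.09]

phi = [1.0, 1.0]   # initial flux guess
k = 1.0            # initial k-effective guess
production = sum(nu_sigma_f[i] * phi[i] for i in range(2))
for _ in range(200):
    # Fission source scaled by the current k estimate.
    source = [nu_sigma_f[i] * phi[i] / k for i in range(2)]
    # Flux update: solve A phi_new = source.
    phi = solve2(A, source)
    # k update from the ratio of successive fission productions.
    new_production = sum(nu_sigma_f[i] * phi[i] for i in range(2))
    k *= new_production / production
    production = new_production
```

At convergence φ and k satisfy A·φ = (1/k)·F·φ to machine precision, which is what the assertions below check.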

  1. New procedure for criticality search using coarse mesh nodal methods

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Wanderson F.; Silva, Fernando C. da; Martinez, Aquilino S., E-mail: wneto@con.ufrj.b, E-mail: fernando@con.ufrj.b, E-mail: Aquilino@lmp.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    The primary goal of coarse mesh nodal methods is to calculate the neutron flux inside the reactor core; many computer systems use a specific form of this calculation, called the nodal method. In classical computing systems, the criticality search is performed after complete convergence of the iterative process that calculates the neutron flux. In this paper, we propose a new method in which the criticality calculation is carried out during the iterative flux calculation itself. As a result, the processing time for calculating the neutron flux was reduced by half compared with the procedure developed by the Nuclear Engineering Program of COPPE/UFRJ (PEN/COPPE/UFRJ). (author)

  2. A Neural Networks Based Operation Guidance System for Procedure Presentation and Validation

    International Nuclear Information System (INIS)

    Seung, Kun Mo; Lee, Seung Jun; Seong, Poong Hyun

    2006-01-01

    In this paper, a neural-network-based operator support system is proposed to reduce operators' errors in abnormal situations in nuclear power plants (NPPs). Operators must respond to many complicated situations with suitable, well-regulated actions. In order to regulate and validate operators' actions, it is necessary to develop an operator support system that includes computer-based procedures with functions for operation validation. Many computerized procedure systems (CPSs) have been developed recently. Focusing on human-machine interface (HMI) design and the computerization of procedures, most CPSs use various methodologies to enhance the system's convenience, reliability and accessibility. Rather than only displaying procedures, the proposed system integrates a simple CPS and an operation validation system (OVS), using an artificial neural network (ANN) for operational permission and quantitative evaluation

  3. An XML Representation for Crew Procedures

    Science.gov (United States)

    Simpson, Richard C.

    2005-01-01

    NASA ensures safe operation of complex systems through the use of formally documented procedures, which encode the operational knowledge of the system as derived from system experts. Crew members use procedure documentation on the ground for training purposes and on board the space shuttle and space station to guide their activities. Investigators at JSC are developing a new representation for procedures that is content-based (as opposed to display-based). Instead of specifying how a procedure should look on the printed page, the content-based representation identifies the components of a procedure and (more importantly) how the components are related (e.g., how the activities within a procedure are sequenced; what resources need to be available for each activity). This approach allows different sets of rules to be created for displaying procedures on a computer screen, on a hand-held personal digital assistant (PDA), verbally, or on a printed page, and also allows intelligent reasoning processes to automatically interpret and use procedure definitions. During his NASA fellowship, Dr. Simpson examined how various industries represent procedures (also called business processes or workflows) in areas such as manufacturing, accounting, shipping, and customer service. A useful method for designing and evaluating workflow representation languages is to determine their ability to encode various workflow patterns, which depict abstract relationships between the components of a procedure, removed from the context of a specific procedure or industry. Investigators have used this type of analysis to evaluate how well suited existing workflow representation languages are for various industries, based on the workflow patterns that commonly arise across industry-specific procedures.
Based on this type of analysis, it is already clear that existing workflow representations capture discrete flow of control (i.e., when one activity should start and stop based on when other
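The display-based versus content-based distinction can be made concrete with a toy procedure encoding. The element and attribute names below are invented for illustration (the abstract does not give the actual JSC representation); the point is that sequencing and resource requirements are explicit data, so any renderer or reasoner can consume them:

```python
import xml.etree.ElementTree as ET

# A content-based procedure: what the steps are and how they relate,
# with no assumptions about how they will be displayed.
PROCEDURE_XML = """
<procedure id="pwr-up">
  <step id="s1">
    <action>Verify circuit breaker CB-4 closed</action>
    <requires resource="power-bus-A"/>
  </step>
  <step id="s2" after="s1">
    <action>Switch avionics to ON</action>
    <requires resource="power-bus-A"/>
  </step>
</procedure>
"""

root = ET.fromstring(PROCEDURE_XML)
steps = root.findall("step")
# Sequencing and resource needs are recovered from the data itself:
order = {s.get("id"): s.get("after") for s in steps}
resources = {s.get("id"): [r.get("resource") for r in s.findall("requires")]
             for s in steps}
```

A screen renderer, a voice interface, or a planner could each traverse the same `order` and `resources` structures and apply its own presentation rules.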

  4. Standardized methods for photography in procedural dermatology using simple equipment.

    Science.gov (United States)

    Hexsel, Doris; Hexsel, Camile L; Dal'Forno, Taciana; Schilling de Souza, Juliana; Silva, Aline F; Siega, Carolina

    2017-04-01

    Photography is an important tool in dermatology. Reproducing the settings of "before" photographs after interventions allows more accurate evaluation of treatment outcomes. In this article, we describe standardized methods and tips for obtaining photographs using common equipment, both for clinical practice and for research in procedural dermatology. Standards for the studio, cameras, photographer, patients, and framing are presented. © 2017 The International Society of Dermatology.

  5. Validity of the Draw-a-Person: Screening Procedure for Emotional Disturbance (DAP:SPED) in Strengths-Based Assessment

    Science.gov (United States)

    Matto, Holly C.; Naglieri, Jack A.; Clausen, Cinny

    2005-01-01

    Objective: This is the first validity study to date to examine the relationship between the Draw-A-Person: Screening Procedure for Emotional Disturbance (DAP:SPED) and strengths-based emotional and behavioral measures. The incremental predictive validity of the DAP:SPED relative to the Behavioral and Emotional Rating Scale was examined. Method:…

  6. The Kjeldahl method as a primary reference procedure for total protein in certified reference materials used in clinical chemistry. I. A review of Kjeldahl methods adopted by laboratory medicine.

    Science.gov (United States)

    Chromý, Vratislav; Vinklárková, Bára; Šprongl, Luděk; Bittová, Miroslava

    2015-01-01

    We found previously that albumin-calibrated total protein in certified reference materials causes unacceptable positive bias in the analysis of human sera. The simplest way to cure this defect is the use of human-based serum/plasma standards calibrated by the Kjeldahl method. Such standards, commutable with serum samples, will compensate for the bias caused by lipids and bilirubin in most human sera. To find a suitable primary reference procedure for total protein in reference materials, we reviewed Kjeldahl methods adopted by laboratory medicine. We found two methods recommended for total protein in human samples: an indirect analysis based on total Kjeldahl nitrogen corrected for its nonprotein nitrogen, and a direct analysis performed on isolated protein precipitates. The methods found will be assessed in a subsequent article.

  7. A standard curve based method for relative real time PCR data processing

    Directory of Open Access Journals (Sweden)

    Krause Andreas

    2005-03-01

    Background: Currently, real-time PCR is the most precise method by which to measure gene expression. The method generates a large amount of raw numerical data, and processing may notably influence final results. Data processing is based either on standard curves or on PCR-efficiency assessment. At the moment, the PCR-efficiency approach is preferred for relative PCR, whilst the standard curve is often used for absolute PCR; however, there are no barriers to employing standard curves for relative PCR. This article provides an implementation of the standard curve method and discusses its advantages and limitations in relative real-time PCR. Results: We designed a procedure for data processing in relative real-time PCR. The procedure completely avoids PCR-efficiency assessment, minimizes operator involvement and provides a statistical assessment of intra-assay variation. The procedure includes the following steps. (I) Noise is filtered from raw fluorescence readings by smoothing, baseline subtraction and amplitude normalization. (II) The optimal threshold is selected automatically from regression parameters of the standard curve. (III) Crossing points (CPs) are derived directly from the coordinates of points where the threshold line crosses the fluorescence plots obtained after noise filtering. (IV) The means and their variances are calculated for CPs in PCR replicates. (V) The final results are derived from the CP means, with the CP variances traced to the results by the law of error propagation. A detailed description and analysis of this data processing is provided. The limitations associated with the use of parametric statistical methods and amplitude normalization are specifically analyzed and found fit for routine laboratory practice. Different options are discussed for aggregating data obtained from multiple reference genes. Conclusion: A standard-curve-based procedure for PCR data processing has been compiled and validated. It illustrates that
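The core of any standard-curve approach reduces to a linear regression of crossing point (CP) against log10 of standard concentration, after which unknowns are read off the fitted line. A minimal generic sketch (idealized, invented dilution data with a slope of about -3.32, i.e. ~100 % amplification efficiency; not the article's procedure or data):

```python
import math

def linear_fit(xs, ys):
    """Ordinary least squares: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
    return slope, my - slope * mx

# Standard dilution series: known concentrations and their crossing points.
concentrations = [1e2, 1e4, 1e6]   # template copies per reaction (toy values)
cps = [33.36, 26.72, 20.08]        # idealized crossing points

slope, intercept = linear_fit([math.log10(c) for c in concentrations], cps)
efficiency = 10 ** (-1.0 / slope) - 1.0   # ~1.0 corresponds to 100 %

def concentration_from_cp(cp):
    """Read an unknown sample's concentration off the standard curve."""
    return 10 ** ((cp - intercept) / slope)

unknown = concentration_from_cp(23.40)    # falls between the 1e4 and 1e6 standards
```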

  8. Limitations of subjective cognitive load measures in simulation-based procedural training.

    Science.gov (United States)

    Naismith, Laura M; Cheung, Jeffrey J H; Ringsted, Charlotte; Cavalcanti, Rodrigo B

    2015-08-01

    The effective implementation of cognitive load theory (CLT) to optimise the instructional design of simulation-based training requires sensitive and reliable measures of cognitive load. This mixed-methods study assessed relationships between commonly used measures of total cognitive load and the extent to which these measures reflected participants' experiences of cognitive load in simulation-based procedural skills training. Two groups of medical residents (n = 38) completed three questionnaires after participating in simulation-based procedural skills training sessions: the Paas Cognitive Load Scale, the NASA Task Load Index (TLX), and a cognitive load component (CLC) questionnaire we developed to assess total cognitive load as the sum of intrinsic load (how complex the task is), extraneous load (how the task is presented) and germane load (how the learner processes the task for learning). We calculated Pearson's correlation coefficients to assess agreement among these instruments. Group interviews explored residents' perceptions of how the simulation sessions contributed to their total cognitive load; the interviews were audio-recorded, transcribed and subjected to qualitative content analysis. Total cognitive load scores differed significantly according to the instrument used to assess them. In particular, there was poor agreement between the Paas Scale and the TLX. Quantitative and qualitative findings supported intrinsic cognitive load as synonymous with mental effort (Paas Scale), mental demand (TLX) and task difficulty and complexity (CLC questionnaire). Additional qualitative themes relating to extraneous and germane cognitive loads were not reflected in any of the questionnaires. The Paas Scale, TLX and CLC questionnaire appear to be interchangeable as measures of intrinsic cognitive load, but not of total cognitive load. A more complete understanding of the sources of extraneous and germane cognitive loads in simulation-based training contexts is
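The agreement analysis described above reduces to Pearson's r between paired total-load scores from two instruments. For reference, a self-contained sketch (the ratings below are toy values, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy paired ratings from two instruments for five participants.
paas = [3, 5, 4, 7, 6]       # e.g. 9-point mental-effort scale
tlx = [30, 52, 41, 69, 58]   # e.g. 0-100 workload score
r = pearson_r(paas, tlx)     # near 1: strong linear agreement in this toy case
```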

  9. Procedural Personas for Player Decision Modeling and Procedural Content Generation

    DEFF Research Database (Denmark)

    Holmgård, Christoffer

    2016-01-01

    ." These methods for constructing procedural personas are then integrated with existing procedural content generation systems, acting as critics that shape the output of these systems, optimizing generated content for different personas and by extension, different kinds of players and their decision making styles......How can player models and artificially intelligent (AI) agents be useful in early-stage iterative game and simulation design? One answer may be as ways of generating synthetic play-test data, before a game or level has ever seen a player, or when the sampled amount of play test data is very low....... This thesis explores methods for creating low-complexity, easily interpretable, generative AI agents for use in game and simulation design. Based on insights from decision theory and behavioral economics, the thesis investigates how player decision making styles may be defined, operationalised, and measured...

  10. Studies of oxide-based thin-layered heterostructures by X-ray scattering methods

    Energy Technology Data Exchange (ETDEWEB)

    Durand, O. [Thales Research and Technology France, Route Departementale 128, F-91767 Palaiseau Cedex (France)]. E-mail: olivier.durand@thalesgroup.com; Rogers, D. [Nanovation SARL, 103 bis rue de Versailles 91400 Orsay (France); Universite de Technologie de Troyes, 10-12 rue Marie Curie, 10010 (France); Teherani, F. Hosseini [Nanovation SARL, 103 bis rue de Versailles 91400 Orsay (France); Andrieux, M. [LEMHE, ICMMOCNRS-UMR 8182, Universite d' Orsay, Batiment 410, 91410 Orsay (France); Modreanu, M. [Tyndall National Institute, Lee Maltings, Prospect Row, Cork (Ireland)

    2007-06-04

    Some X-ray scattering methods (X-ray reflectometry and diffractometry) dedicated to the study of thin-layered heterostructures are presented with a particular focus, for practical purposes, on the description of fast, accurate and robust techniques. The use of X-ray scattering metrology as a routine non-destructive testing method, particularly through procedures that simplify the data evaluation, is emphasized. The model-independent Fourier-inversion method applied to a reflectivity curve allows a fast determination of the individual layer thicknesses. We demonstrate the capability of this method by reporting an X-ray reflectometry study on multilayered oxide structures, even when the number of layers constituting the stack is not known a priori. A fast Fourier transform-based procedure has also been employed successfully on high-resolution X-ray diffraction profiles. A study of the reliability of the integral-breadth methods in diffraction line-broadening analysis applied to thin layers, in order to determine coherent domain sizes, is also reported. Examples from studies of oxide-based thin-layered heterostructures illustrate these methods. In particular, X-ray scattering studies performed on high-k HfO{sub 2} and SrZrO{sub 3} thin layers, a (GaAs/AlOx) waveguide, and a ZnO thin layer are reported.
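
The Fourier-inversion idea for reflectivity can be sketched numerically: Kiessig fringes from a layer of thickness d oscillate in the scattering vector q with period 2π/d, so an FFT of the oscillatory part of R(q) peaks at the layer thickness. The sketch below uses a synthetic damped-cosine fringe model; the thickness, q-grid and damping are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Synthetic single-layer fringe model: cos(q*d) damped in q.
d_true = 50.0                              # layer thickness in nm (assumed)
q = np.linspace(0.2, 5.0, 8192)            # scattering-vector grid in 1/nm
signal = np.cos(q * d_true) * np.exp(-q)   # damped Kiessig-fringe model

# FFT of the mean-subtracted oscillatory part; the dominant frequency
# (in cycles per unit q) corresponds to d / (2*pi).
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(q.size, d=(q[1] - q[0]))
d_est = 2 * np.pi * freqs[spectrum.argmax()]   # convert frequency to thickness

print(round(d_est, 1))                     # close to 50 nm, limited by FFT resolution
```

The estimate is quantised by the FFT frequency resolution, which is why a wide q-range improves the model-independent thickness determination.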

  11. X-ray computerized tomography based on the method of reciprocal projection with filtration by double differentiation. Procedure and information peculiarities

    International Nuclear Information System (INIS)

    Vajnberg, Eh.I.; Kazak, I.A.; Fajngojz, M.L.

    1985-01-01

    The results of an experimental evaluation of the procedural and informational peculiarities of the method of reciprocal projection with filtration of projections by double differentiation (RPFDD) for the monitoring of industrial products are generalized. The succession of stages in the RPFDD method permits separate optimization of the reconstruction of the high-frequency and low-frequency tomogram structure, which sharply reduces the conflict in the conventional scheme between the required increase in accuracy and an intolerable growth in computational cost. High accuracy in the evaluation of the linear attenuation factor of low-frequency structures over a wide range of densities at the last stage of the RPFDD scheme is attained despite the unconventionally small range of values at earlier stages of the computerized processing

  12. Assessment of health-care waste disposal methods using a VIKOR-based fuzzy multi-criteria decision making method

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Hu-Chen [School of Management, Hefei University of Technology, Hefei 230009 (China); Department of Industrial Engineering and Management, Tokyo Institute of Technology, Tokyo 152-8552 (Japan); Wu, Jing [Department of Public Management, Tongji University, Shanghai 200092 (China); Li, Ping, E-mail: yiwuchulp@126.com [Shanghai Pudong New Area Zhoupu Hospital, No. 135 Guanyue Road, Shanghai 201318 (China); East Hospital Affiliated to Tongji University, No. 150 Jimo Road, Shanghai 200120 (China)

    2013-12-15

    Highlights: • Propose a VIKOR-based fuzzy MCDM technique for evaluating HCW disposal methods. • Linguistic variables are used to assess the ratings and weights for the criteria. • The OWA operator is utilized to aggregate individual opinions of decision makers. • A case study is given to illustrate the procedure of the proposed framework. - Abstract: Nowadays, selection of the appropriate treatment method in health-care waste (HCW) management has become a challenging task for municipal authorities, especially in developing countries. Assessment of HCW disposal alternatives can be regarded as a complicated multi-criteria decision making (MCDM) problem which requires consideration of multiple alternative solutions and conflicting tangible and intangible criteria. The objective of this paper is to present a new MCDM technique based on fuzzy set theory and the VIKOR method for evaluating HCW disposal methods. Linguistic variables are used by decision makers to assess the ratings and weights for the established criteria. The ordered weighted averaging (OWA) operator is utilized to aggregate individual opinions of decision makers into a group assessment. The computational procedure of the proposed framework is illustrated through a case study in Shanghai, one of the largest cities of China. The HCW treatment alternatives considered in this study include “incineration”, “steam sterilization”, “microwave” and “landfill”. The results obtained using the proposed approach are analyzed in a comparative way.
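
The core VIKOR ranking step can be illustrated with a crisp numerical sketch. Note that the paper's full framework additionally uses fuzzy linguistic ratings and OWA aggregation, which are omitted here; the decision matrix and weights below are invented for illustration.

```python
import numpy as np

# Rows: alternatives (e.g. incineration, steam sterilization, microwave,
# landfill); columns: criteria scores, all benefit-type for simplicity.
F = np.array([[7.0, 5.0, 8.0],
              [8.0, 7.0, 6.0],
              [6.0, 8.0, 7.0],
              [4.0, 6.0, 5.0]])
w = np.array([0.4, 0.35, 0.25])          # criteria weights, sum to 1

f_best, f_worst = F.max(axis=0), F.min(axis=0)
# Weighted, normalised distance of each alternative from the ideal values.
D = w * (f_best - F) / (f_best - f_worst)
S = D.sum(axis=1)                        # group utility (majority rule)
R = D.max(axis=1)                        # individual regret (worst criterion)

v = 0.5                                  # weight of the majority-rule strategy
Q = v * (S - S.min()) / (S.max() - S.min()) + \
    (1 - v) * (R - R.min()) / (R.max() - R.min())
ranking = np.argsort(Q)                  # lower Q = better compromise solution
print(ranking)
```

The index Q balances group utility S against individual regret R, which is what makes VIKOR a compromise-ranking method rather than a simple weighted sum.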

  13. Assessment of health-care waste disposal methods using a VIKOR-based fuzzy multi-criteria decision making method

    International Nuclear Information System (INIS)

    Liu, Hu-Chen; Wu, Jing; Li, Ping

    2013-01-01

    Highlights: • Propose a VIKOR-based fuzzy MCDM technique for evaluating HCW disposal methods. • Linguistic variables are used to assess the ratings and weights for the criteria. • The OWA operator is utilized to aggregate individual opinions of decision makers. • A case study is given to illustrate the procedure of the proposed framework. - Abstract: Nowadays, selection of the appropriate treatment method in health-care waste (HCW) management has become a challenging task for municipal authorities, especially in developing countries. Assessment of HCW disposal alternatives can be regarded as a complicated multi-criteria decision making (MCDM) problem which requires consideration of multiple alternative solutions and conflicting tangible and intangible criteria. The objective of this paper is to present a new MCDM technique based on fuzzy set theory and the VIKOR method for evaluating HCW disposal methods. Linguistic variables are used by decision makers to assess the ratings and weights for the established criteria. The ordered weighted averaging (OWA) operator is utilized to aggregate individual opinions of decision makers into a group assessment. The computational procedure of the proposed framework is illustrated through a case study in Shanghai, one of the largest cities of China. The HCW treatment alternatives considered in this study include “incineration”, “steam sterilization”, “microwave” and “landfill”. The results obtained using the proposed approach are analyzed in a comparative way.

  14. Development of an ICF-based eligibility procedure for education in Switzerland.

    Science.gov (United States)

    Hollenweger, Judith

    2011-05-31

    Starting in January 2011, Switzerland will implement a multidimensional, context-sensitive procedure to establish eligibility in education systems. This paper provides a brief overview of the different eligibility-related practices with a special focus on children with disabilities. The paper then outlines the philosophical and conceptual framework of the eligibility procedure based on the International Classification of Functioning, Disability and Health, and the UN Convention on the Rights of Persons with Disability. The different components and methodology applied to organise information in the process towards establishing eligibility are also presented. Finally, some observations are made regarding transparent and just applications of the eligibility procedure, and the implementation of this new eligibility procedure.

  15. Optimization of instrumental neutron activation analysis method by means of 2{sup k} experimental design technique aiming the validation of analytical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Robson; Moreira, Edson G., E-mail: rpetroni@ipen.br, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2{sup k} experimental design was applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to detector. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted in the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)

  16. Benefits of computer screen-based simulation in learning cardiac arrest procedures.

    Science.gov (United States)

    Bonnetain, Elodie; Boucheix, Jean-Michel; Hamet, Maël; Freysz, Marc

    2010-07-01

    What is the best way to train medical students early so that they acquire basic skills in cardiopulmonary resuscitation as effectively as possible? Studies have shown the benefits of high-fidelity patient simulators, but have also demonstrated their limits. New computer screen-based multimedia simulators have fewer constraints than high-fidelity patient simulators. In this area, as yet, there has been no research on the effectiveness of transfer of learning from a computer screen-based simulator to more realistic situations such as those encountered with high-fidelity patient simulators. We tested the benefits of learning cardiac arrest procedures using a multimedia computer screen-based simulator in 28 Year 2 medical students. Just before the end of the traditional resuscitation course, we compared two groups. An experiment group (EG) was first asked to learn to perform the appropriate procedures in a cardiac arrest scenario (CA1) in the computer screen-based learning environment and was then tested on a high-fidelity patient simulator in another cardiac arrest simulation (CA2). While the EG was learning to perform CA1 procedures in the computer screen-based learning environment, a control group (CG) actively continued to learn cardiac arrest procedures using practical exercises in a traditional class environment. Both groups were given the same amount of practice, exercises and trials. The CG was then also tested on the high-fidelity patient simulator for CA2, after which it was asked to perform CA1 using the computer screen-based simulator. Performances with both simulators were scored on a precise 23-point scale. On the test on a high-fidelity patient simulator, the EG trained with a multimedia computer screen-based simulator performed significantly better than the CG trained with traditional exercises and practice (16.21 versus 11.13 of 23 possible points, respectively; p<0.001). 
Computer screen-based simulation appears to be effective in preparing learners to

  17. An improved method for determination of technetium-99m half-life for the quality assurance procedures of radiopharmaceuticals

    International Nuclear Information System (INIS)

    Abd Jalil Abd Hamid; Juhari Mohd Yusof; Zakaria Ibrahim; Wan Mohd Ferdaus Wan Ishak; Mohamad Hafiz Ahmad

    2009-01-01

    An improved method for identity tests of technetium-99m in quality assurance procedures is presented. A computerized method based on least-squares fitting of the decay curve for half-life estimation of technetium-99m was tested; the least-squares method was employed as the decay curve fitting procedure in our software. A theoretical calculation of the half-life of technetium-99m was performed for comparison. Fig. 3 shows the decay curve fit of a sample over a one-second counting time interval. The R2 value of the curve suggests that the duration of the study was too short to obtain an acceptable value. A similar measurement for another data set was made over a longer period of time, and Table 1 shows a representative decay curve fit. The half-life was found to be 6.006 hours, with a discrepancy of -0.28% from the value taken from the literature. The value is in agreement with the literature for counting time intervals greater than 2 seconds. The results obtained by this method show that the use of the least-squares method for decay curve fitting is appropriate for routine identity tests. This confirms that the least-squares method applied in our decay curve fitting software is markedly improved and convenient for routine identity test purposes. (Author)
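
The decay-curve fitting step can be sketched in a few lines: taking logarithms makes exponential decay linear in time, so an ordinary least-squares line fit recovers the decay constant and hence the half-life. The counts below are synthetic, noise-free values generated from the 6.006 h half-life reported above.

```python
import numpy as np

# Synthetic Tc-99m count-rate data at hourly intervals,
# following A(t) = A0 * exp(-lambda * t).
t = np.arange(0.0, 12.0, 1.0)             # time in hours
half_life = 6.006                          # hours, value reported in the record
lam = np.log(2.0) / half_life
counts = 1.0e5 * np.exp(-lam * t)

# Least-squares decay-curve fit: ln(A) = ln(A0) - lambda * t is linear in t,
# so a first-degree polynomial fit of ln(counts) vs t recovers lambda.
slope, intercept = np.polyfit(t, np.log(counts), 1)
estimated_half_life = np.log(2.0) / -slope

print(round(estimated_half_life, 3))       # -> 6.006
```

With real, noisy counting data the same fit yields an uncertainty on the slope, which is what the R2 criterion in the abstract is guarding.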

  18. Random walk-based similarity measure method for patterns in complex object

    Directory of Open Access Journals (Sweden)

    Liu Shihu

    2017-04-01

    Full Text Available This paper discusses the similarity of patterns in complex objects. A complex object is composed both of the attribute information of patterns and of the relational information between patterns. Bearing in mind the specificity of complex objects, a random walk-based similarity measurement method for patterns is constructed. In this method, the reachability of any two patterns with respect to the relational information is fully studied, so that the similarity of patterns with respect to the relational information can be calculated. On this basis, an integrated similarity measurement method is proposed, and Algorithms 1 and 2 show the calculation procedure. One can see that this method makes full use of both the attribute information and the relational information. Finally, a synthetic example shows that the proposed similarity measurement method is valid.
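
A minimal numerical sketch of the idea follows; the adjacency matrix, attribute vectors, damping factor and combination weight are all invented for illustration, and this is not the paper's exact algorithm.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],              # relational information: adjacency
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.2],                # attribute information per pattern
              [0.9, 0.3],
              [0.4, 0.8],
              [0.1, 1.0]])

P = A / A.sum(axis=1, keepdims=True)     # row-stochastic transition matrix

# Truncated random-walk reachability: damped sum of k-step transition probs.
alpha, K = 0.5, 5
R = sum((alpha ** k) * np.linalg.matrix_power(P, k) for k in range(1, K + 1))
rel_sim = (R + R.T) / 2                  # symmetrise the reachability scores

# Attribute similarity via cosine of the attribute vectors.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
attr_sim = Xn @ Xn.T

# Integrated similarity: weighted combination of both information sources.
w = 0.5
S = w * attr_sim + (1 - w) * rel_sim
print(np.round(S, 3))
```

Patterns 0 and 1, which are both adjacent and attribute-similar, end up more similar than patterns 0 and 3, which share neither property.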

  19. Procedures and methods of benefit assessments for medicines in Germany.

    Science.gov (United States)

    Bekkering, Geertruida E; Kleijnen, Jos

    2008-11-01

    in Health Care (IQWiG) in Germany invites comments on their protocol and preliminary report by posting them on their website, and comments are made public, the individual comments are not evaluated openly, and therefore it remains uncertain whether or not they lead to changes in the reports. The participation of relevant parties in the assessment process as implemented by NICE guarantees a process that is transparent to all relevant parties.Transparency of the whole process is assured by clear reporting of procedures and criteria in all phases undertaken in the benefit assessment. In a scoping process, a draft scope is commented on first in writing and subsequently in the form of a scoping workshop. In this way, all relevant aspects can be heard and included in the final scope. The protocol is then developed, followed by evidence assessment. The methods used should be completely reported to show readers that the assessment has been performed with scientific rigour and that bias has been prevented where possible. All relevant parties should have the opportunity to comment on the draft protocol and the draft preliminary report. Each comment should be evaluated as to whether or not it will lead to changes, and both the comments and the evaluation should be made public to ensure transparency of this process. The same procedure should be used for the peer-review phase. Based on the final report of the evidence assessment, the institute forms recommendations and the FJC appraises the evidence.During the writing of the final report, a separation between the evidence assessment and the evidence-appraisal phase should be implemented. 
Ideally, this separation should be legally enforced to prevent any confusion about conflicts of interest. Such a process guarantees a feasible combination of the legal requirements for transparency and involvement of relevant parties with international standards of EBM to ensure that the benefit assessments of medicines in Germany are performed

  20. [Procedures and methods of benefit assessments for medicines in Germany].

    Science.gov (United States)

    Bekkering, G E; Kleijnen, J

    2008-12-01

    Quality and Efficiency in Health Care (IQWiG) in Germany invites comments on their protocol and preliminary report by posting them on their website, and comments are made public, the individual comments are not evaluated openly, and therefore it remains uncertain whether or not they lead to changes in the reports. The participation of relevant parties in the assessment process as implemented by NICE guarantees a process that is transparent to all relevant parties. Transparency of the whole process is assured by clear reporting of procedures and criteria in all phases undertaken in the benefit assessment. In a scoping process, a draft scope is commented on first in writing and subsequently in the form of a scoping workshop. In this way, all relevant aspects can be heard and included in the final scope. The protocol is then developed, followed by evidence assessment. The methods used should be completely reported to show readers that the assessment has been performed with scientific rigour and that bias has been prevented where possible. All relevant parties should have the opportunity to comment on the draft protocol and the draft preliminary report. Each comment should be evaluated as to whether or not it will lead to changes, and both the comments and the evaluation should be made public to ensure transparency of this process. The same procedure should be used for the peer-review phase. Based on the final report of the evidence assessment, the institute forms recommendations and the FJC appraises the evidence. During the writing of the final report, a separation between the evidence assessment and the evidence appraisal phase should be implemented. Ideally, this separation should be legally enforced to prevent any confusion about conflict of interests. 
Such a process guarantees a feasible combination of the legal requirements for transparency and involvement of relevant parties with international standards of EBM to ensure that the benefit assessments of medicines in

  1. A finite element based substructuring procedure for design analysis of large smart structural systems

    International Nuclear Information System (INIS)

    Ashwin, U; Raja, S; Dwarakanathan, D

    2009-01-01

    A substructuring based design analysis procedure is presented for large smart structural system using the Craig–Bampton method. The smart structural system is distinctively characterized as an active substructure, modelled as a design problem, and a passive substructure, idealized as an analysis problem. Furthermore, a novel thought has been applied by introducing the electro–elastic coupling into the reduction scheme to solve the global structural control problem in a local domain. As an illustration, a smart composite box beam with surface bonded actuators/sensors is considered, and results of the local to global control analysis are presented to show the potential use of the developed procedure. The present numerical scheme is useful for optimally designing the active substructures to study their locations, coupled structure–actuator interaction and provide a solution to the global design of large smart structural systems

  2. A permutation-based multiple testing method for time-course microarray experiments

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2009-10-01

    Full Text Available Abstract Background Time-course microarray experiments are widely used to study the temporal profiles of gene expression. Storey et al. (2005) developed a method for analyzing time-course microarray studies that can be applied to discovering genes whose expression trajectories change over time within a single biological group, or those that follow different time trajectories among multiple groups. They estimated the expression trajectories of each gene using natural cubic splines under the null (no time-course) and alternative (time-course) hypotheses, and used a goodness-of-fit test statistic to quantify the discrepancy. The null distribution of the statistic was approximated through a bootstrap method. Gene expression levels in microarray data are often intricately correlated. Accurate type I error control adjusting for multiple testing requires the joint null distribution of the test statistics for a large number of genes. For this purpose, permutation methods have been widely used because of their computational ease and intuitive interpretation. Results In this paper, we propose a permutation-based multiple testing procedure based on the test statistic used by Storey et al. (2005). We also propose an efficient computation algorithm. Extensive simulations are conducted to investigate the performance of the permutation-based multiple testing procedure. The application of the proposed method is illustrated using the Caenorhabditis elegans dauer developmental data. Conclusion Our method is computationally efficient and applicable both for identifying genes whose expression levels are time-dependent in a single biological group and for identifying genes for which the time-profile depends on the group in a multi-group setting.
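
The key property exploited by permutation-based multiple testing is that permuting sample labels jointly across all genes preserves their correlation structure. The sketch below is a generic maxT-style procedure with a simple mean-difference statistic, not the spline goodness-of-fit statistic of Storey et al.; all sizes and data are simulated.

```python
import numpy as np

rng = np.random.default_rng(42)
n_genes, n_per_group, n_perm = 50, 8, 500
group = np.array([0] * n_per_group + [1] * n_per_group)
data = rng.normal(size=(n_genes, 2 * n_per_group))
data[0, group == 1] += 4.0                  # one truly differential gene

def stat(d, labels):
    # absolute mean-difference statistic, one value per gene
    return np.abs(d[:, labels == 1].mean(axis=1) - d[:, labels == 0].mean(axis=1))

obs = stat(data, group)
max_null = np.empty(n_perm)
for b in range(n_perm):
    perm = rng.permutation(group)           # permute labels for all genes at once
    max_null[b] = stat(data, perm).max()    # max over genes controls the FWER

# Adjusted p-value per gene: fraction of permutations whose maximum
# statistic reaches the observed one (with the usual +1 correction).
p_adj = (1 + (max_null[None, :] >= obs[:, None]).sum(axis=1)) / (1 + n_perm)
print(p_adj[0])                             # small for the shifted gene
```

Because the maximum is taken over all genes within each permutation, the resulting adjusted p-values account for the joint null distribution rather than treating genes as independent.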

  3. Quantitative determination of pefloxacin mesylate by residual-base neutralisation method

    Directory of Open Access Journals (Sweden)

    HULIKALCHANDRA SHEKAR PRAMEELA

    2004-05-01

    Full Text Available This work describes two procedures based on residual base determination for the quantification of pefloxacin mesylate (PFM) in bulk drug and in pharmaceutical products. In the first method, involving titrimetry, the drug solution is treated with a measured excess of sodium hydroxide followed by back-titration of the residual base with hydrochloric acid using a phenol red-bromothymol blue mixed indicator. The second, spectrophotometric, method involves treatment of a fixed amount of a sodium hydroxide – phenol red mixture with varying amounts of the drug, and measurement of the decrease in the absorbance of the dye at 560 nm. In the titrimetric method, a reaction stoichiometry of 1:1 was found in the quantification range of 4–20 mg of drug. The spectrophotometric method allows the determination of PFM in the 5–40 mg ml⁻¹ range. The molar absorptivity is 5.91×10³ l mol⁻¹ cm⁻¹ and the Sandell sensitivity is 56.37 ng cm⁻². The methods were applied successfully to the determination of PFM in pharmaceutical preparations.
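
The titrimetric procedure reduces to a simple mass balance: the drug consumes part of a measured excess of NaOH, the residue is back-titrated with HCl, and the 1:1 drug:NaOH stoichiometry gives the drug amount by difference. The volumes, molarities and molar mass below are illustrative assumptions, not values from the paper.

```python
# Residual-base back-titration calculation (illustrative numbers).
M_PFM = 465.49          # approx. molar mass of pefloxacin mesylate, g/mol (assumed)

naoh_volume_ml = 10.0   # measured excess of NaOH added to the drug solution
naoh_molarity = 0.01    # mol/l
hcl_volume_ml = 6.0     # HCl consumed back-titrating the residual base
hcl_molarity = 0.01     # mol/l

mmol_naoh_total = naoh_volume_ml * naoh_molarity
mmol_naoh_residual = hcl_volume_ml * hcl_molarity
mmol_drug = mmol_naoh_total - mmol_naoh_residual   # 1:1 stoichiometry

mass_drug_mg = mmol_drug * M_PFM
print(round(mass_drug_mg, 2))   # mg of PFM in the titrated aliquot
```

With these assumed numbers the result is about 18.6 mg, inside the 4–20 mg quantification range quoted in the abstract.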

  4. Review: Janice M. Morse & Linda Niehaus (2009). Mixed Method Design: Principles and Procedures

    OpenAIRE

    Öhlen, Joakim

    2010-01-01

    Mixed method design, which involves the combination of methods, usually quantitative and qualitative, is increasingly used for the investigation of complex phenomena. This review discusses the book "Mixed Method Design: Principles and Procedures" by Janice M. MORSE and Linda NIEHAUS. A distinctive feature of their approach is the conception of a mixed methods design as consisting of a core and a supplemental component. In order to define these components they emphasize the overall conceptual dir...

  5. A variation of the housing unit method for estimating the age and gender distribution of small, rural areas: A case study of the local expert procedure

    International Nuclear Information System (INIS)

    Carlson, J.F.; Roe, L.K.; Williams, C.A.; Swanson, D.A.

    1993-01-01

    This paper describes the methodologies used in the development of a demographic data base established in support of the Yucca Mountain Site Characterization Project Radiological Monitoring Plan (RadMP). It also examines the suitability of a survey-based procedure for estimating population in small, rural areas. The procedure is a variation of the Housing Unit Method. It employs local experts enlisted to provide information about the demographic characteristics of households randomly selected from residential-unit sample frames developed from utility records. The procedure is nonintrusive and less costly than traditional survey data collection efforts. Because the procedure is based on random sampling, confidence intervals can be constructed around the population estimates produced by the technique. The results of a case study are provided in which the total population, and the age and gender distribution of the population, are estimated for three unincorporated communities in rural southern Nevada

  6. Deflection-based method for seismic response analysis of concrete walls: Benchmarking of CAMUS experiment

    International Nuclear Information System (INIS)

    Basu, Prabir C.; Roshan, A.D.

    2007-01-01

    A number of shake table tests were conducted on a scaled-down model of a concrete wall as part of the CAMUS experiment. The experiments were conducted between 1996 and 1998 in the CEA facilities in Saclay, France. Benchmarking of the CAMUS experiments was undertaken as part of the coordinated research program on 'Safety Significance of Near-Field Earthquakes' organised by the International Atomic Energy Agency (IAEA). The technique of the deflection-based method was adopted for the benchmarking exercise. The non-linear static procedure of the deflection-based method has two basic steps: pushover analysis, and determination of the target displacement or performance point. Pushover analysis is an analytical procedure to assess the capacity to withstand seismic loading effects that a structural system can offer, considering its redundancies and inelastic deformation. The outcome of a pushover analysis is the force-displacement (base shear versus top/roof displacement) curve of the structure. This is obtained by step-by-step non-linear static analysis of the structure with increasing values of load. The second step is to determine the target displacement, which is also known as the performance point. The target displacement is the likely maximum displacement of the structure due to a specified seismic input motion. Established procedures, FEMA-273 and ATC-40, are available to determine this maximum deflection. The responses of the CAMUS test specimen are determined by the deflection-based method, and the analytically calculated values compare well with the test results
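
The target-displacement step can be illustrated with the FEMA-273 coefficient method, delta_t = C0*C1*C2*C3*Sa*(Te/2π)², where Sa is the spectral acceleration at the effective period Te. The period, spectral acceleration and coefficient values below are assumed for illustration; in a real analysis they are derived from the pushover curve and the site spectrum.

```python
import math

g = 9.81            # gravitational acceleration, m/s^2
Te = 0.4            # effective fundamental period, s (assumed)
Sa = 0.8 * g        # spectral acceleration at Te, m/s^2 (assumed)
C0, C1, C2, C3 = 1.3, 1.0, 1.0, 1.0   # modification factors (assumed)

# Target roof displacement: modified spectral displacement of the
# equivalent single-degree-of-freedom oscillator at period Te.
delta_t = C0 * C1 * C2 * C3 * Sa * (Te / (2 * math.pi)) ** 2
print(round(delta_t * 1000, 1))       # target displacement in mm
```

The pushover curve enters through the coefficients (e.g. C0 maps SDOF to roof displacement, C1 accounts for inelastic response), which is how capacity and demand are combined at the performance point.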

  7. Comparison of Six DNA Extraction Procedures and the Application of Plastid DNA Enrichment Methods in Selected Non-photosynthetic Plants

    Directory of Open Access Journals (Sweden)

    Shin-Yi Shyu

    2013-12-01

    Full Text Available Genomic DNA was isolated using three commercial DNA extraction kits and three CTAB-based methods for two non-photosynthetic plants, Balanophora japonica and Mitrastemon kanehirai. The quality of the isolated DNA was evaluated and subjected to subsequent restriction enzyme digestion. All six procedures yielded DNA of sufficient quality for PCR, and the method described by Barnwell et al. (1998) performed well in isolating DNA from both species for restriction enzyme digestion. In addition, we succeeded in enriching the plastid DNA content by using methods that depend on a high-salt buffer to deplete nuclear material. The ‘high salt’ methods based on the protocol presented by Milligan (1989) were able to increase plastid DNA effectively and significantly reduce nuclear DNA from M. kanehirai. The plastid DNA enrichment protocols are inexpensive and not time-consuming, and may be applicable to other non-photosynthetic plants.

  8. Radiochemical procedures

    International Nuclear Information System (INIS)

    Lyon, W.S.

    1982-01-01

    Modern counting instrumentation has largely obviated the need for separation processes in radiochemical analysis, but problems in low-level radioactivity measurement, environmental-type analyses, and special situations have in recent years caused a renaissance in the need for separation techniques. Most of the radiochemical procedures, based on the classic work of the Manhattan Project chemists of the 1940's, were published in the National Nuclear Energy Series (NNES). Improvements such as new solvent extraction and ion exchange separations have been added to these methods over the years. Recently the Los Alamos group has reissued their collected Radiochemical Procedures, containing a short summary and review of basic inorganic chemistry - 'Chemistry of the Elements on the Basis of Electronic Configuration'. (A.L.)

  9. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    properties of the analysed data. The broad theoretical background of robust procedures was given as a very useful supplement to the classical methods, and a new tool, based on robust PCA, aiming at identifying Rayleigh and Raman scatters in excitation-emission (EEM) data was developed. The results show

  10. Symptom-based emergency operating procedures development for Ignalina NPP

    International Nuclear Information System (INIS)

    Kruglov, Y.

    1999-01-01

    In this paper and lecture are presented: (1) Introduction; (2) EOP project work stages and documentation; (3) Selection and justification of accident management strategy; (4) Content of EOP package; (5) Development of EOP package; (6) EOP package verification; (7) EOP package validation; (8) EOP training; (9) EOP implementation; (10) Conditions of application of the symptom-based emergency operating procedures package and its interconnection with event-based emergency operating procedures; (11) Rules of EOP application; EOP maintenance

  11. Template-based procedures for neural network interpretation.

    Science.gov (United States)

    Alexander, J A.; Mozer, M C.

    1999-04-01

    Although neural networks often achieve impressive learning and generalization performance, their internal workings are typically all but impossible to decipher. This characteristic of the networks, their opacity, is one of the disadvantages of connectionism compared to more traditional, rule-oriented approaches to artificial intelligence. Without a thorough understanding of the network behavior, confidence in a system's results is lowered, and the transfer of learned knowledge to other processing systems - including humans - is precluded. Methods that address the opacity problem by casting network weights in symbolic terms are commonly referred to as rule extraction techniques. This work describes a principled approach to symbolic rule extraction from standard multilayer feedforward networks based on the notion of weight templates, parameterized regions of weight space corresponding to specific symbolic expressions. With an appropriate choice of representation, we show how template parameters may be efficiently identified and instantiated to yield the optimal match to the actual weights of a unit. Depending on the requirements of the application domain, the approach can accommodate n-ary disjunctions and conjunctions with O(k) complexity, simple n-of-m expressions with O(k²) complexity, or more general classes of recursive n-of-m expressions with O(k^(L+2)) complexity, where k is the number of inputs to a unit and L is the recursion level of the expression class. Compared to other approaches in the literature, our method of rule extraction offers benefits in simplicity, computational performance, and overall flexibility. Simulation results on a variety of problems demonstrate the application of our procedures as well as the strengths and weaknesses of our general approach.
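
The template-matching idea can be sketched as follows: for each candidate symbolic template, the least-squares optimal scale is the projection of the unit's weight vector onto the template, and the template with the smallest residual is reported as the unit's interpretation. The weights and template set below are hypothetical, not from the paper.

```python
import numpy as np

w = np.array([2.1, 1.9, 2.2, 0.1])        # actual unit weights (hypothetical)

# Each template is a {-1, 0, +1} pattern over the unit's k inputs,
# standing in for a symbolic expression (names are illustrative).
templates = {
    "AND/OR over inputs 1-3": np.array([1.0, 1.0, 1.0, 0.0]),
    "x1 AND NOT x4":          np.array([1.0, 0.0, 0.0, -1.0]),
    "all inputs":             np.array([1.0, 1.0, 1.0, 1.0]),
}

def residual(w, t):
    s = (w @ t) / (t @ t)                  # least-squares optimal template scale
    return np.linalg.norm(w - s * t)       # distance from weights to scaled template

best = min(templates, key=lambda name: residual(w, templates[name]))
print(best)                                # -> AND/OR over inputs 1-3
```

Because input 4 carries almost no weight, the template ignoring it fits the unit far better than the one treating all inputs symmetrically.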

  12. Assessing distractors and teamwork during surgery: developing an event-based method for direct observation.

    Science.gov (United States)

    Seelandt, Julia C; Tschan, Franziska; Keller, Sandra; Beldi, Guido; Jenni, Nadja; Kurmann, Anita; Candinas, Daniel; Semmer, Norbert K

    2014-11-01

    To develop a behavioural observation method to simultaneously assess distractors and communication/teamwork during surgical procedures through direct, on-site observations; to establish the reliability of the method for long (>3 h) procedures. Observational categories for an event-based coding system were developed based on expert interviews, observations and a literature review. Using Cohen's κ and the intraclass correlation coefficient, interobserver agreement was assessed for 29 procedures. Agreement was calculated for the entire surgery, and for the 1st hour. In addition, interobserver agreement was assessed between two tired observers and between a tired and a non-tired observer after 3 h of surgery. The observational system has five codes for distractors (door openings, noise distractors, technical distractors, side conversations and interruptions), eight codes for communication/teamwork (case-relevant communication, teaching, leadership, problem solving, case-irrelevant communication, laughter, tension and communication with external visitors) and five contextual codes (incision, last stitch, personnel changes in the sterile team, location changes around the table and incidents). Based on 5-min intervals, Cohen's κ was good to excellent for distractors (0.74-0.98) and for communication/teamwork (0.70-1). Based on frequency counts, intraclass correlation coefficient was excellent for distractors (0.86-0.99) and good to excellent for communication/teamwork (0.45-0.99). After 3 h of surgery, Cohen's κ was 0.78-0.93 for distractors, and 0.79-1 for communication/teamwork. The observational method developed allows a single observer to simultaneously assess distractors and communication/teamwork. Even for long procedures, high interobserver agreement can be achieved. Data collected with this method allow for investigating separate or combined effects of distractions and communication/teamwork on surgical performance and patient outcomes.
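    Cohen's κ, the agreement statistic reported above, is straightforward to compute from two observers' parallel codings; a minimal self-contained version (the event labels below are made up for illustration):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), with p_o the observed agreement
    and p_e the chance agreement implied by each coder's marginal label rates."""
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two observers coding the same six events (hypothetical labels):
a = ["door", "noise", "door", "talk", "door", "noise"]
b = ["door", "noise", "door", "door", "door", "noise"]
print(round(cohens_kappa(a, b), 3))  # -> 0.7
```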

  13. SUBJECTIVE CURE RATES AFTER TVT PROCEDURE FOR TREATMENT OF FEMALE URINARY INCONTINENCE – A QUESTIONNAIRE BASED STUDY

    Directory of Open Access Journals (Sweden)

    Igor But

    2003-12-01

    Background. The aim of this study was to assess the subjective cure rate after the tension-free vaginal tape (TVT) procedure in patients with stress (SUI) and mixed (MUI) urinary incontinence. Methods. This is a questionnaire-based study done in 43 patients with SUI and 52 patients with MUI. In the assessment of the subjective cure rate the visual analogue scale and the symptom assessment index (SAI) were used. Data were analyzed using nonparametric statistics. Results. The subjective cure rate assessed 19.6 months after TVT amounted to 89.3%. Urinary incontinence after the TVT procedure was noted in 26 patients (27.4%), and the majority of these women (73.1%) were diagnosed with MUI. In patients with SUI and a postoperatively stable bladder a higher success rate was observed (96.7%). In 18.6% of patients with SUI, de novo overactive bladder symptoms occurred. These patients reported a significantly (p = 0.027) lower cure rate (81.9%) after the TVT procedure. In patients with MUI, the cure rate after TVT amounted to 85.6%. The subjective cure rate was lower (79.4%) in case of persistent overactive bladder symptoms. However, it was significantly higher (97.5%) in case of a postoperatively stable bladder (p = 0.016). In the group of MUI patients, the symptoms of overactive bladder disease resolved spontaneously in 17 patients (32.7%) postoperatively. The patients were satisfied with TVT, and 92.6% would recommend this procedure to others. Conclusions. The TVT procedure is a very effective method of treatment for stress as well as mixed urinary incontinence. The success rate of the procedure is high; however, it is influenced by bladder activity.

  14. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    Science.gov (United States)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera is associated with big data processing and is often time consuming. In order to speed-up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique involves the use of interrogation window projections instead of its two-dimensional field of luminous intensity. This simplification allows acceleration of ZNCC computation up to 28.8 times compared to ZNCC calculated directly, depending on the size of interrogation window and region of interest. The results of three synthetic test cases, such as a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended to be used for initial velocity field calculation, with further correction using more accurate techniques.
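    The speed-up idea, correlating 1-D projections of the interrogation window instead of its full 2-D intensity field, can be sketched as follows (a toy illustration on assumed data, not the authors' implementation):

```python
def projections(window):
    """Row and column sums of a 2-D interrogation window,
    concatenated into a single 1-D signature."""
    rows = [sum(r) for r in window]
    cols = [sum(c) for c in zip(*window)]
    return rows + cols

def zncc_1d(x, y):
    """Zero-normalized cross-correlation of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# An identical window correlates better than a different texture:
w1 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
w2 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # identical window
w3 = [[9, 1, 1], [1, 9, 1], [1, 1, 8]]   # different texture
p1, p2, p3 = projections(w1), projections(w2), projections(w3)
print(zncc_1d(p1, p2) > zncc_1d(p1, p3))  # True: identical windows score highest
```

    Correlating two length-(M+N) vectors instead of an M x N field is what cuts the cost; the price is a loss of discriminating power that the paper controls by a later correction pass with more accurate techniques.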

  15. A comparison of sequential and information-based methods for determining the co-integration rank in heteroskedastic VAR MODELS

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Angelis, Luca De; Rahbek, Anders

    2015-01-01

    In this article, we investigate the behaviour of a number of methods for estimating the co-integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as the Akaike Information Criterion … The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms … -based method to over-estimate the co-integration rank in relatively small sample sizes.
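    Schematically, an information-criterion rank selection trades model fit from the r largest estimated eigenvalues against a penalty on the parameter count; with a log(T) penalty weight this gives a BIC-type rule. A stylized sketch with hypothetical eigenvalues (one common likelihood-based form, not necessarily the exact criterion compared in the article):

```python
import math

def ic_rank(eigvals, T, p, c_T):
    """Choose rank r minimizing  T * sum_{i<=r} log(1 - lam_i) + c_T * r * (2p - r).
    eigvals: estimated eigenvalues in decreasing order; BIC-type choice: c_T = log(T)."""
    def ic(r):
        fit = T * sum(math.log(1 - lam) for lam in eigvals[:r])
        return fit + c_T * r * (2 * p - r)
    return min(range(len(eigvals) + 1), key=ic)

# Hypothetical eigenvalues: two clearly non-zero roots, two near zero.
lams = [0.40, 0.25, 0.03, 0.01]
T, p = 200, 4
print(ic_rank(lams, T, p, math.log(T)))  # -> 2
```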

  16. PC based temporary shielding administrative procedure (TSAP)

    International Nuclear Information System (INIS)

    Olsen, D.E.; Pederson, G.E.; Hamby, P.N.

    1995-01-01

    A completely new Administrative Procedure for temporary shielding was developed for use at Commonwealth Edison's six nuclear stations. This procedure promotes the use of shielding and addresses industry requirements for the use and control of temporary shielding. The importance of an effective procedure has increased since more temporary shielding is being used as ALARA goals become more ambitious. To help implement the administrative procedure, a personal computer software program was written to incorporate the procedural requirements. This software combines the usability of a Windows graphical user interface with extensive help and database features. This combination of a comprehensive administrative procedure and user-friendly software promotes the effective use and management of temporary shielding while ensuring that industry requirements are met.

  17. PC based temporary shielding administrative procedure (TSAP)

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, D.E.; Pederson, G.E. [Sargent & Lundy, Chicago, IL (United States); Hamby, P.N. [Commonwealth Edison Co., Downers Grove, IL (United States)

    1995-03-01

    A completely new Administrative Procedure for temporary shielding was developed for use at Commonwealth Edison's six nuclear stations. This procedure promotes the use of shielding and addresses industry requirements for the use and control of temporary shielding. The importance of an effective procedure has increased since more temporary shielding is being used as ALARA goals become more ambitious. To help implement the administrative procedure, a personal computer software program was written to incorporate the procedural requirements. This software combines the usability of a Windows graphical user interface with extensive help and database features. This combination of a comprehensive administrative procedure and user-friendly software promotes the effective use and management of temporary shielding while ensuring that industry requirements are met.

  18. Clear-sky classification procedures and models using a world-wide data-base

    International Nuclear Information System (INIS)

    Younes, S.; Muneer, T.

    2007-01-01

    Clear-sky data need to be extracted from an all-sky measured solar-irradiance dataset, often by using algorithms that rely on other measured meteorological parameters. Current procedures for clear-sky data extraction have been examined and compared with each other to determine their reliability and location dependency. New clear-sky determination algorithms are proposed that are based on a combination of clearness index, diffuse ratio, cloud cover and Linke's turbidity limits. Various researchers have proposed clear-sky irradiance models that rely on synoptic parameters; four of these models (MRM, PRM, YRM and REST2) have been compared for six worldwide locations. Based on a previously developed comprehensive accuracy scoring method, the models MRM, REST2 and YRM were found to perform satisfactorily, in that decreasing order. The so-called Page radiation model (PRM) was found to underestimate solar radiation, even though local turbidity data were provided for its operation.
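    A combined threshold test of the kind described can be sketched as a simple filter; the threshold values below are placeholders, not those proposed by the authors:

```python
def clear_sky_mask(records, kt_min=0.65, diffuse_ratio_max=0.25, cloud_max=1):
    """Flag records as clear-sky when the clearness index is high, the diffuse
    ratio is low, and cloud cover is minimal. Thresholds are illustrative only;
    a turbidity limit would be a fourth condition of the same form."""
    return [
        kt >= kt_min and dr <= diffuse_ratio_max and cc <= cloud_max
        for kt, dr, cc in records
    ]

# (clearness index, diffuse ratio, cloud cover in oktas), hypothetical values:
obs = [(0.72, 0.15, 0), (0.40, 0.60, 6), (0.68, 0.22, 1)]
print(clear_sky_mask(obs))  # -> [True, False, True]
```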

  19. Measurement of unattached radon progeny based on the electrostatic deposition method

    International Nuclear Information System (INIS)

    Canoba, A.C.; Lopez, F.O.

    1999-01-01

    A method for the measurement of unattached radon progeny based on its electrostatic deposition onto wire screens, using only one pump, has been implemented and calibrated. The ability to use this method is important because of the special radiological significance of the unattached fraction of the short-lived radon progeny: the assessment of exposure can be related to dose with far greater accuracy than before. The advantages of this method are its simplicity, both in the tools needed for sample collection and in the measurement instruments used. The suitability of this method is further enhanced by the fact that it can be used with a simple measuring procedure such as the Kusnetz method. (author)

  20. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    Science.gov (United States)

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the time required to process an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, SG-SR processing can be completed within tens of milliseconds, enabling real-time operation in practical situations.

  1. Solution Procedure for Transport Modeling in Effluent Recharge Based on Operator-Splitting Techniques

    Directory of Open Access Journals (Sweden)

    Shutang Zhu

    2008-01-01

    The coupling of groundwater movement and reactive transport during groundwater recharge with wastewater leads to a complicated mathematical model, involving terms to describe convection-dispersion, adsorption/desorption and/or biodegradation, and so forth. It has been found very difficult to solve such a coupled model either analytically or numerically. The present study adopts operator-splitting techniques to decompose the coupled model into two submodels with different intrinsic characteristics. By applying an upwind finite difference scheme to the finite volume integral of the convection flux term, an implicit solution procedure is derived to solve the convection-dominant equation. The dispersion term is discretized in a standard central-difference scheme, while the dispersion-dominant equation is solved using either the preconditioned Jacobi conjugate gradient (PJCG) method or the Thomas method based on a locally one-dimensional scheme. The solution method proposed in this study is applied successfully to the demonstration project of groundwater recharge with secondary effluent at the Gaobeidian sewage treatment plant (STP).
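    A one-dimensional caricature of the splitting, an explicit upwind convection substep followed by an implicit central-difference dispersion substep solved with the Thomas method, might look like this (a sketch under assumed Dirichlet boundaries and v > 0, not the study's solver):

```python
def thomas(sub, main, sup, d):
    """Solve a tridiagonal system; sub-, main- and super-diagonal plus RHS d."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / main[0], d[0] / main[0]
    for i in range(1, n):
        m = main[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (d[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def split_step(u, v, D, dx, dt):
    """One operator-splitting step: explicit upwind convection (assumes v > 0),
    then implicit central-difference dispersion via the Thomas method."""
    cfl = v * dt / dx
    adv = [u[0]] + [u[i] - cfl * (u[i] - u[i - 1]) for i in range(1, len(u))]
    r = D * dt / dx ** 2
    n = len(u)
    sub, main, sup = [-r] * n, [1 + 2 * r] * n, [-r] * n
    main[0] = main[-1] = 1.0          # fixed (Dirichlet) end values
    sub[0] = sub[-1] = 0.0
    sup[0] = sup[-1] = 0.0
    return thomas(sub, main, sup, adv)

u = [0.0] * 20
u[5] = 1.0                            # initial concentration pulse
for _ in range(10):
    u = split_step(u, v=1.0, D=0.01, dx=1.0, dt=0.5)
print(round(sum(u), 3))               # mass roughly conserved away from boundaries
```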

  2. Learn, see, practice, prove, do, maintain: an evidence-based pedagogical framework for procedural skill training in medicine.

    Science.gov (United States)

    Sawyer, Taylor; White, Marjorie; Zaveri, Pavan; Chang, Todd; Ades, Anne; French, Heather; Anderson, JoDee; Auerbach, Marc; Johnston, Lindsay; Kessler, David

    2015-08-01

    Acquisition of competency in procedural skills is a fundamental goal of medical training. In this Perspective, the authors propose an evidence-based pedagogical framework for procedural skill training. The framework was developed based on a review of the literature using a critical synthesis approach and builds on earlier models of procedural skill training in medicine. The authors begin by describing the fundamentals of procedural skill development. Then, a six-step pedagogical framework for procedural skills training is presented: Learn, See, Practice, Prove, Do, and Maintain. In this framework, procedural skill training begins with the learner acquiring requisite cognitive knowledge through didactic education (Learn) and observation of the procedure (See). The learner then progresses to the stage of psychomotor skill acquisition and is allowed to deliberately practice the procedure on a simulator (Practice). Simulation-based mastery learning is employed to allow the trainee to prove competency prior to performing the procedure on a patient (Prove). Once competency is demonstrated on a simulator, the trainee is allowed to perform the procedure on patients with direct supervision, until he or she can be entrusted to perform the procedure independently (Do). Maintenance of the skill is ensured through continued clinical practice, supplemented by simulation-based training as needed (Maintain). Evidence in support of each component of the framework is presented. Implementation of the proposed framework presents a paradigm shift in procedural skill training. However, the authors believe that adoption of the framework will improve procedural skill training and patient safety.

  3. Do Basic Psychomotor Skills Transfer Between Different Image-based Procedures?

    NARCIS (Netherlands)

    Buzink, S.N.; Goossens, R.H.M.; Schoon, E.J.; De Ridder, H.; Jakimowicz, J.J.

    2010-01-01

    Background - Surgical techniques that draw from multiple types of image-based procedures (IBP) are increasing, such as Natural Orifice Transluminal Endoscopic Surgery, fusing laparoscopy and flexible endoscopy. However, little is known about the relation between psychomotor skills for performing

  4. Evaluation of a New Automated Processing System (TACAS™ Pro) for Liquid-Based Procedures.

    Science.gov (United States)

    Kuramoto, Hiroyuki; Sugimoto, Naoko; Iwami, Yoshiko; Kato, Chizuyo; Hori, Masuko; Iida, Manichi

    2015-01-01

    To evaluate a fully automated processing system (TACAS™ Pro) for liquid-based procedures (LBPs). Materials were 3,483 specimens, plus an additional 502 specimens, taken at the Kanagawa Health Service Association. Specimens obtained with a Cervex-Brush® were first smeared onto glass slides using one side of the brush and then processed with TACAS Pro. (1) The microscopy watching time per normal case was 3.65 ± 0.85 min in the conventional procedure versus 1.95 ± 0.60 min in the LBP; the latter reduced the workload to 53%. (2) The handling time of TACAS Pro was 2 h and 25.8 min per day; offset against the laboratory workload, this revealed a work saving of 63.8%. (3) Unsatisfactory rates were 0% in the conventional procedure versus initially 1.88% in the LBP; the latter rate decreased to 0.5% after system improvement. (4) Specimens which may disturb microscopy analysis were found in 1.06%, including 3 cases of possible carry-over of cells to the following slides. An additional study with the revised system confirmed no carry-over. (5) Incidences of abnormal cytology were consistent between the two methods. The revised automated processing system TACAS Pro is a feasible and useful LBP and reduces the workload of cytology laboratories. © 2015 S. Karger AG, Basel.

  5. A web-based procedure for liver segmentation in CT images

    Science.gov (United States)

    Yuan, Rong; Luo, Ming; Wang, Luyao; Xie, Qingguo

    2015-03-01

    Liver segmentation in CT images has been acknowledged as a basic and indispensable part in systems of computer aided liver surgery for operation design and risk evaluation. In this paper, we will introduce and implement a web-based procedure for liver segmentation to help radiologists and surgeons get an accurate result efficiently and expediently. Several clinical datasets are used to evaluate the accessibility and the accuracy. This procedure seems a promising approach for extraction of liver volumetry of various shapes. Moreover, it is possible for user to access the segmentation wherever the Internet is available without any specific machine.

  6. Procedures and practices for abnormal occurrences and emergencies

    International Nuclear Information System (INIS)

    Blaesig, H.

    1986-01-01

    This lecture presents the concept of the Emergency Operating Procedures (EOPs) of German power plants. As the procedures depend on the technique of the plant, the level of automation and the types of information are described first. After this, the method for diagnosing a transient or accident following entry into an emergency procedure is explained. An overview of the design basis accidents and the aim of the actions in the procedures is given, based on the existing rules and regulations. Finally, the theoretical principles are explained using the corresponding procedures and examples of two German PWRs. (orig.)

  7. A procedure for safety assessment of components with cracks - Handbook

    International Nuclear Information System (INIS)

    Andersson, P.; Bergman, M.; Brickstad, B.; Dahlberg, L.; Nilsson, F.; Sattari-Far, I.

    1996-01-01

    In this handbook a procedure is described which can be used both for assessment of detected cracks or crack-like defects and for defect tolerance analysis. The procedure can be used to calculate possible crack growth due to fatigue or stress corrosion and to calculate the reserve margin for failure due to fracture and plastic collapse. For ductile materials, the procedure gives the reserve margin for initiation of stable crack growth. Thus, an extra reserve margin, of unknown size, exists for failure in components made of ductile materials. The procedure was developed for operative use with the following objectives in mind: The procedure should be able to handle both linear and non-linear problems without any a priori division; The procedure shall ensure uniqueness of the safety assessment; The procedure should be well defined and easy to use; The conservatism of the procedure should be well validated; The handbook that documents the procedure should be so complete that for most assessments access to any other fracture mechanics literature should not be necessary. The method utilized is based on the R6 method developed at Nuclear Electric plc. This method can in principle be used for all metallic materials; it is, however, extensively verified only for steel alloys. The method is not intended for use at temperatures where creep deformation is important. The first edition of the handbook was released in 1990 and the second in 1991. This third edition has been extensively revised. A Windows-based program (SACC) has been developed which can perform the assessments described in the book, including calculation of crack growth due to stress corrosion and fatigue. 52 refs., 27 figs., 35 tabs

  8. Refinement procedure for the image alignment in high-resolution electron tomography

    International Nuclear Information System (INIS)

    Houben, L.; Bar Sadan, M.

    2011-01-01

    High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is the case in particular for ultra-high resolution, where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A tomographic-reconstruction based and marker-free method is proposed, which uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis it provides the required correlation over a large tilt angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive with the reference marker-based procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data. Highlights: alignment procedure for electron tomography based on iterative tomogram contrast optimisation; marker-free, independent of object, little user interaction; accuracy competitive with fiducial marker methods and suited for high-resolution tomography.

  9. 18 CFR 284.502 - Procedures for applying for market-based rates.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Procedures for applying for market-based rates. 284.502 Section 284.502 Conservation of Power and Water Resources FEDERAL... POLICY ACT OF 1978 AND RELATED AUTHORITIES Applications for Market-Based Rates for Storage § 284.502...

  10. Activities identification for activity-based cost/management applications of the diagnostics outpatient procedures.

    Science.gov (United States)

    Alrashdan, Abdalla; Momani, Amer; Ababneh, Tamador

    2012-01-01

    One of the most challenging problems facing healthcare providers is to determine the actual cost of their procedures, which is important for internal accounting and price justification to insurers. The objective of this paper is to find suitable categories to identify the diagnostic outpatient medical procedures and translate them from functional orientation to process orientation. A hierarchical task tree is developed based on a classification schema of procedural activities. Each procedure is seen as a process consisting of a number of activities. This provides a powerful foundation for activity-based cost/management implementation and enough information to distinguish value-added from non-value-added activities, which assists in process improvement and may eventually lead to cost reduction. Work measurement techniques are used to identify the standard time of each activity at the lowest level of the task tree. A real case study at a private hospital is presented to demonstrate the proposed methodology. © 2011 National Association for Healthcare Quality.
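    Once activities are arranged in a hierarchical task tree with standard times, the activity-based cost of a procedure is a simple roll-up; the activities, times, and rates below are hypothetical, not drawn from the case study:

```python
# Hypothetical activity tree for a diagnostic outpatient procedure:
# each leaf is a (standard time in minutes, resource rate per minute) pair.
tree = {
    "ultrasound exam": {
        "reception": {"check-in": (4, 0.5), "insurance verification": (6, 0.5)},
        "imaging": {"patient prep": (5, 0.8), "scanning": (20, 2.5)},
        "reporting": {"radiologist read": (10, 4.0), "report delivery": (3, 0.5)},
    }
}

def activity_cost(node):
    """Roll up activity-based cost: leaves are (minutes, rate) pairs,
    inner nodes are dicts of sub-activities."""
    if isinstance(node, tuple):
        minutes, rate = node
        return minutes * rate
    return sum(activity_cost(child) for child in node.values())

print(activity_cost(tree))  # -> 100.5
```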

  11. A Method Based on Dial's Algorithm for Multi-time Dynamic Traffic Assignment

    Directory of Open Access Journals (Sweden)

    Rongjie Kuang

    2014-03-01

    Because static traffic assignment reflects actual conditions poorly and dynamic traffic assignment may incur excessive computational cost, multi-time dynamic traffic assignment, which combines static and dynamic assignment, balances precision and cost effectively. A method based on Dial's logit algorithm is proposed in the article to solve the dynamic stochastic user equilibrium problem in dynamic traffic assignment. Before that, a fitting function that approximately reflects overloaded link conditions is proposed and used in the corresponding model. A numerical example illustrates the heuristic procedure of the method and compares the results with those obtained for the same example by an algorithm from the literature. Results show that the method based on Dial's algorithm is preferable.
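    The logit route-choice rule at the heart of Dial's algorithm assigns each route a share proportional to exp(-theta * cost); a minimal sketch (the costs and theta below are made up for illustration):

```python
import math

def logit_split(route_costs, theta=1.0):
    """Multinomial-logit route-choice probabilities, the building block of
    Dial's stochastic loading: P_i is proportional to exp(-theta * cost_i)."""
    weights = [math.exp(-theta * c) for c in route_costs]
    total = sum(weights)
    return [w / total for w in weights]

# Three routes; the cheaper route attracts a larger share, but no route gets all flow:
probs = logit_split([10.0, 11.0, 13.0], theta=1.0)
print([round(p, 3) for p in probs])
```

    Dial's full algorithm applies this split link by link over "efficient" paths rather than enumerating routes; the function above only shows the underlying choice model.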

  12. Student Opinions about the Seven-Step Procedure in Problem-Based Hospitality Management Education

    Science.gov (United States)

    Zwaal, Wichard; Otting, Hans

    2014-01-01

    This study investigates how hospitality management students appreciate the role and application of the seven-step procedure in problem-based learning. A survey was developed containing sections about personal characteristics, recall of the seven steps, overall report marks, and 30 statements about the seven-step procedure. The survey was…

  13. Review: Janice M. Morse & Linda Niehaus (2009). Mixed method design: principles and procedures

    OpenAIRE

    Öhlen, Joakim

    2010-01-01

    Mixed-method designs, in which quantitative and qualitative methods are combined, are increasingly popular for the study of complex phenomena. In this context, the present review discusses the book "Mixed Method Design: Principles and Procedures" by Janice M. Morse and Linda Niehaus, who attempt to identify core and supplemental components of such designs. To this end, they distinguish between projects that follow a more deductive or...

  14. Computer Based Procedures for Field Workers - FY16 Research Activities

    International Nuclear Information System (INIS)

    Oxstrand, Johanna; Bly, Aaron

    2016-01-01

    The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is in improving procedure use. A CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. The presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing time spent by the field worker to evaluate plant conditions and decisions related to the applicability of each step. This dynamic presentation of the procedure also minimizes the risk of conducting steps out of order and/or incorrectly assessed applicability of steps. This report provides a summary of the main research activities conducted in the Computer-Based Procedures for Field Workers effort since 2012. The main focus of the report is on the research activities conducted in fiscal year 2016. The activities discussed are the Nuclear Electronic Work Packages - Enterprise Requirements initiative, the development of a design guidance for CBPs (which compiles all insights gained through the years of CBP research), the facilitation of vendor studies at the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), a pilot study for how to enhance the plant design modification work process, the collection of feedback from a field evaluation study at Plant Vogtle, and path forward to

  15. Computer Based Procedures for Field Workers - FY16 Research Activities

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bly, Aaron [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is in improving procedure use. A CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. The presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing time spent by the field worker to evaluate plant conditions and decisions related to the applicability of each step. This dynamic presentation of the procedure also minimizes the risk of conducting steps out of order and/or incorrectly assessed applicability of steps. This report provides a summary of the main research activities conducted in the Computer-Based Procedures for Field Workers effort since 2012. The main focus of the report is on the research activities conducted in fiscal year 2016. The activities discussed are the Nuclear Electronic Work Packages – Enterprise Requirements initiative, the development of a design guidance for CBPs (which compiles all insights gained through the years of CBP research), the facilitation of vendor studies at the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), a pilot study for how to enhance the plant design modification work process, the collection of feedback from a field evaluation study at Plant Vogtle, and path forward to

  16. TU-FG-201-12: Designing a Risk-Based Quality Assurance Program for a Newly Implemented Y-90 Microspheres Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Vile, D; Zhang, L; Cuttino, L; Kim, S; Palta, J [Virginia Commonwealth University, Richmond, VA (United States)

    2016-06-15

    Purpose: To create a quality assurance program based upon a risk-based assessment of a newly implemented SirSpheres Y-90 procedure. Methods: A process map was created for a newly implemented SirSpheres procedure at a community hospital. The process map documented each step of this collaborative procedure, as well as the roles and responsibilities of each member. From the process map, different potential failure modes were determined, as well as any current controls in place. From this list, a full failure mode and effects analysis (FMEA) was performed by grading each failure mode’s likelihood of occurrence, likelihood of detection, and potential severity. These numbers were then multiplied to compute the risk priority number (RPN) for each potential failure mode. Failure modes were then ranked based on their RPN. Additional controls were then added, with failure modes corresponding to the highest RPNs taking priority. Results: A process map was created that succinctly outlined each step in the SirSpheres procedure in its current implementation. From this, 72 potential failure modes were identified and ranked according to their associated RPN. Quality assurance controls and safety barriers were then added, with failure modes associated with the highest risk being addressed first. Conclusion: A quality assurance program was created from a risk-based assessment of the SirSpheres process. Process mapping and FMEA were effective in identifying potential high-risk failure modes for this new procedure, which were prioritized for new quality assurance controls. TG 100 recommends the fault tree analysis methodology to design a comprehensive and effective QC/QM program, yet we found that simply introducing additional safety barriers to address high-RPN failure modes makes the whole process simpler and safer.
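
The RPN arithmetic and ranking described in the abstract can be sketched in a few lines; the failure modes and the 1-10 occurrence/detection/severity scores below are hypothetical placeholders, not values from the study:

```python
# Hypothetical failure modes; O = occurrence, D = detection difficulty,
# S = severity, each graded on a 1-10 scale as in a standard FMEA.
failure_modes = [
    {"mode": "Wrong activity drawn into syringe",    "O": 2, "D": 5, "S": 9},
    {"mode": "Catheter position not verified",       "O": 3, "D": 4, "S": 8},
    {"mode": "Dose calculation transcription error", "O": 4, "D": 3, "S": 7},
]

# Risk priority number: the product of the three gradings.
for fm in failure_modes:
    fm["RPN"] = fm["O"] * fm["D"] * fm["S"]

# Rank so that the highest-RPN modes receive new QA controls first.
ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
for fm in ranked:
    print(f'RPN={fm["RPN"]:3d}  {fm["mode"]}')
```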

  17. TU-FG-201-12: Designing a Risk-Based Quality Assurance Program for a Newly Implemented Y-90 Microspheres Procedure

    International Nuclear Information System (INIS)

    Vile, D; Zhang, L; Cuttino, L; Kim, S; Palta, J

    2016-01-01

    Purpose: To create a quality assurance program based upon a risk-based assessment of a newly implemented SirSpheres Y-90 procedure. Methods: A process map was created for a newly implemented SirSpheres procedure at a community hospital. The process map documented each step of this collaborative procedure, as well as the roles and responsibilities of each member. From the process map, different potential failure modes were determined, as well as any current controls in place. From this list, a full failure mode and effects analysis (FMEA) was performed by grading each failure mode’s likelihood of occurrence, likelihood of detection, and potential severity. These numbers were then multiplied to compute the risk priority number (RPN) for each potential failure mode. Failure modes were then ranked based on their RPN. Additional controls were then added, with failure modes corresponding to the highest RPNs taking priority. Results: A process map was created that succinctly outlined each step in the SirSpheres procedure in its current implementation. From this, 72 potential failure modes were identified and ranked according to their associated RPN. Quality assurance controls and safety barriers were then added, with failure modes associated with the highest risk being addressed first. Conclusion: A quality assurance program was created from a risk-based assessment of the SirSpheres process. Process mapping and FMEA were effective in identifying potential high-risk failure modes for this new procedure, which were prioritized for new quality assurance controls. TG 100 recommends the fault tree analysis methodology to design a comprehensive and effective QC/QM program, yet we found that simply introducing additional safety barriers to address high-RPN failure modes makes the whole process simpler and safer.

  18. Preliminary Investigation of Time Remaining Display on the Computer-based Emergency Operating Procedure

    Science.gov (United States)

    Suryono, T. J.; Gofuku, A.

    2018-02-01

    One of the important things in the mitigation of nuclear power plant accidents is time management. The accidents should be resolved as soon as possible in order to prevent core melting and the release of radioactive material to the environment. In this case, operators should follow the emergency operating procedure related to the accident, in step-by-step order and in allowable time. Nowadays, advanced main control rooms are equipped with computer-based procedures (CBPs), which make it easier for operators to do their tasks of monitoring and controlling the reactor. However, most CBPs do not include a time remaining display feature, which informs operators of the time available for them to execute procedure steps and warns them if they reach the time limit. Furthermore, such a feature would increase the awareness of operators about their current situation in the procedure. This paper investigates this issue. A simplified emergency operating procedure (EOP) for a steam generator tube rupture (SGTR) accident of a PWR plant is applied. In addition, the sequence of actions in each step of the procedure is modelled using multilevel flow modelling (MFM) and influence propagation rules. The prediction of the action time for each step is acquired based on similar accident cases and Support Vector Regression. The derived time will be processed and then displayed on a CBP user interface.
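
The time-remaining display logic the paper argues for can be sketched as follows; the step identifiers, predicted durations, and the overall time limit are invented for illustration (the paper derives its per-step predictions from similar accident cases via Support Vector Regression):

```python
# Hypothetical predicted durations (seconds) per EOP step; in the paper
# these come from regression over similar past accident cases.
predicted = {"E-0.1": 60, "E-0.2": 45, "E-0.3": 120}
order = ["E-0.1", "E-0.2", "E-0.3"]
time_limit = 200  # assumed allowable time for the whole sequence

def time_remaining(elapsed, current_step):
    """Slack = limit - elapsed - predicted time of the steps still to do."""
    idx = order.index(current_step)
    still_to_do = sum(predicted[s] for s in order[idx + 1:])
    return time_limit - elapsed - still_to_do

# The operator finished step E-0.1 after 70 s (10 s over its prediction):
slack = time_remaining(elapsed=70, current_step="E-0.1")
print(slack, "WARN" if slack <= 0 else "OK")
```

A negative slack means the remaining steps are no longer expected to fit inside the time limit, which is exactly the condition the display would warn on.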

  19. A calculation method for RF couplers design based on numerical simulation by microwave studio

    International Nuclear Information System (INIS)

    Wang Rong; Pei Yuanji; Jin Kai

    2006-01-01

    A numerical simulation method for coupler design is proposed. It is based on the matching procedure for the 2π/3 structure given by Dr. R.L. Kyhl. The Microwave Studio EigenMode Solver is used for the numerical simulation. The simulation of a coupler has been completed with this method and the simulation data are compared with experimental measurements. The results show that this numerical simulation method is feasible for coupler design. (authors)

  20. 40 CFR 63.9915 - What test methods and other procedures must I use to demonstrate initial compliance with dioxin...

    Science.gov (United States)

    2010-07-01

    ... must I use to demonstrate initial compliance with dioxin/furan emission limits? 63.9915 Section 63.9915....9915 What test methods and other procedures must I use to demonstrate initial compliance with dioxin... limit for dioxins/furans in Table 1 to this subpart, you must follow the test methods and procedures...

  1. Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com [School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001 (China)

    2016-06-15

    The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of an infrared radiometer. The proposed hybrid PSO-ASVR-based method combines PSO with Adaptive Processing and Support Vector Regression (SVR). The optimization technique involves setting the parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometers has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on statistical learning theory, is successfully used here to obtain the relationship between the radiation of a standard source and the response of an infrared radiometer. The main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in the kernel parameter setting of SVR. Numerical examples and applications to the calibration of an infrared radiometer are performed to verify the performance of the PSO-ASVR-based method compared to conventional data fitting methods.
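
A minimal particle swarm optimizer illustrates the parameter-setting idea; to stay self-contained, this sketch tunes two parameters of a toy squared-error objective rather than real SVR kernel parameters, and all constants (swarm size, inertia, acceleration) are ordinary textbook defaults, not values from the paper:

```python
import random

# Toy objective standing in for the ASVR fitting error as a function of
# two tunable parameters; its known optimum is at (3, -1).
def objective(p):
    a, b = p
    return (a - 3.0) ** 2 + (b + 1.0) ** 2

def pso(obj, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer with textbook parameters."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    best = [p[:] for p in pos]            # per-particle best positions
    gbest = min(best, key=obj)[:]         # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (best[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if obj(pos[i]) < obj(best[i]):
                best[i] = pos[i][:]
                if obj(best[i]) < obj(gbest):
                    gbest = best[i][:]
    return gbest

p = pso(objective)
print(p, objective(p))  # converges near (3, -1)
```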

  2. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    Science.gov (United States)

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT systems presents a challenge in determining whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject variation), intra-method (within-subject variation), and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed a statistically significant bias and a difference in inter-method agreement between CBCT and KVX in the Z-axis (both p < 0.01). Intra-method and overall agreement differences were noted as statistically significant for both the X- and Z-axes (all p < 0.01). Using pre-specified criteria, based on intra-method agreement, CBCT was deemed

  3. An evaluation of risk methods for prioritizing fire protection features: a procedure for fire barrier penetration seals

    International Nuclear Information System (INIS)

    Dey, M.K.

    2004-01-01

    This paper generally evaluates risk methods available for prioritizing fire protection features. Risk methods involving both the use of qualitative insights and quantitative results from a fire probabilistic risk analysis are reviewed. The applicability of these methods to develop a prioritized list of fire barrier penetration seals in a plant based on risk significance is presented as a procedure to illustrate the benefits of the methods. The paper concludes that current fire risk assessment methods can be confidently used to prioritize plant fire protection features, specifically fire barrier penetration seals. Simple prioritization schemes, using qualitative assessments and insights from fire PRA methodology, may be implemented without the need for quantitative results. More elaborate prioritization schemes that allow further refinements to the categorization process may be implemented using the quantitative results of the screening processes in good fire PRAs. The use of the quantitative results from good fire PRAs provides several benefits for risk prioritization of fire protection features at plants, mainly from the plant systems analyses conducted for a fire PRA.

  4. Implicit Procedural Learning in Fragile X and Down Syndrome

    Science.gov (United States)

    Bussy, G.; Charrin, E.; Brun, A.; Curie, A.; des Portes, V.

    2011-01-01

    Background: Procedural learning refers to rule-based motor skill learning and storage. It involves the cerebellum, striatum and motor areas of the frontal lobe network. Fragile X syndrome, which has been linked with anatomical abnormalities within the striatum, may result in implicit procedural learning deficit. Methods: To address this issue, a…

  5. A Bayesian nonrigid registration method to enhance intraoperative target definition in image-guided prostate procedures through uncertainty characterization

    International Nuclear Information System (INIS)

    Pursley, Jennifer; Risholm, Petter; Fedorov, Andriy; Tuncali, Kemal; Fennessy, Fiona M.; Wells, William M. III; Tempany, Clare M.; Cormack, Robert A.

    2012-01-01

    Purpose: This study introduces a probabilistic nonrigid registration method for use in image-guided prostate brachytherapy. Intraoperative imaging for prostate procedures, usually transrectal ultrasound (TRUS), is typically inferior to diagnostic-quality imaging of the pelvis such as endorectal magnetic resonance imaging (MRI). MR images contain superior detail of the prostate boundaries and provide substructure features not otherwise visible. Previous efforts to register diagnostic prostate images with the intraoperative coordinate system have been deterministic and did not offer a measure of the registration uncertainty. The authors developed a Bayesian registration method to estimate the posterior distribution on deformations and provide a case-specific measure of the associated registration uncertainty. Methods: The authors adapted a biomechanical-based probabilistic nonrigid method to register diagnostic to intraoperative images by aligning a physician's segmentations of the prostate in the two images. The posterior distribution was characterized with a Markov Chain Monte Carlo method; the maximum a posteriori deformation and the associated uncertainty were estimated from the collection of deformation samples drawn from the posterior distribution. The authors validated the registration method using a dataset created from ten patients with MRI-guided prostate biopsies who had both diagnostic and intraprocedural 3 Tesla MRI scans. The accuracy and precision of the estimated posterior distribution on deformations were evaluated from two predictive distance distributions: between the deformed central zone-peripheral zone (CZ-PZ) interface and the physician-labeled interface, and based on physician-defined landmarks. Geometric margins on the registration of the prostate's peripheral zone were determined from the posterior predictive distance to the CZ-PZ interface separately for the base, mid-gland, and apical regions of the prostate. 
Results: The authors observed
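
The posterior characterization step can be illustrated with a one-parameter toy version: a random-walk Metropolis sampler over a single deformation magnitude, where the likelihood, prior, and "landmark offsets" below are all invented stand-ins for the paper's biomechanical deformation model:

```python
import math, random

# Hypothetical observed landmark offsets (mm) after rigid alignment;
# the unknown is a single residual deformation magnitude d.
obs = [1.8, 2.1, 2.4]

def log_post(d):
    like = -0.5 * sum((o - d) ** 2 for o in obs)   # unit-variance Gaussian likelihood
    prior = -0.5 * (d / 2.0) ** 2                  # d ~ N(0, 2^2) prior (assumed)
    return like + prior

# Random-walk Metropolis: propose, accept with probability ratio of posteriors.
random.seed(0)
samples, d = [], 0.0
for _ in range(20000):
    prop = d + random.gauss(0, 0.5)
    if math.log(random.random()) < log_post(prop) - log_post(d):
        d = prop
    samples.append(d)
post = samples[5000:]                               # discard burn-in

mean = sum(post) / len(post)
std = (sum((x - mean) ** 2 for x in post) / len(post)) ** 0.5
print(round(mean, 2), round(std, 2))                # MAP-like estimate and uncertainty
```

The spread of the retained samples is the case-specific registration uncertainty the abstract refers to; the real method does this over a full deformation field rather than one scalar.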

  6. Study on highly efficient seismic data acquisition and processing methods based on sparsity constraint

    Science.gov (United States)

    Wang, H.; Chen, S.; Tao, C.; Qiu, L.

    2017-12-01

    High-density, high-fold and wide-azimuth seismic data acquisition methods are widely used to address increasingly sophisticated exploration targets, but acquisition periods grow longer and acquisition costs grow higher. We carry out a study of highly efficient seismic data acquisition and processing methods based on sparse representation theory (or compressed sensing theory), and achieve some innovative results. The theoretical principles of highly efficient acquisition and processing are studied. We first present sparse representation theory based on the wave equation. Then we study highly efficient seismic sampling methods and present an optimized piecewise-random sampling method based on sparsity prior information. Finally, a reconstruction strategy with the sparsity constraint is developed; a two-step recovery approach combining a sparsity-promoting method and the hyperbolic Radon transform is also put forward. The above three aspects constitute the enhanced theory of highly efficient seismic data acquisition. The specific implementation strategies of highly efficient acquisition and processing are studied according to the highly efficient acquisition theory expounded in paragraph 2. Firstly, we propose a highly efficient acquisition network designing method with the help of the optimized piecewise-random sampling method. Secondly, we propose two types of highly efficient seismic data acquisition methods based on (1) single sources and (2) blended (or simultaneous) sources. Thirdly, the reconstruction procedures corresponding to the above two types of highly efficient seismic data acquisition methods are proposed to obtain the seismic data on the regular acquisition network. The impact of blended shooting on the imaging result is discussed. In the end, we implement numerical tests based on the Marmousi model.
The achieved results show: (1) the theoretical framework of highly efficient seismic data acquisition and processing
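
The piecewise-random sampling idea (a jittered scheme: one trace position drawn at random inside each equal segment, which bounds the largest gap that plain random sampling can leave) can be sketched as follows; the grid size and sample count are arbitrary:

```python
import random

# Jittered (piecewise-random) sampling: split the receiver line into
# equal segments and draw one trace position uniformly inside each.
def piecewise_random(n_positions, n_samples, seed=0):
    rng = random.Random(seed)
    seg = n_positions / n_samples
    picks = []
    for k in range(n_samples):
        lo = int(k * seg)              # first index of segment k
        hi = int((k + 1) * seg) - 1    # last index of segment k
        picks.append(rng.randint(lo, hi))
    return picks

picks = piecewise_random(n_positions=120, n_samples=12)
print(picks)

# Gap control: with segment length 10, neighbouring picks can never be
# more than 19 positions apart, unlike unconstrained random sampling.
gaps = [b - a for a, b in zip(picks, picks[1:])]
print(max(gaps))
```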

  7. Projector-based virtual reality dome environment for procedural pain and anxiety in young children with burn injuries: a pilot study.

    Science.gov (United States)

    Khadra, Christelle; Ballard, Ariane; Déry, Johanne; Paquin, David; Fortin, Jean-Simon; Perreault, Isabelle; Labbe, David R; Hoffman, Hunter G; Bouchard, Stéphane; LeMay, Sylvie

    2018-01-01

    Virtual reality (VR) is a non-pharmacological method to distract from pain during painful procedures. However, it was never tested in young children with burn injuries undergoing wound care. We aimed to assess the feasibility and acceptability of the study process and the use of VR for procedural pain management. From June 2016 to January 2017, we recruited children from 2 months to 10 years of age with burn injuries requiring a hydrotherapy session in a pediatric university teaching hospital in Montreal. Each child received the projector-based VR intervention in addition to the standard pharmacological treatment. Data on intervention and study feasibility and acceptability in addition to measures on pain (Face, Legs, Activity, Cry, Consolability scale), baseline (Modified Smith Scale) and procedural (Procedure Behavior Check List) anxiety, comfort (OCCEB-BECCO [behavioral observational scale of comfort level for child burn victims]), and sedation (Ramsay Sedation Scale) were collected before, during, and after the procedure. Data analyses included descriptive and non-parametric inferential statistics. We recruited 15 children with a mean age of 2.2±2.1 years and a mean total body surface area of 5% (±4). Mean pain score during the procedure was low (2.9/10, ±3), as was the discomfort level (2.9/10, ±2.8). Most children were cooperative, oriented, and calm. Assessing anxiety was not feasible with our sample of participants. The prototype did not interfere with the procedure and was considered useful for procedural pain management by most health care professionals. The projector-based VR is a feasible and acceptable intervention for procedural pain management in young children with burn injuries. A larger trial with a control group is required to assess its efficacy.

  8. An integrated computer-based procedure for teamwork in digital nuclear power plants.

    Science.gov (United States)

    Gao, Qin; Yu, Wenzhu; Jiang, Xiang; Song, Fei; Pan, Jiajie; Li, Zhizhong

    2015-01-01

    Computer-based procedures (CBPs) are expected to improve operator performance in nuclear power plants (NPPs), but they may reduce the openness of interaction between team members and consequently harm teamwork. To support teamwork in the main control room of an NPP, this study proposed a team-level integrated CBP that presents team members' operation status and execution histories to one another. Through a laboratory experiment, we compared the new integrated design with the existing individual CBP design. Sixty participants, randomly divided into twenty teams of three people each, were assigned to the two conditions to perform simulated emergency operating procedures. The results showed that compared with the existing CBP design, the integrated CBP reduced the effort of team communication and improved team transparency. The results suggest that this novel design is effective in optimizing team processes, but its impact on behavioural outcomes may be moderated by other factors, such as task duration. The study proposed and evaluated a team-level integrated computer-based procedure, which presents team members' operation status and execution histories to one another. The experimental results show that compared with the traditional procedure design, the integrated design reduces the effort of team communication and improves team transparency.

  9. A task based design procedure and modelling approaches for industrial crystallization processes

    NARCIS (Netherlands)

    Menon, A.R.

    2006-01-01

    A synthesis-based approach to the design of crystallizers and industrial crystallization processes is introduced in this thesis. An ontology for a task-based design procedure has been developed which breaks the crystallization process into a subset of basic functions (physical tasks) which transform

  10. Activity based costing of diagnostic procedures at a nuclear medicine center of a tertiary care hospital.

    Science.gov (United States)

    Hada, Mahesh Singh; Chakravarty, Abhijit; Mukherjee, Partha

    2014-10-01

    Escalating health care expenses pose a new challenge to the health care environment: the need to become more cost-effective. There is an urgent need for more accurate data on the costs of health care procedures. Demographic changes, a changing morbidity profile, and the rising impact of noncommunicable diseases are emphasizing the role of nuclear medicine (NM) in the future health care environment. However, the impact of the emerging disease load and stagnant resource availability needs to be balanced by a strategic drive towards optimal utilization of available healthcare resources. The aim was to ascertain the cost of diagnostic procedures conducted at the NM Department of a tertiary health care facility by employing the activity based costing (ABC) method. A descriptive cross-sectional study was carried out over a period of 1 year. ABC methodology was utilized for ascertaining the unit cost of different diagnostic procedures, and these costs were compared with prevalent market rates to estimate the cost effectiveness of the department being studied. The cost per procedure varied from Rs. 869 (USD 14.48) for a thyroid scan to Rs. 11230 (USD 187.16) for a meta-iodo-benzyl-guanidine (MIBG) scan, the most cost-effective investigations being the stress thallium, technetium-99m myocardial perfusion imaging (MPI) and MIBG scans. The costs obtained from this study were observed to be competitive when compared to prevalent market rates. ABC methodology provides precise costing inputs and should be used for all future costing studies in NM Departments.
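
The ABC allocation itself is simple arithmetic: annual activity costs are apportioned to each procedure type by driver shares and divided by annual volume. The cost pools, shares, and volumes below are hypothetical, not the study's data:

```python
# Hypothetical annual costs per activity pool (in arbitrary currency units).
annual_costs = {"staff": 900000.0, "equipment": 600000.0, "consumables": 300000.0}

# Driver shares: fraction of each activity pool consumed by a procedure type.
drivers = {
    "thyroid_scan": {"staff": 0.10, "equipment": 0.05, "consumables": 0.05},
    "MPI":          {"staff": 0.30, "equipment": 0.40, "consumables": 0.35},
}
volumes = {"thyroid_scan": 1200, "MPI": 800}  # procedures per year

def unit_cost(proc):
    """Allocated annual cost of a procedure type divided by its volume."""
    total = sum(annual_costs[a] * share for a, share in drivers[proc].items())
    return total / volumes[proc]

for proc in drivers:
    print(proc, round(unit_cost(proc), 2))
```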

  11. Activity based costing of diagnostic procedures at a nuclear medicine center of a tertiary care hospital

    International Nuclear Information System (INIS)

    Hada, Mahesh Singh; Chakravarty, Abhijit; Mukherjee, Partha

    2014-01-01

    Escalating health care expenses pose a new challenge to the health care environment: the need to become more cost-effective. There is an urgent need for more accurate data on the costs of health care procedures. Demographic changes, a changing morbidity profile, and the rising impact of noncommunicable diseases are emphasizing the role of nuclear medicine (NM) in the future health care environment. However, the impact of the emerging disease load and stagnant resource availability needs to be balanced by a strategic drive towards optimal utilization of available healthcare resources. The aim was to ascertain the cost of diagnostic procedures conducted at the NM Department of a tertiary health care facility by employing the activity based costing (ABC) method. A descriptive cross-sectional study was carried out over a period of 1 year. ABC methodology was utilized for ascertaining the unit cost of different diagnostic procedures, and these costs were compared with prevalent market rates to estimate the cost effectiveness of the department being studied. The cost per procedure varied from Rs. 869 (USD 14.48) for a thyroid scan to Rs. 11230 (USD 187.16) for a meta-iodo-benzyl-guanidine (MIBG) scan, the most cost-effective investigations being the stress thallium, technetium-99m myocardial perfusion imaging (MPI) and MIBG scans. The costs obtained from this study were observed to be competitive when compared to prevalent market rates. ABC methodology provides precise costing inputs and should be used for all future costing studies in NM Departments.

  12. Introducing GAMER: A Fast and Accurate Method for Ray-tracing Galaxies Using Procedural Noise

    Science.gov (United States)

    Groeneboom, N. E.; Dahle, H.

    2014-03-01

    We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
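
The core ingredient, a procedural noise field modulating a smooth intensity profile, can be illustrated with minimal 2D value noise; GAMER's actual noise and profile models are more elaborate, and everything below (lattice size, fade curve, toy disk profile) is a simplified stand-in:

```python
import math, random

# Random values on a coarse lattice, smoothly interpolated between
# lattice points: the simplest form of procedural "value noise".
random.seed(42)
N = 8                                            # lattice resolution
lattice = [[random.random() for _ in range(N + 1)] for _ in range(N + 1)]

def smoothstep(t):
    return t * t * (3 - 2 * t)                   # C1-continuous fade curve

def value_noise(x, y):                           # x, y in [0, 1)
    gx, gy = x * N, y * N
    i, j = int(gx), int(gy)
    tx, ty = smoothstep(gx - i), smoothstep(gy - j)
    a = lattice[j][i] * (1 - tx) + lattice[j][i + 1] * tx
    b = lattice[j + 1][i] * (1 - tx) + lattice[j + 1][i + 1] * tx
    return a * (1 - ty) + b * ty

# Modulate a smooth radial intensity profile with the noise field, the
# way a galaxy component's empirical profile is perturbed by noise.
def intensity(x, y):
    r = math.hypot(x - 0.5, y - 0.5)
    profile = math.exp(-4 * r)                   # toy disk profile
    return profile * (0.7 + 0.6 * value_noise(x, y))

print(round(intensity(0.5, 0.5), 3))
```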

  13. Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise

    International Nuclear Information System (INIS)

    Groeneboom, N. E.; Dahle, H.

    2014-01-01

    We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.

  14. Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise

    Energy Technology Data Exchange (ETDEWEB)

    Groeneboom, N. E.; Dahle, H., E-mail: nicolaag@astro.uio.no [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo (Norway)

    2014-03-10

    We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.

  15. Note no. SD3-DEM-01: regulatory procedures relating to the dismantling of basic nuclear installations

    International Nuclear Information System (INIS)

    2003-02-01

    This note aims to define the regulatory procedures relating to the safety of the dismantling of basic nuclear installations, as defined by the modified decree of 11 December 1963. The first part describes the two main phases of a basic nuclear installation's life, the operating phase and the dismantling phase. The second part is devoted to the procedures. (A.L.B.)

  16. Calibration Methods for Reliability-Based Design Codes

    DEFF Research Database (Denmark)

    Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard

    2004-01-01

    The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...

  17. A Survey study on design procedure of Seismic Base Isolation ...

    African Journals Online (AJOL)

    Adding shear walls or braced frames can decrease the potential damage caused by earthquakes. We can also isolate structures from the ground using seismic base isolation systems, a flexible approach to decreasing the potential damage. In this research we present information on the design procedure of seismic ...

  18. A flocking based method for brain tractography.

    Science.gov (United States)

    Aranda, Ramon; Rivera, Mariano; Ramirez-Manzanares, Alonso

    2014-04-01

    We propose a new method to estimate axonal fiber pathways from Multiple Intra-Voxel Diffusion Orientations. Our method uses the multiple local orientation information for leading stochastic walks of particles. These stochastic particles are modeled with mass and thus are subject to gravitational and inertial forces. As a result, we obtain smooth, filtered and compact trajectory bundles. This gravitational interaction can be seen as a flocking behavior among particles that promotes better and more robust axon fiber estimations, because the particles use collective information to move. However, the stochastic walks may generate paths with low support (outliers), generally associated with incorrect brain connections. In order to eliminate the outlier pathways, we propose a filtering procedure based on principal component analysis and spectral clustering. The performance of the proposal is evaluated on Multiple Intra-Voxel Diffusion Orientations from two realistic numeric diffusion phantoms and a physical diffusion phantom. Additionally, we qualitatively demonstrate the performance on in vivo human brain data. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

    OpenAIRE

    Lu Si; Jie Yu; Shasha Li; Jun Ma; Lei Luo; Qingbo Wu; Yongqi Ma; Zhengji Liu

    2017-01-01

    The instance selection (IS) technique is used to reduce the data size and thereby improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rul...
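
For reference, the classic condensed nearest neighbor rule that FCNN accelerates can be written in a few lines (Hart's iterative form, on a made-up two-class toy set); the FCNN variant changes the selection order for speed but keeps the same idea of retaining only the instances that the current subset misclassifies:

```python
# Instances are (x, y, label) tuples; 1-NN classification by squared distance.
def nearest(subset, x):
    return min(subset, key=lambda s: (s[0] - x[0]) ** 2 + (s[1] - x[1]) ** 2)

def condense(data):
    """Hart's CNN rule: grow the subset with every misclassified instance."""
    subset = [data[0]]
    changed = True
    while changed:
        changed = False
        for point in data:
            if nearest(subset, point)[2] != point[2]:  # misclassified
                subset.append(point)
                changed = True
    return subset

# Two well-separated toy classes: most instances are redundant.
data = [(0.0, 0.0, "a"), (0.1, 0.2, "a"), (0.2, 0.1, "a"),
        (5.0, 5.0, "b"), (5.1, 4.9, "b"), (4.9, 5.2, "b")]
kept = condense(data)
print(len(kept), "of", len(data), "instances kept")
```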

  20. Cloud-based Electronic Test Procedures, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Test procedures are at the heart of any experimental process, especially those involving novel and complex hardware. Whether these procedures are for system...

  1. Space-partition method for the variance-based sensitivity analysis: Optimal partition scheme and comparative study

    International Nuclear Information System (INIS)

    Zhai, Qingqing; Yang, Jun; Zhao, Yu

    2014-01-01

Variance-based sensitivity analysis has been widely studied and has asserted itself among practitioners. Monte Carlo simulation methods are well developed for the calculation of variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works mentioned a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method for the estimation of variance-based sensitivity indices, and its convergence and other performance characteristics are investigated. Since the method heavily depends on the partition scheme, the influence of the partition scheme is discussed and an optimal partition scheme is proposed based on minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher-order sensitivity indices. The proposed space-partition method is compared with the more traditional method and test cases show that it outperforms the traditional one
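The scatter-plot partitioning idea described here can be sketched briefly: a first-order sensitivity index is estimated from a single sample bunch by splitting the input's range into equal-count bins and comparing the variance of the per-bin output means to the total output variance. The function and the toy model below are our own illustrative choices, not the paper's code:

```python
import numpy as np

def first_order_index(x, y, n_bins=20):
    """Estimate the first-order sensitivity index S_i of input x from a
    single sample (x, y): partition x into equal-count bins, so that the
    variance of the per-bin means of y approximates Var(E[Y|X_i])."""
    order = np.argsort(x)
    y_sorted = y[order]
    bins = np.array_split(y_sorted, n_bins)      # equal-count partition of x
    bin_means = np.array([b.mean() for b in bins])
    return bin_means.var() / y.var()

# Toy model: Y = X1 + 0.1*X2, so X1 should dominate.
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(size=(2, 100_000))
y = x1 + 0.1 * x2
print(first_order_index(x1, y) > first_order_index(x2, y))  # True
```

Note that both indices come from the same sample, illustrating why a single bunch of model runs suffices for all first-order indices.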

  2. MR-based water content estimation in cartilage: design and validation of a method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sophie; Ringgaard, Steffen

Purpose: Design and validation of an MR-based method that allows the calculation of the water content in cartilage tissue. Methods and Materials: Cartilage tissue T1-map-based water content MR sequences were used on a system stable at 37 degrees Celsius. The T1 map intensity signal was analyzed on 6...... cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis, a T1 intensity signal map software analyzer was used. Finally, the method was validated by measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained...... map based water content sequences can provide information that, after being analyzed using T1-map analysis software, can be interpreted as the water contained inside a cartilage tissue. The amount of water estimated using this method was similar to the one obtained with the dry-freeze procedure...

  3. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs; (ii) generation of samples from uncertain analysis inputs; (iii) propagation of sampled inputs through an analysis; (iv) presentation of uncertainty analysis results; and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition
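Two of the listed procedures — correlation analysis and rank transformations — can be illustrated with a short sketch. The model, variable names and sample sizes below are invented for illustration only:

```python
import numpy as np

def sensitivity_rankings(samples, y):
    """Rank uncertain inputs by sampled correlation with the output:
    Pearson correlation on raw values, and on rank-transformed values
    (Spearman), which is more robust for nonlinear but monotonic models."""
    def ranks(v):
        r = np.empty_like(v)
        r[np.argsort(v)] = np.arange(len(v))
        return r
    out = {}
    for name, x in samples.items():
        pcc = np.corrcoef(x, y)[0, 1]               # raw correlation
        rcc = np.corrcoef(ranks(x), ranks(y))[0, 1]  # rank correlation
        out[name] = (pcc, rcc)
    return out

rng = np.random.default_rng(1)
a, b = rng.uniform(size=(2, 5000))
y = np.exp(5 * a) + 0.1 * b   # monotone but strongly nonlinear in a
res = sensitivity_rankings({"a": a, "b": b}, y)
```

For this model the rank correlation of `a` is near 1 while its raw Pearson correlation is visibly lower, which is the usual argument for rank transformations with monotone nonlinear responses.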

  4. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.

  5. The Econometric Procedures of Specific Transaction Identification

    Directory of Open Access Journals (Sweden)

    Doszyń Mariusz

    2017-06-01

Full Text Available The paper presents econometric procedures for identifying specific transactions, in which atypical conditions or attributes may occur. These procedures are based on studentized and predictive residuals of accordingly specified econometric models. The dependent variable is the unit transaction price, and the explanatory variables are both the real properties' attributes and accordingly defined artificial binary variables. The utility of the proposed method has been verified by means of a real market database. The proposed procedures can be helpful during the property valuation process, making it possible to reject real properties that are specific (both from the point of view of the transaction conditions and the properties' attributes) and, consequently, to select an appropriate set of similar attributes that are essential for the valuation process.

  6. BPH Procedural Treatment: The Case for Value-Based Pay for Performance

    Directory of Open Access Journals (Sweden)

    Mark Stovsky

    2008-01-01

Full Text Available The concept of "pay for performance" (P4P applied to the practice of medicine has become a major foundation in current public and private payer reimbursement strategies for both institutional and individual physician providers. "Pay for performance" programs represent a substantial shift from traditional service-based reimbursement to a system of performance-based provider payment using financial incentives to drive improvements in the quality of care. P4P strategies currently embody rudimentary structure and process (as opposed to outcomes) metrics, which set relatively low performance thresholds. P4P strategies that align reimbursement allocation with "free market"-type shifts in cognitive and procedural care using evidence-based data and positive reinforcement are more likely to produce large-scale improvements in quality and cost efficiency with respect to clinical urologic care. This paper reviews current paradigms and, using BPH procedural therapy outcomes, cost, and reimbursement data, makes the case for a fundamental change in perspective to value-based pay for performance as a reimbursement system with the potential to align the interests of patients, physicians, and payers and to improve global clinical outcomes while preserving free choice of clinically efficacious treatments.

  7. Standard test methods for the strong-base resins used in the recovery of uranium

    International Nuclear Information System (INIS)

    Ford, M.A.; Lombaard, L.R.

    1986-01-01

    There are no detailed specifications for the strong-base ion-exchange resins used in continuous ion-exchange plants, and it was considered that a very useful purpose would be served by the publication of a series of standard laboratory tests on which such specifications could be based. This report describes test methods that are relevant to the ion-exchange recovery of uranium. They include tests of the physical properties of strong-base resins (relative density, particle-size distribution, and moisture content) and of their chemical properties (theoretical capacity, equilibrium capacity, kinetics of loading and elution). Included are several supporting procedures that are used in conjunction with these methods

  8. Model checking as an aid to procedure design

    International Nuclear Information System (INIS)

    Zhang, Wenhu

    2001-01-01

The OECD Halden Reactor Project has been actively working on computer-assisted operating procedures for many years. The objective of the research has been to provide computerised assistance for procedure design, verification and validation, implementation and maintenance. For verification purposes, the application of formal methods has been considered in several reports. The recent formal verification activity conducted at the Halden Project is based on applying model checking to the verification of procedures. This report presents verification approaches based on different model checking techniques and tools for the formalization and verification of operating procedures. Possible problems and relative merits of the different approaches are discussed. A case study of one of the approaches is presented to show the practical application of formal verification. Application of formal verification in the traditional procedure design process can reduce the human resources involved in reviews and simulations, and hence reduce the cost of verification and validation. A discussion of the integration of formal verification with the traditional procedure design process is given at the end of this report. (Author)
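As a minimal illustration of the model-checking idea applied to a procedure — not the Halden Project's tooling, and with an invented toy state graph — a safety property can be checked by exhaustive search of the procedure's reachable states:

```python
from collections import deque

def reachable_violation(transitions, start, bad):
    """Minimal explicit-state model-checking sketch: breadth-first search
    over a procedure's state graph decides whether any state in `bad`
    (a safety violation) is reachable from the initial state."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state in bad:
            return True
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy procedure graph: from "start" an operator may heat or vent;
# heating without venting leads to an unsafe state.
steps = {"start": ["heat", "vent"], "heat": ["overpressure"], "vent": []}
print(reachable_violation(steps, "start", {"overpressure"}))  # True
```

Real model checkers add temporal logic and state compression, but the core verification question — can any execution of the procedure reach a forbidden state — is this reachability check.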

  9. Comparison of Bristow procedure and Bankart arthroscopic method as the treatment of recurrent shoulder instability

    Directory of Open Access Journals (Sweden)

    Abolghasem Zarezade

    2014-01-01

Full Text Available Background: Anterior shoulder dislocation is the most common major joint dislocation. In patients with recurrent shoulder dislocation, surgical intervention is necessary. In this study, two methods of treatment, the Bankart arthroscopic method and the open Bristow procedure, were compared. Materials and Methods: This clinical trial was conducted in the orthopedic departments of the Alzahra and Kashani hospitals of Isfahan during 2008-2011. Patients with recurrent anterior shoulder dislocation who were candidates for surgical treatment were randomly divided into two groups, one treated by the Bankart arthroscopic technique and the other treated by the Bristow method. All the patients were assessed after surgery using the ROWE, CONSTANT, UCLA, and ASES criteria. Data were analyzed with SPSS software. Results: Six patients (16.22%) were in poor condition according to the ROWE score (score less than 75); of them, one had been treated with Bristow and five with Bankart (5.26% vs. 27.78%). Nine patients (24.32%) were in appropriate condition, comprising six from the Bristow group and three treated by the Bankart technique (31.58% vs. 16.67%). Finally, 22 patients (59.46%) showed great improvement on this score, comprising 12 from the Bristow and 10 from the Bankart groups (63.16% vs. 55.56%). According to Fisher's exact test, there were no significant differences between the two groups (P = 0.15). Conclusion: The two techniques did not differ significantly, although some parameters such as level of performance, pain intensity, use of analgesics, and range of internal rotation showed more improvement with the Bristow procedure. Therefore, if there is no contraindication for the Bristow procedure, it is preferred to use this method.

  10. Procedure for recovering embanked bases of oil wells in Carmopolis - state of Sergipe - Brazil

    International Nuclear Information System (INIS)

    Alves, Mara R.F.V.

    2000-01-01

The objective of the present work was to elaborate a procedure for dealing with areas degraded by petroleum mining, seeking the recovery of the embanked bases of razed oil wells in Carmopolis/SE. This procedure resulted from studies on soil recovery and several works performed by mining companies, adapted to local conditions. Once the objectives and the future soil use are defined, the procedure takes place in four phases: landscape recomposition; soil preparation for revegetation; revegetation; and management of the area. Application of the suggested procedure can prove its effectiveness in recovering previous soil uses (farming or local Atlantic Rain Forest). (author)

  11. Computer–Based Procedures for Nuclear Power Plant Field Workers: Preliminary Results from Two Evaluation Studies

    Energy Technology Data Exchange (ETDEWEB)

    Katya L Le Blanc; Johanna H Oxstrand

    2013-10-01

The Idaho National Laboratory and participants from the U.S. nuclear industry are collaborating on a research effort aimed at augmenting the existing guidance on computer-based procedure (CBP) design with specific guidance on how to design CBP user interfaces so that they support procedure execution in ways that exceed the capabilities of paper-based procedures (PBPs) without introducing new errors. Researchers are employing an iterative process in which the human factors issues and interface design principles related to CBP usage are systematically addressed and evaluated in realistic settings. This paper describes the process of developing a CBP prototype and the two studies conducted to evaluate the prototype. The results indicate that CBPs may improve performance by reducing errors, but may increase the time it takes to complete procedural tasks.

  12. Proposal of a segmentation procedure for skid resistance data

    International Nuclear Information System (INIS)

    Tejeda, S. V.; Tampier, Hernan de Solominihac; Navarro, T.E.

    2008-01-01

Skid resistance of pavements presents a high spatial variability along a road. This pavement characteristic is directly related to wet-weather accidents; therefore, it is important to identify and characterize homogeneous segments of skid resistance along a road in order to implement proper road safety management. Several data segmentation methods have been applied to other pavement characteristics (e.g. roughness). However, no application to skid resistance data was found during the literature review for this study. Typical segmentation methods are either too general or too specific to ensure a detailed segmentation of skid resistance data that can be used for managing pavement performance. The main objective of this paper is to propose a procedure for segmenting skid resistance data, based on existing data segmentation methods. The procedure needs to be efficient and to fulfill road management requirements. The proposed procedure uses the leverage method to identify outlier data, the CUSUM method to accomplish initial data segmentation, and a statistical method to group consecutive segments that are statistically similar. The statistical method applies Student's t-test for equality of means, along with analysis of variance and the Tukey test for multiple comparison of means. The proposed procedure was applied to a sample of skid resistance data measured with SCRIM (Sideway-force Coefficient Routine Investigation Machine) on a 4.2 km section of a Chilean road and was compared with conventional segmentation methods. Results showed that the proposed procedure is more efficient than the conventional segmentation procedures, achieving the minimum weighted sum of squared errors (SSEp) with all the identified segments statistically different. Due to its mathematical basis, the proposed procedure can be easily adapted and programmed for use in road safety management. (author)
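The CUSUM step of such a segmentation can be sketched minimally — a single change point only, with an illustrative threshold and synthetic data rather than anything from the paper:

```python
import numpy as np

def cusum_breakpoints(values, threshold):
    """Locate a candidate segment boundary in a measurement series with the
    CUSUM method: the cumulative sum of deviations from the overall mean
    peaks where the segment mean changes; a peak above the threshold marks
    a boundary."""
    s = np.cumsum(values - values.mean())
    breaks = []
    k = int(np.argmax(np.abs(s)))   # location of the largest |CUSUM| excursion
    if abs(s[k]) > threshold:
        breaks.append(k + 1)        # boundary index (start of the new segment)
    return breaks

# Toy series: skid resistance drops from ~0.60 to ~0.40 halfway.
data = np.r_[np.full(50, 0.60), np.full(50, 0.40)]
print(cusum_breakpoints(data, threshold=1.0))  # [50]
```

A full procedure would apply this recursively to each resulting segment and then run the statistical grouping tests the abstract describes.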

  13. A novel three-dimensional mesh deformation method based on sphere relaxation

    International Nuclear Information System (INIS)

    Zhou, Xuan; Li, Shuixiang

    2015-01-01

In our previous work (2013) [19], we developed a disk relaxation based mesh deformation method for two-dimensional mesh deformation. In this paper, the idea of the disk relaxation is extended to the sphere relaxation for three-dimensional meshes with large deformations. We develop a node-based pre-displacement procedure to apply initial movements to nodes according to their layer indices. Afterwards, the nodes are moved locally by the improved sphere relaxation algorithm to transfer boundary deformations and increase the mesh quality. A three-dimensional mesh smoothing method is also adopted to prevent the occurrence of negative element volumes and further improve the mesh quality. Numerical applications in three dimensions, including wing rotation, a bending beam and a morphing aircraft, are carried out. The results demonstrate that the sphere relaxation based approach generates the deformed mesh with high quality, especially regarding complex boundaries and large deformations

  14. A novel three-dimensional mesh deformation method based on sphere relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Xuan [Department of Mechanics & Engineering Science, College of Engineering, Peking University, Beijing, 100871 (China); Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China); Li, Shuixiang, E-mail: lsx@pku.edu.cn [Department of Mechanics & Engineering Science, College of Engineering, Peking University, Beijing, 100871 (China)

    2015-10-01

In our previous work (2013) [19], we developed a disk relaxation based mesh deformation method for two-dimensional mesh deformation. In this paper, the idea of the disk relaxation is extended to the sphere relaxation for three-dimensional meshes with large deformations. We develop a node-based pre-displacement procedure to apply initial movements to nodes according to their layer indices. Afterwards, the nodes are moved locally by the improved sphere relaxation algorithm to transfer boundary deformations and increase the mesh quality. A three-dimensional mesh smoothing method is also adopted to prevent the occurrence of negative element volumes and further improve the mesh quality. Numerical applications in three dimensions, including wing rotation, a bending beam and a morphing aircraft, are carried out. The results demonstrate that the sphere relaxation based approach generates the deformed mesh with high quality, especially regarding complex boundaries and large deformations.

  15. A Least Square-Based Self-Adaptive Localization Method for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Baoguo Yu

    2016-01-01

Full Text Available In wireless sensor network (WSN) localization methods based on the Received Signal Strength Indicator (RSSI), it is usually required to determine the parameters of the radio signal propagation model before estimating the distance between an anchor node and an unknown node from their communication RSSI value; a localization algorithm then estimates the location of the unknown node. However, this localization method, though high in localization accuracy, has weaknesses such as a complex working procedure and poor system versatility. To address these defects, a self-adaptive WSN localization method based on least squares is proposed, which uses the least-squares criterion to estimate the parameters of the radio signal propagation model, markedly reducing the amount of computation in the estimation process. The experimental results show that the proposed self-adaptive localization method achieves high processing efficiency while satisfying the high localization accuracy requirement. Conclusively, the proposed method is of definite practical value.
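The least-squares parameter estimation described here can be sketched as fitting the standard log-distance model RSSI = A - 10*n*log10(d) to observed (distance, RSSI) pairs; the function names and synthetic calibration data below are our own assumptions, not the paper's implementation:

```python
import numpy as np

def fit_path_loss(dists, rssi):
    """Least-squares fit of the log-distance model RSSI = A - 10*n*log10(d):
    returns (A, n), the reference power and path-loss exponent, so no manual
    pre-calibration of the propagation model is needed."""
    X = np.column_stack([np.ones_like(dists), -10.0 * np.log10(dists)])
    (A, n), *_ = np.linalg.lstsq(X, rssi, rcond=None)
    return A, n

def estimate_distance(rssi, A, n):
    """Invert the fitted model to estimate distance from an RSSI reading."""
    return 10 ** ((A - rssi) / (10 * n))

# Synthetic calibration data: A = -40 dBm, n = 2.5, light measurement noise.
rng = np.random.default_rng(2)
d = rng.uniform(1, 30, size=200)
r = -40 - 25 * np.log10(d) + rng.normal(0, 0.5, size=200)
A, n = fit_path_loss(d, r)
```

Once (A, n) are recovered from in-situ RSSI exchanges, distances to anchors follow from `estimate_distance`, and any standard localization algorithm (e.g. trilateration) can be applied.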

  16. Quality-based procedures in Ontario: exploring health-care leaders' responses.

    Science.gov (United States)

    Baxter, Pamela; Cleghorn, Laura; Alvarado, Kim; Cummings, Greta; Kennedy, Deborah; McKey, Colleen; Pfaff, Kathy

    2016-01-01

To examine health-care leaders' initial responses to the implementation of orthopaedic quality-based procedures (QBPs) in hospitals across Ontario, Canada. In 2012, Ontario, Canada shifted 91 hospitals to a patient-based funding (PBF) approach. This approach funds health-care organisations based on the number of patients treated with select procedures known as QBPs. An exploratory descriptive design was employed to better understand health-care leaders' early implementation experiences. Seventy organisational leaders from 20 hospitals participated in six focus groups and four interviews to discuss their initial responses to the implementation of two QBPs (primary unilateral hip replacement and primary unilateral knee replacement). The qualitative data underwent content analysis. Three major themes emerged: (1) responding to change, (2) leading the change and (3) managing the change. Within each of these themes, barriers and benefits were identified. Leaders are accepting of PBF and QBPs. However, challenges exist that require further exploration, including the need for a strong infrastructure, accurate and timely clinical and financial data, and policies to prevent unintended consequences. Implementing QBPs requires careful planning, adequate and appropriate resources, vertical and horizontal communication strategies, and policies to ensure that unintended consequences are avoided and positive outcomes achieved. © 2014 John Wiley & Sons Ltd.

  17. Application of revised procedure on determining large excess reactivity of operating reactor. Fuel addition method

    International Nuclear Information System (INIS)

    Nagao, Yoshiharu

    2002-01-01

The fuel addition method and the neutron absorption substitution method have been used for the determination of the large excess multiplication factor of large-sized reactors. It has been pointed out, however, that all the experimental methods are possibly subject to a substantially large systematic error, of up to 20%, when the value of the excess multiplication factor exceeds about 15%Δk. A basic idea for a revised procedure was therefore proposed to cope with this problem: it converts the increase of the multiplication factor in an actual core to that in a virtual core by calculation, because the value is in principle defined not for the former but for the latter core. This paper proves that the revised procedure is applicable to large-sized research and test reactors through theoretical analyses of the measurements undertaken at the JMTRC and JMTR cores. The values of the excess multiplication factor are accurately determined utilizing whole-core calculations with the Monte Carlo code MCNP4A. (author)

  18. Analysis of the IEA-R1 reactor start-up procedures - an application of the HazOp method

    International Nuclear Information System (INIS)

    Sauer, Maria Eugenia Lago Jacques

    2000-01-01

An analysis of the technological catastrophic events that took place in this century shows that human failure and the vulnerability of risk management programs are the main causes of accidents. For example, in plants and complex systems where the man-machine interface is close, the frequency of failures tends to be higher. Thus, comprehensive knowledge of how a specific process can be potentially hazardous is a sine qua non condition for operator training, as well as for defining and implementing more efficient plans for loss prevention and risk management. A study of the IEA-R1 research reactor start-up procedures was carried out, based upon the Hazard and Operability Study (HazOp) methodology. The analytical, qualitative and multidisciplinary HazOp approach provided the means for a comprehensive review of the reactor start-up procedures, contributing to an improved understanding of the potential hazards associated with deviations in performing this routine. The present work includes a historical summary and a detailed description of the HazOp technique, as well as case studies in the process industries and the use of expert systems in the application of the method. An analysis of 53 activities of the IEA-R1 reactor start-up procedures was made, resulting in 25 recommendations for changes covering aspects of the design, operation and safety of the reactor. Eleven recommendations have been implemented. (author)

  19. THE GHOST IN THE MACHINE? THE VALUE OF EXPERT ADVICE IN THE PRODUCTION OF EVIDENCE-BASED GUIDANCE: A MIXED METHODS STUDY OF THE NICE INTERVENTIONAL PROCEDURES PROGRAMME.

    Science.gov (United States)

    Oyebode, Oyinlola; Patrick, Hannah; Walker, Alexander; Campbell, Bruce; Powell, John

    2016-01-01

The aim of this study was to determine the aspects of expert advice that decision makers find most useful in the development of evidence-based guidance, and to identify the characteristics of experts providing the most useful advice. First, semi-structured interviews were conducted with seventeen members of the Interventional Procedures Advisory Committee of the UK's National Institute for Health and Care Excellence. The interviews examined the usefulness of expert advice during guidance development, and transcripts were analyzed inductively to identify themes. Second, data were extracted from 211 experts' questionnaires for forty-one consecutive procedures. Usefulness of advice was scored using an index developed through the qualitative work, and associations between the usefulness score and the characteristics of the expert advisor were investigated using univariate and multivariate analyses. Expert opinion was seen as a valued complement to empirical evidence, providing context and tacit knowledge unavailable in the published literature but helpful for interpreting it. Interviewees also valued advice on the training and experience required to perform a procedure, on patient selection criteria, and on the place of a procedure within a clinical management pathway. The potential for bias in expert opinion was widely acknowledged, and skepticism was expressed regarding the anecdotal nature of advice on safety or efficacy outcomes. Quantitative analysis demonstrated that the most useful advice was given by clinical experts with direct personal experience of the procedure, particularly research experience. Evidence-based guidance production is often characterized as a rational, pipeline process. This ignores the valuable role that expert opinion plays in guidance development, complementing and supporting the interpretation of empirical data.

  20. PROCEDURES OF TRANSLATION ON MALIN KUNDANG’S FOLKTALE FROM WEST SUMATRA

    Directory of Open Access Journals (Sweden)

    Ni Made Arnita Yanti

    2014-05-01

Full Text Available The objective of this study was to examine the procedures of translation used in the Malin Kundang folktale translated into English. The data were collected in several steps: the first step was note taking, the next was identifying the data, and the last was arranging the data and classifying it in a table. The data were analyzed using the documentation method, and examples were randomly selected for the discussion section. The results revealed that only five of the seven translation procedures were found in the text: Literal Translation, Transposition, Modulation, Equivalence, and Adaptation.

  1. Aircraft Route Recovery Based on An Improved GRASP Method

    Directory of Open Access Journals (Sweden)

    Yang He

    2017-01-01

Full Text Available Aircraft maintenance and temporary airport closures are common factors that disrupt normal flight schedules. Aircraft route recovery aims to recover the original schedule by strategies including flight swaps and cancellations, which is an NP-hard problem. This paper proposes an improved heuristic procedure based on the Greedy Randomized Adaptive Search Procedure (GRASP) to solve this problem. The effectiveness and high global optimization capability of the heuristic are illustrated through experiments based on large-scale problems. Compared to the original procedure, it is shown that the improved procedure can find feasible recovered flight schedules with lower cost in a short time.
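A generic GRASP loop — greedy randomized construction from a restricted candidate list, followed by local search, repeated with the best solution kept — can be sketched on a toy routing instance. This illustrates the metaheuristic itself, not the paper's recovery model; all names and parameters are ours:

```python
import itertools
import random

def grasp(points, iters=50, rcl_size=3, seed=0):
    """GRASP sketch on a toy TSP: each iteration builds a tour with a greedy
    randomized rule (next city chosen at random among the `rcl_size` nearest
    unvisited ones), improves it with 2-opt local search, and the best tour
    over all iterations is returned."""
    rng = random.Random(seed)
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    def length(t):
        return sum(dist(points[t[i]], points[t[(i + 1) % len(t)]])
                   for i in range(len(t)))
    def construct():
        todo, tour = set(range(1, len(points))), [0]
        while todo:
            cand = sorted(todo, key=lambda j: dist(points[tour[-1]], points[j]))
            tour.append(rng.choice(cand[:rcl_size]))  # restricted candidate list
            todo.remove(tour[-1])
        return tour
    def two_opt(t):
        improved = True
        while improved:
            improved = False
            for i, j in itertools.combinations(range(1, len(t) + 1), 2):
                cand = t[:i] + t[i:j][::-1] + t[j:]   # reverse one segment
                if length(cand) < length(t):
                    t, improved = cand, True
        return t
    best = min((two_opt(construct()) for _ in range(iters)), key=length)
    return best, length(best)

# Unit square: the optimal tour is the perimeter, length 4.0.
square = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
tour, best_len = grasp(square, iters=5)
```

The randomized construction diversifies the starting tours while local search intensifies each one, which is the balance GRASP-based recovery heuristics exploit.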

  2. Development of the system based code. v. 5. Method of margin exchange. pt. 2. Determination of quality assurance index based on a 'Vector Method'

    International Nuclear Information System (INIS)

    Asayama, Tai

    2003-03-01

For the commercialization of fast breeder reactors, the 'System Based Code', a completely new scheme for a code on structural integrity, is being developed. One of the distinguishing features of the System Based Code is that it is able to determine a reasonable total margin on a structure or system by allowing exchanges of margins between various technical items. Detailed estimation of the failure probability of a given combination of technical items and its comparison with a target value is one way to achieve this. However, simpler and easier methods that allow margin exchange without detailed calculation of failure probability are desirable in design. The authors have previously developed a simplified method, the 'design factor method', from this viewpoint. This report describes a newly developed 'Vector Method'. The following points are reported: 1) The Vector Method allows margin exchange evaluation on an 'equi-quality assurance plane' using vector calculation. Evaluation is easy and sufficient accuracy is achieved. The equi-quality assurance plane is obtained by a projection of an 'equi-failure probability surface' in an n-dimensional space, which is calculated beforehand for typical combinations of design variables. 2) The Vector Method is considered to give the 'Quality Assurance Index Method' a probabilistic interpretation. 3) An algebraic method is proposed for the calculation of failure probabilities, which is necessary to obtain an equi-failure probability surface. This method calculates failure probabilities without using numerical methods such as Monte Carlo simulation or numerical integration. Under limited conditions, this method is quite effective compared to numerical methods. 4) An illustration of the procedure of margin exchange evaluation is given. It may be possible to use this method to optimize ISI plans, even if it is not fully implemented in the System Based Code. (author)
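The report's algebraic method itself is not reproduced in the abstract; as a generic illustration of computing a failure probability algebraically rather than by Monte Carlo, consider the classic stress-strength case with independent normal resistance R and load S, where P(R < S) has a closed form. The function names and example values are ours:

```python
import math
import random

def failure_prob_algebraic(mu_r, sd_r, mu_s, sd_s):
    """Closed-form failure probability P(R < S) for independent normal
    resistance R and load S: Phi(-beta) with the reliability index
    beta = (mu_r - mu_s) / sqrt(sd_r**2 + sd_s**2)."""
    beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

def failure_prob_monte_carlo(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=3):
    """Reference Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = sum(rng.gauss(mu_r, sd_r) < rng.gauss(mu_s, sd_s) for _ in range(n))
    return hits / n
```

The algebraic route is exact and instantaneous, while the simulation only approaches it as the sample count grows, which is the kind of advantage claimed for the report's method under its limited conditions.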

  3. INITIATION AND CONDUCT OF ADMINISTRATIVE PROCEDURE

    Directory of Open Access Journals (Sweden)

    Milan Stipic

    2013-12-01

Full Text Available The General Administrative Procedure Act contains legal norms that are valid for all identical cases. In addition to the general procedure, there are special administrative procedures, customized to specific administrative areas. Procedure initiation is regulated: an administrative procedure can be initiated at the request of the proponent or ex officio. When the official determines that the conditions for the conduct of an administrative procedure are met, all the facts and circumstances relevant to the resolution of the administrative matter have to be identified before a decision is made. When the legal requirements for the initiation of the procedure are not met, the official shall make a decision rejecting the application of the party. The procedure is initiated ex officio when stipulated by law or when protection of the public interest requires it. When initiating a procedure ex officio, the public authority shall take into consideration any petition or other information indicating the need to protect the public interest. In such cases the applicant is not a party, and the official is obliged to notify the applicant if initiation of the procedure ex officio is not accepted. Based on this notification, the applicant has a right to complain, including in the situation where there is no response within the prescribed period of 30 days. A public authority may, but is not obliged to, initiate an administrative procedure by public announcement only in a situation where the parties are unknown, while it is obliged to initiate the procedure by public announcement when this method of initiating the procedure is prescribed by law. Initiation of a procedure by public announcement occurs in rare cases. Owing to the principle of efficiency and cost-effectiveness, two or more administrative procedures can be merged into one procedure by a conclusion. The condition for this is that the rights or obligations of the parties are based on the same legal basis and on the same or

  4. The Alaska Commercial Fisheries Water Quality Sampling Methods and Procedures Manual

    Energy Technology Data Exchange (ETDEWEB)

    Folley, G.; Pearson, L.; Crosby, C. [Alaska Dept. of Environmental Conservation, Soldotna, AK (United States); DeCola, E.; Robertson, T. [Nuka Research and Planning Group, Seldovia, AK (United States)

    2006-07-01

    A comprehensive water quality sampling program was conducted in response to the oil spill that occurred when the M/V Selendang Ayu ship ran aground near a major fishing port at Unalaska Island, Alaska in December 2004. In particular, the sampling program focused on the threat of spilled oil to the local commercial fisheries resources. Spill scientists were unable to confidently model the movement of oil away from the wreck because of limited oceanographic data. In order to determine which fish species were at risk of oil contamination, a real-time assessment of how and where the oil was moving was needed, because the wreck became a continual source of oil release for several weeks after the initial grounding. The newly developed methods and procedures used to detect whole oil during the sampling program will be presented in the Alaska Commercial Fisheries Water Quality Sampling Methods and Procedures Manual which is currently under development. The purpose of the manual is to provide instructions to spill managers while they try to determine where spilled oil has or has not been encountered. The manual will include a meaningful data set that can be analyzed in real time to assess oil movement and concentration. Sections on oil properties and processes will be included along with scientific water quality sampling methods for whole and dissolved phase oil to assess potential contamination of commercial fishery resources and gear in Alaska waters during an oil spill. The manual will present a general discussion of factors that should be considered when designing a sampling program after a spill. In order to implement Alaska's improved seafood safety measures, the spatial scope of spilled oil must be known. A water quality sampling program can provide state and federal fishery managers and food safety inspectors with important information as they identify at-risk fisheries. 11 refs., 7 figs.

  5. Procedures of water desalination with solar energy and f-chart method

    Directory of Open Access Journals (Sweden)

    Petrović Andrija A.

    2015-01-01

    Full Text Available Due to rapid population growth and climate change caused by environmental pollution, the need for drinking water is increasing while freshwater resources are decreasing. A possible solution to freshwater scarcity can be found in water desalination procedures. In this article three representative solar-powered water desalination plants are described. In addition to explaining the processes, the basic advantages and disadvantages of humidification, reverse osmosis and evaporation desalination using solar energy are discussed. A simulation of the solar desalination system, located at 42 degrees north latitude, is analysed monthly with the f-chart method.

  6. SUPPORTING THE INDUSTRY BY DEVELOPING A DESIGN GUIDANCE FOR COMPUTER-BASED PROCEDURES FOR FIELD WORKERS

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna; LeBlanc, Katya

    2017-06-01

    The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human interacts with the procedures, which can be achieved through the use of computer-based procedures (CBPs). A CBP system offers a wide variety of improvements, such as context-driven job aids, integrated human performance tools, and dynamic step presentation. As a step toward the goal of improving procedure use performance, the U.S. Department of Energy Light Water Reactor Sustainability Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with CBPs. The main purpose of the CBP research conducted at the Idaho National Laboratory was to provide design guidance to the nuclear industry to be used by both utilities and vendors. After studying existing design guidance for CBP systems, the researchers concluded that the majority of it is intended for control room CBP systems and does not necessarily address the challenges of designing CBP systems for instructions carried out in the field. Further, the guidance is often presented at a high level, which leaves the designer to interpret what is meant and how to implement it specifically. The authors therefore developed design guidance specifically tailored to instructions that are carried out in the field.

  7. 24 CFR 1000.54 - What procedures apply to complaints arising out of any of the methods of providing for Indian...

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false What procedures apply to complaints arising out of any of the methods of providing for Indian preference? 1000.54 Section 1000.54 Housing and... ACTIVITIES General § 1000.54 What procedures apply to complaints arising out of any of the methods of...

  8. Application of DNA-based methods in forensic entomology.

    Science.gov (United States)

    Wells, Jeffrey D; Stevens, Jamie R

    2008-01-01

    A forensic entomological investigation can benefit from a variety of widely practiced molecular genotyping methods. The most commonly used is DNA-based specimen identification. Other applications include the identification of insect gut contents and the characterization of the population genetic structure of a forensically important insect species. The proper application of these procedures demands that the analyst be technically expert. However, one must also be aware of the extensive list of standards and expectations that many legal systems have developed for forensic DNA analysis. We summarize the DNA techniques that are currently used in, or have been proposed for, forensic entomology and review established genetic analyses from other scientific fields that address questions similar to those in forensic entomology. We describe how accepted standards for forensic DNA practice and method validation are likely to apply to insect evidence used in a death or other forensic entomological investigation.

  9. Physics-based preconditioning and the Newton-Krylov method for non-equilibrium radiation diffusion

    International Nuclear Information System (INIS)

    Mousseau, V.A.; Knoll, D.A.; Rider, W.J.

    2000-01-01

    An algorithm is presented for the solution of the time dependent reaction-diffusion systems which arise in non-equilibrium radiation diffusion applications. This system of nonlinear equations is solved by coupling three numerical methods, Jacobian-free Newton-Krylov, operator splitting, and multigrid linear solvers. An inexact Newton's method is used to solve the system of nonlinear equations. Since building the Jacobian matrix for problems of interest can be challenging, the authors employ a Jacobian-free implementation of Newton's method, where the action of the Jacobian matrix on a vector is approximated by a first order Taylor series expansion. Preconditioned generalized minimal residual (PGMRES) is the Krylov method used to solve the linear systems that come from the iterations of Newton's method. The preconditioner in this solution method is constructed using a physics-based divide and conquer approach, often referred to as operator splitting. This solution procedure inverts the scalar elliptic systems that make up the preconditioner using simple multigrid methods. The preconditioner also addresses the strong coupling between equations with local 2 x 2 block solves. The intra-cell coupling is applied after the inter-cell coupling has already been addressed by the elliptic solves. Results are presented using this solution procedure that demonstrate its efficiency while incurring minimal memory requirements
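
    The Jacobian-free matrix-vector product at the heart of this algorithm can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' implementation: it pairs SciPy's GMRES with a first-order finite-difference approximation of Jv, and omits the physics-based preconditioner and multigrid components described in the record.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(F, u0, tol=1e-8, max_newton=20, eps=1e-7):
    """Inexact Newton iteration in which the Jacobian-vector product
    J(u) v is approximated by a first-order finite difference, so the
    Jacobian matrix is never formed or stored."""
    u = u0.astype(float).copy()
    for _ in range(max_newton):
        Fu = F(u)
        if np.linalg.norm(Fu) < tol:
            break
        def jv(v, u=u, Fu=Fu):
            # J v ~= (F(u + eps*v) - F(u)) / eps  (first-order Taylor expansion)
            return (F(u + eps * v) - Fu) / eps
        J = LinearOperator((u.size, u.size), matvec=jv, dtype=float)
        du, _ = gmres(J, -Fu)   # Krylov solve for the Newton correction
        u += du
    return u

# Small worked example: solve u**2 = (4, 9) from the initial guess (1, 1).
root = jfnk_solve(lambda u: u * u - np.array([4.0, 9.0]), np.array([1.0, 1.0]))
```

    In the paper the GMRES iteration is additionally preconditioned by operator-split elliptic solves; the unpreconditioned sketch above only shows the matrix-free mechanics.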

  10. Defect assessment procedures at high temperature

    International Nuclear Information System (INIS)

    Ainsworth, R.A.

    1991-01-01

    A comprehensive assessment procedure for the high-temperature response of structures is being produced. The procedure is referred to as R5 and is written as a series of step-by-step instructions in a number of volumes. This paper considers in detail those parts of R5 which address the behaviour of defects. The defect assessment procedures may be applied to defects found in service, postulated defects, or defects formed during operation as a result of creep-fatigue loading. In the last case, a method is described for deducing from endurance data the number of cycles to initiate a crack of a specified size. Under steady loading, the creep crack tip parameter C* is used to assess crack growth. Under cyclic loading, the creep crack growth during dwell periods is still governed by C*, but crack growth due to cyclic excursions must also be included. This cyclic crack growth is described by an effective stress intensity factor range. A feature of the R5 defect assessment procedures is that they are based on simplified methods; approximate reference stress methods are described which enable C* in a component to be evaluated. It is shown by comparison with theoretical calculations and experimental data that reliable estimates of C* and the associated crack growth are obtained provided realistic creep strain rate data are used in the reference stress approximation. (orig./HP)
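
    The reference stress approximation mentioned in this record is commonly written as C* being the creep strain rate at the reference stress scaled by the square of the ratio of stress intensity factor to reference stress. A minimal numerical sketch follows; the function name, units, and input values are illustrative, not taken from the paper.

```python
def c_star_reference_stress(sigma_ref, creep_rate, K):
    """Reference stress estimate of the creep crack tip parameter:
        C* ~ sigma_ref * eps_dot_c(sigma_ref) * (K / sigma_ref)**2
    sigma_ref  : reference stress [MPa]
    creep_rate : creep strain rate evaluated at sigma_ref [1/h]
    K          : stress intensity factor [MPa*sqrt(m)]
    Returns C* in MPa*m/h (equivalently MJ/(m^2*h))."""
    return sigma_ref * creep_rate * (K / sigma_ref) ** 2

# Illustrative values only (not from the paper):
c = c_star_reference_stress(sigma_ref=100.0, creep_rate=1e-6, K=20.0)
```

    The sensitivity of the result to the creep strain rate input is why the record stresses the use of realistic creep strain rate data.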

  11. Dermal and inhalation acute toxic class methods: test procedures and biometric evaluations for the Globally Harmonized Classification System.

    Science.gov (United States)

    Holzhütter, H G; Genschow, E; Diener, W; Schlede, E

    2003-05-01

    The acute toxic class (ATC) methods were developed for determining LD(50)/LC(50) estimates of chemical substances with significantly fewer animals than needed when applying conventional LD(50)/LC(50) tests. The ATC methods are sequential stepwise procedures with fixed starting doses/concentrations and a maximum of six animals used per dose/concentration. The numbers of dead/moribund animals determine whether further testing is necessary or whether the test is terminated. In recent years we have developed classification procedures for the oral, dermal and inhalation routes of administration by using biometric methods. The biometric approach assumes a probit model for the mortality probability of a single animal and assigns the chemical to that toxicity class for which the best concordance is achieved between the statistically expected and the observed numbers of dead/moribund animals at the various steps of the test procedure. In previous publications we have demonstrated the validity of the biometric ATC methods on the basis of data obtained for the oral ATC method in two-animal ring studies with 15 participants from six countries. Although the test procedures and biometric evaluations for the dermal and inhalation ATC methods have already been published, there was a need for an adaptation of the classification schemes to the starting doses/concentrations of the Globally Harmonized Classification System (GHS) recently adopted by the Organization for Economic Co-operation and Development (OECD). Here we present the biometric evaluation of the dermal and inhalation ATC methods for the starting doses/concentrations of the GHS and of some other international classification systems still in use. 
We have developed new test procedures and decision rules for the dermal and inhalation ATC methods, which require significantly fewer animals yet provide predictions of toxicity classes that are equally good or even better than those achieved by using the conventional LD(50)/LC
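
    The biometric classification step described above, assigning the substance to the toxicity class with the best concordance between probit-expected and observed numbers of dead animals, can be sketched as follows. This is a schematic illustration, not the published evaluation: the probit slope, the least-squares discrepancy standing in for the concordance criterion, and the candidate LD50 values are hypothetical placeholders.

```python
import math

def probit_death_prob(dose, ld50, sigma=0.5):
    """Probit model: P(death) = Phi((log10(dose) - log10(LD50)) / sigma),
    with Phi the standard normal CDF and an assumed slope sigma."""
    z = (math.log10(dose) - math.log10(ld50)) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def best_class(observed, doses, class_ld50s, n_animals=6):
    """Assign the toxicity class whose probit-expected numbers of dead
    animals best match the observed counts at the tested doses."""
    def discrepancy(ld50):
        return sum((n_animals * probit_death_prob(d, ld50) - obs) ** 2
                   for d, obs in zip(doses, observed))
    return min(class_ld50s, key=discrepancy)

# Hypothetical class-representative LD50s and observed deaths out of
# six animals at doses of 50 and 300 mg/kg:
chosen = best_class(observed=[3, 6], doses=[50.0, 300.0],
                    class_ld50s=[5.0, 50.0, 300.0, 2000.0])
```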

  12. Performance of the Seven-step Procedure in Problem-based Hospitality Management Education

    Directory of Open Access Journals (Sweden)

    Wichard Zwaal

    2016-12-01

    Full Text Available The study focuses on the seven-step procedure (SSP) in problem-based learning (PBL). The way students apply the seven-step procedure helps us understand how students work in a problem-based learning curriculum. So far, little is known about how students rate the performance and importance of the different steps, the amount of time they spend on each step, and the perceived quality of execution of the procedure. A survey was administered to a sample of 101 students enrolled in a problem-based hospitality management program. Results show that students consider step six (collect additional information outside the group) to be most important. The highest performance rating is for step two (define the problem) and the lowest for step four (draw a systematic inventory of explanations from step three). Step seven is rated low in performance and high in importance, indicating that it needs urgent attention. The average amount of time spent on the seven steps is 133 minutes, with the largest share spent on self-study outside the group (42 minutes). The assessment of the execution of a set of specific guidelines (the Blue Card) did not completely match the overall performance ratings for the seven steps. The SSP could be improved by reducing the number of steps and paying more attention to group dynamics.

  13. A Case Study of Policies and Procedures to Address Cyberbullying at a Technology-Based Middle School

    Science.gov (United States)

    Tate, Bettina Polite

    2017-01-01

    This qualitative case study explored the policies and procedures used to effectively address cyberbullying at a technology-based middle school. The purpose of the study was to gain an in-depth understanding of policies and procedures used to address cyberbullying at a technology-based middle school in the southern United States. The study sought…

  14. An alternative method for noise analysis using pixel variance as part of quality control procedures on digital mammography systems

    International Nuclear Information System (INIS)

    Bouwman, R; Broeders, M; Van Engen, R; Young, K; Lazzari, B; Ravaglia, V

    2009-01-01

    According to the European Guidelines for quality assured breast cancer screening and diagnosis, noise analysis is one of the measurements that needs to be performed as part of quality control procedures on digital mammography systems. However, the method recommended in the European Guidelines does not discriminate sufficiently between systems with and without additional noise besides quantum noise. This paper presents an alternative and relatively simple method for noise analysis that separates noise into electronic noise, structured noise, and quantum noise. Quantum noise needs to be the dominant noise source in clinical images for optimal performance of a digital mammography system, and therefore the amount of electronic and structured noise should be minimal. For several digital mammography systems, the noise was separated into components based on the measured pixel value, the standard deviation (SD) of the image, and the detector entrance dose. The results showed that differences between systems exist. Our findings confirm that the proposed method is able to discriminate systems based on their noise performance and is able to detect possible quality problems. Therefore, we suggest replacing the current method for noise analysis described in the European Guidelines with the alternative method described in this paper.
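
    The physical basis of such a separation is that the three components scale differently with detector entrance dose D: electronic noise variance is dose-independent, quantum noise variance grows linearly with D, and structured noise variance grows quadratically. A minimal sketch of this idea is a quadratic fit of measured variance against dose; the function and the numbers below are illustrative, not the guideline procedure.

```python
import numpy as np

def decompose_noise(dose, sd):
    """Fit SD^2(D) = e + q*D + s*D^2 and interpret e as electronic,
    q*D as quantum, and s*D^2 as structured-noise variance (the
    coefficients should be non-negative for a physical split)."""
    s, q, e = np.polyfit(np.asarray(dose), np.asarray(sd) ** 2, 2)
    return {"electronic": e, "quantum": q, "structured": s}

# Simulated flat-field measurements (illustrative numbers only):
dose = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
true_var = 4.0 + 0.8 * dose + 0.001 * dose ** 2
parts = decompose_noise(dose, np.sqrt(true_var))
```

    On a well-behaved system the quantum term dominates the fit, which is exactly the condition the record says clinical imaging requires.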

  15. Procedure for extraction of disparate data from maps into computerized data bases

    Science.gov (United States)

    Junkin, B. G.

    1979-01-01

    A procedure is presented for extracting disparate sources of data from geographic maps and for the conversion of these data into a suitable format for processing on a computer-oriented information system. Several graphic digitizing considerations are included and related to the NASA Earth Resources Laboratory's Digitizer System. Current operating procedures for the Digitizer System are given in a simplified and logical manner. The report serves as a guide to those organizations interested in converting map-based data by using a comparable map digitizing system.

  16. Interlaboratory evaluation of the AOAC method and the A-1 procedure for recovery of fecal coliforms from foods.

    Science.gov (United States)

    Andrews, W H; Wilson, C R; Poelma, P L; Bullock, L K; McClure, F D; Gentile, D E

    1981-09-01

    An interlaboratory evaluation was made of the 96 h AOAC method and the 24 h A-1 procedure for the enumeration of fecal coliforms in samples of yellow corn meal, rye flour, mung beans, raw ground beef, and raw oyster homogenate. Results indicated that the efficiency of the A-1 procedure, measured in terms of recovery of fecal coliforms, and the reproducibility of that recovery were dependent on the particular food being analyzed. Accordingly, until its efficiency can be more fully demonstrated, the A-1 procedure is recommended only as a screening procedure for fecal coliforms in foods.

  17. A power set-based statistical selection procedure to locate susceptible rare variants associated with complex traits with sequencing data.

    Science.gov (United States)

    Sun, Hokeun; Wang, Shuang

    2014-08-15

    Existing association methods for rare variants from sequencing data have focused on aggregating variants in a gene or a genetic region because analysing individual rare variants is underpowered. However, these existing methods cannot identify which of the rare variants in a gene or a genetic region are associated with the complex diseases or traits. Once phenotypic associations of a gene or a genetic region are identified, the natural next step in an association study with sequencing data is to locate the susceptible rare variants within the gene or the genetic region. In this article, we propose a power set-based statistical selection procedure that is able to identify the locations of potentially susceptible rare variants within a disease-related gene or genetic region. The selection performance of the proposed procedure was evaluated through simulation studies, where we demonstrated its feasibility and superior power over several comparable existing methods. In particular, the proposed method is able to handle the mixed effects when both risk and protective variants are present in a gene or a genetic region. The proposed selection procedure was also applied to the sequence data on the ANGPTL gene family from the Dallas Heart Study to identify potentially susceptible rare variants within the trait-related genes. An R package 'rvsel' can be downloaded from http://www.columbia.edu/∼sw2206/ and http://statsun.pusan.ac.kr. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Gis-based procedures for hydropower potential spotting

    Energy Technology Data Exchange (ETDEWEB)

    Larentis, Dante G.; Collischonn, Walter; Tucci, Carlos E.M. [Instituto de Pesquisas Hidraulicas da UFRGS, Av. Bento Goncalves, 9500, CEP 91501-970, Caixa Postal 15029, Porto Alegre, RS (Brazil); Olivera, Francisco (Texas A&M University, Zachry Department of Civil Engineering 3136 TAMU, College Station, TX 77843-3136, US)

    2010-10-15

    The increasing demand for energy, especially from renewable and sustainable sources, spurs the development of small hydropower plants and encourages investment in new survey studies. Preliminary hydropower survey studies usually carry huge uncertainties about the technical, economic and environmental feasibility of the undeveloped potential. This paper presents a methodology for large-scale survey of hydropower potential sites to be applied in the inception phase of hydroelectric development planning. The sequence of procedures to identify hydropower sites is based on remote sensing and regional streamflow data and was automated within a GIS-based computational program: Hydrospot. The program allows spotting more potential sites along the drainage network than would be possible in a traditional survey study, providing different types of dam-powerhouse layouts and two types (operating modes) of projects: run-of-the-river and storage projects. Preliminary results from its application in a hydropower-developed basin in Brazil have shown Hydrospot's limitations and potential in supporting the mid-to-long-term planning of the electricity sector. (author)
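
    The screening of candidate sites in such a survey ultimately rests on the hydropower equation, power as the product of water density, gravity, discharge, head, and overall efficiency, with discharge taken from regional streamflow data and head from the terrain model. A minimal sketch follows; Hydrospot's internals are not described in the record, so the function and the efficiency value are illustrative assumptions.

```python
def hydro_power_kw(flow_m3s, head_m, efficiency=0.85):
    """Gross hydropower potential P = rho * g * Q * H * eta, in kW.
    rho = 1000 kg/m^3 and g = 9.81 m/s^2; the efficiency is an assumed
    overall turbine/generator value, not a figure from the paper."""
    rho, g = 1000.0, 9.81
    return rho * g * flow_m3s * head_m * efficiency / 1000.0

# e.g. a site with 12 m3/s of discharge across a 25 m head:
p = hydro_power_kw(12.0, 25.0)   # ~2502 kW
```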

  19. Residual stress effects in LMFBR fracture assessment procedures

    International Nuclear Information System (INIS)

    Hooton, D.G.

    1984-01-01

    Two post-yield fracture mechanics methods, which have been developed into fully detailed failure assessment procedures for ferritic structures, have been reviewed from the point of view of the manner in which as-welded residual stress effects are incorporated, and comparisons then made with finite element and theoretical models of centre-cracked plates containing residual/thermal stresses in the form of crack-driving force curves. Applying the procedures to austenitic structures, comparisons are made in terms of failure assessment curves and it is recommended that the preferred method for the prediction of critical crack sizes in LMFBR austenitic structures containing as-welded residual stresses is the CEGB-R6 procedure based on a flow stress defined at 3% strain in the parent plate. When the prediction of failure loads in such structures is required, it is suggested that the CEGB-R6 procedure be used with residual/thermal stresses factored to give a maximum total stress of flow stress magnitude

  20. A PBOM configuration and management method based on templates

    Science.gov (United States)

    Guo, Kai; Qiao, Lihong; Qie, Yifan

    2018-03-01

    The design of Process Bill of Materials (PBOM) plays a pivotal role in product development. This paper analyses the requirements of PBOM configuration design and management for complex products, including the reuse of configuration procedures and the pressing need to manage large quantities of product-family PBOM data. Based on this analysis, a function framework for PBOM configuration and management has been established. Configuration templates and modules are defined in the framework to support the customization and reuse of the configuration process. The configuration process of a detection sensor PBOM is presented as an illustrative case. Rapid and agile PBOM configuration and management can be achieved using the template-based method, which is of vital significance for improving development efficiency for complex products.

  1. Imaging guided interventional procedures in paediatric uroradiology--a case based overview

    Energy Technology Data Exchange (ETDEWEB)

    Riccabona, M. E-mail: michael.riccabona@kfunigraz.ac.at; Sorantin, E.; Hausegger, K

    2002-08-01

    Objective: To describe the potential and application of interventional image guided procedures in the paediatric urinary tract. Patients and methods: The different techniques are illustrated using case reports. The examples comprise established indications such as percutaneous nephrostomy for compromised kidneys in obstructive uropathy and infection, sonographically guided renal biopsy including monitoring or treatment of complications after biopsy, and evaluation and balloon dilatation of childhood renal artery stenosis. There are new applications such as treatment of stenosis in cutaneous ureterostomy or sonographically guided catheterisation for the deployment of therapeutic agents. Results: Generally, the procedures are safe and successful. However, complications may occur, and peri-/post-interventional monitoring is mandatory to ensure early detection and adequate management. Sometimes additional treatment, such as percutaneous embolisation of a symptomatic post-biopsy arterio-venous fistula or a second biopsy for recurrent disease, may become necessary. Conclusion: Imaging guided interventional procedures are performed successfully in a variety of diseases of the paediatric urinary tract. They can be considered a valuable additional modality throughout infancy and childhood.

  2. Imaging guided interventional procedures in paediatric uroradiology--a case based overview

    International Nuclear Information System (INIS)

    Riccabona, M.; Sorantin, E.; Hausegger, K.

    2002-01-01

    Objective: To describe the potential and application of interventional image guided procedures in the paediatric urinary tract. Patients and methods: The different techniques are illustrated using case reports. The examples comprise established indications such as percutaneous nephrostomy for compromised kidneys in obstructive uropathy and infection, sonographically guided renal biopsy including monitoring or treatment of complications after biopsy, and evaluation and balloon dilatation of childhood renal artery stenosis. There are new applications such as treatment of stenosis in cutaneous ureterostomy or sonographically guided catheterisation for the deployment of therapeutic agents. Results: Generally, the procedures are safe and successful. However, complications may occur, and peri-/post-interventional monitoring is mandatory to ensure early detection and adequate management. Sometimes additional treatment, such as percutaneous embolisation of a symptomatic post-biopsy arterio-venous fistula or a second biopsy for recurrent disease, may become necessary. Conclusion: Imaging guided interventional procedures are performed successfully in a variety of diseases of the paediatric urinary tract. They can be considered a valuable additional modality throughout infancy and childhood

  3. EML procedures manual

    International Nuclear Information System (INIS)

    Volchok, H.L.; de Planque, G.

    1982-01-01

    This manual contains the procedures that are used currently by the Environmental Measurements Laboratory of the US Department of Energy. In addition a number of analytical methods from other laboratories have been included. These were tested for reliability at the Battelle, Pacific Northwest Laboratory under contract with the Division of Biomedical and Environmental Research of the AEC. These methods are clearly distinguished. The manual is prepared in loose leaf form to facilitate revision of the procedures and inclusion of additional procedures or data sheets. Anyone receiving the manual through EML should receive this additional material automatically. The contents are as follows: (1) general; (2) sampling; (3) field measurements; (4) general analytical chemistry; (5) chemical procedures; (6) data section; (7) specifications

  4. Incorporating mesh-insensitive structural stress into the fatigue assessment procedure of common structural rules for bulk carriers

    Directory of Open Access Journals (Sweden)

    Seong-Min Kim

    2015-01-01

    Full Text Available This study introduces a fatigue assessment procedure using the mesh-insensitive structural stress method based on the Common Structural Rules (CSR) for Bulk Carriers, considering important factors such as mean stress and thickness effects. The fatigue assessment results of the mesh-insensitive structural stress method have been compared with the CSR procedure based on equivalent notch stress at major hot spot points in the area near the ballast hold of a 180 K bulk carrier. The possibility of implementing the mesh-insensitive structural stress method in the fatigue assessment procedure for ship structures is discussed.

  5. [Spa-based water procedures and complications in the course of their performance].

    Science.gov (United States)

    Persiianova-Dubrova, A L; Badalov, N G; L'vova, N V; Krikorova, S A; Tupitsyna, Iu Iu; Uianaeva, A I; Barashkov, G N; Povazhnaia, E L

    2010-01-01

    The scope of application of spa-based water procedures has expanded considerably in recent years; accordingly, the frequency of complications and accidents has increased. The present review considers the possible mechanisms underlying such complications and the various factors promoting their development, such as the patients' age, chronic diseases, and the use of pharmaceutical products affecting thermoregulation and cardiovascular function. Special attention is given to the influence of alcohol consumption on the frequency of complications, accidents, and sudden death associated with hyperthermal procedures, the possibility of infectious diseases, and the measures necessary for their prevention.

  6. An automated background estimation procedure for gamma ray spectra

    International Nuclear Information System (INIS)

    Tervo, R.J.; Kennett, T.J.; Prestwich, W.V.

    1983-01-01

    An objective and simple method has been developed to estimate the background continuum in Ge gamma ray spectra. Requiring no special procedures, the method is readily automated. Based upon the inherent statistical properties of the experimental data itself, nodes, which represent background samples, are located and used to produce an estimate of the continuum. A simple procedure to interpolate between nodes is reported, and a range of typical experimental data is presented. All information necessary to implement this technique is given, including the relevant properties of the various factors involved in its development. (orig.)
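
    The node-based idea can be illustrated with a deliberately simplified stand-in: treat channels that are local minima as background nodes (peaks rise above the continuum, so minima tend to sample it) and interpolate linearly between them. The published method selects nodes from the statistical properties of the data rather than by this crude minimum test, so the sketch below only conveys the overall structure.

```python
import numpy as np

def estimate_background(counts, window=5):
    """Estimate a gamma-spectrum continuum: take channels that are
    local minima over a +/-window neighbourhood as background nodes,
    then interpolate linearly between the nodes."""
    counts = np.asarray(counts, dtype=float)
    n = counts.size
    nodes = [i for i in range(n)
             if counts[i] == counts[max(0, i - window):i + window + 1].min()]
    return np.interp(np.arange(n), nodes, counts[nodes])

# Synthetic example: flat continuum at 50 counts plus one Gaussian peak.
x = np.arange(100)
spectrum = 50.0 + 80.0 * np.exp(-(x - 50.0) ** 2 / 8.0)
background = estimate_background(spectrum)
```

    Subtracting the interpolated continuum from the spectrum leaves the net peak, which is the quantity of interest in subsequent peak analysis.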

  7. Implementation of PID autotuning procedure in PLC controller

    Directory of Open Access Journals (Sweden)

    Daniun Marcin

    2017-01-01

    Full Text Available In this paper, we present an automatic PID tuning procedure based on the Method of Moments and the AMIGO tuning rules. The advantage of the Method of Moments is that the time constant and transport delay are estimated from areas under the response curve rather than from individual points, which makes the estimates highly resistant to measurement noise. Sensitivity to measurement noise is a serious problem in other autotuning methods. A second advantage of this method is that during the identification process it approximates the plant with a first-order model with time delay. We combined the Method of Moments with the AMIGO tuning rules and implemented this combination as a stand-alone autotuning procedure in a Siemens S7-1200 PLC controller. We then compared this method with the two built-in PID autotuning procedures available in the Siemens S7-1200 PLC controller. The procedure was tested for three types of plant models: with lag-dominated, balanced, and delay-dominated dynamics. We simulated the plants on a PC in Matlab R2013a. The connection between the PC and the PLC was maintained through a National Instruments data acquisition board, NI PCI-6229. We conducted tests for step changes in the set point, trajectory tracking, and load disturbances. To assess control quality, we used the IAE index. We limited our research to the PI algorithm. The results show that the proposed method outperformed the two built-in tuning methods provided by Siemens by margins ranging from a few percent to more than ten percent in most cases. The proposed method is universal and can be implemented in any PLC controller.
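
    The two ingredients named in the abstract, area-based ("method of moments") identification of a first-order-plus-time-delay model followed by the AMIGO PI rules, can be sketched as follows. This is our reconstruction of the general technique, not the code deployed on the S7-1200.

```python
import numpy as np

def _trapz(f, t):
    """Trapezoidal integration (written out for NumPy-version portability)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

def fotd_from_step(t, y, du):
    """Area-based identification of K*exp(-L*s)/(1 + T*s) from a step
    response of size du: integrals, not individual samples, determine
    K, T and L, which is what gives the noise robustness."""
    y_inf = y[-1]
    K = y_inf / du
    A0 = _trapz(y_inf - y, t)                    # area above the response
    Tar = A0 / y_inf                             # average residence time L + T
    A1 = _trapz(np.where(t <= Tar, y, 0.0), t)   # area under y up to Tar
    T = np.e * A1 / y_inf
    return K, T, Tar - T                         # K, T, L

def amigo_pi(K, T, L):
    """AMIGO PI rules (Astrom and Hagglund): gain and integral time."""
    Kc = (1.0 / K) * (0.15 + (0.35 - L * T / (L + T) ** 2) * T / L)
    Ti = 0.35 * L + 13.0 * L * T ** 2 / (T ** 2 + 12.0 * L * T + 7.0 * L ** 2)
    return Kc, Ti

# Recover K=2, T=10, L=2 from a simulated noise-free step response:
t = np.linspace(0.0, 200.0, 20001)
y = np.where(t >= 2.0, 2.0 * (1.0 - np.exp(-(t - 2.0) / 10.0)), 0.0)
K, T, L = fotd_from_step(t, y, du=1.0)
Kc, Ti = amigo_pi(K, T, L)
```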

  8. Survey of effective doses to patients undergoing contrast-based X-ray fluoroscopy procedures in Tanzania

    International Nuclear Information System (INIS)

    Ngaile, J.E.; Msaki, P.K.; Kazema, R.R.

    2017-01-01

    The aim of this study was to assess the radiation burden imparted to patients from contrast-based X-ray fluoroscopy procedures in Tanzania. The effective doses (EDs) to patients from five contrast-based fluoroscopy procedures were obtained from four hospitals. The ED was estimated using knowledge of the patient characteristics, patient-related exposure parameters, measurements of air kerma area product, and the PCXMC software. The median EDs for the barium swallow (BS), barium meal (BM), barium enema (BE), hysterosalpingography (HSG) and retrograde urethrography (RUG) were 0.50, 1.43, 2.83, 0.65 and 0.59 mSv, respectively. The median ED per hospital varied by factors of up to 9.9 and 4.2 for the BS and BM procedures, respectively, and by factors of up to 2.3, 2.4 and 4.3 for the BE, HSG and RUG, respectively. The overall differences between individual EDs across the four hospitals varied by factors of up to 53, 58.9 and 11.4 for the BS, BM and BE, respectively, and by factors of up to 22 and 46.7 for the HSG and RUG, respectively. The mean EDs in this study were mostly lower than reported values from Spain, the UK, Ghana and Greece, while slightly higher than those reported from India. The observed wide variations of procedural protocols and patient doses within and across the hospitals, together with the high patient doses observed in this study relative to those in the literature, underscore the need to standardize procedural protocols and optimize contrast-based fluoroscopy procedures. (authors)
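
    The dose quantity at the core of such surveys is straightforward: the effective dose is estimated as the measured kerma-area product multiplied by an examination-specific conversion coefficient, typically derived with Monte Carlo software such as PCXMC. A minimal sketch follows; the coefficient value below is illustrative, not a figure from the study.

```python
def effective_dose_msv(kap_gy_cm2, conversion_msv_per_gy_cm2):
    """Effective dose (mSv) estimated as the kerma-area product
    (Gy*cm^2) times an examination-specific conversion coefficient
    (mSv per Gy*cm^2). Coefficients depend on the examination, patient
    model and beam quality; the value used below is illustrative."""
    return kap_gy_cm2 * conversion_msv_per_gy_cm2

# e.g. a KAP of 10 Gy*cm^2 with an assumed coefficient of 0.2:
ed = effective_dose_msv(10.0, 0.2)   # -> 2.0 mSv
```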

  9. Light Water Reactor Sustainability Program: Computer-based procedure for field activities: results from three evaluations at nuclear power plants

    International Nuclear Information System (INIS)

    2014-01-01

    Nearly all activities that involve human interaction with the systems of a nuclear power plant are guided by procedures. The paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety; however, improving procedure use could yield tremendous gains in efficiency and safety. One potential way to improve procedure-based activities is through the use of computer-based procedures (CBPs). Computer-based procedures provide the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training, into the CBP system. One obvious advantage of this capability is a reduction in the time spent tracking down applicable documentation. Additionally, human performance tools can be integrated into the CBP system in such a way that they help the worker focus on the task rather than on the tools themselves. Some tools can be completely incorporated into the CBP system, such as pre-job briefs, placekeeping, correct component verification, and peer checks. Other tools can be partly integrated in a fashion that reduces the time and labor required, such as concurrent and independent verification. Another benefit of CBPs compared to PBPs is dynamic procedure presentation. PBPs are static documents, which limits the degree to which the information presented can be tailored to the task and to the conditions under which the procedure is executed. A CBP system can be configured to display only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) guides the user down the path of relevant steps based on the current conditions. This feature reduces the user's workload and inherently reduces both the risk of incorrectly marking a step as not applicable and the risk of performing a step that should have been marked as not applicable. As part of the Department of Energy's (DOE) Light Water Reactor Sustainability Program
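    As a concrete illustration of context-sensitive step filtering, the sketch below tags each step with the plant modes in which it applies and shows only the relevant ones. All names here (the `Step` structure, the `modes` field, the example step texts) are hypothetical, invented for this sketch; a real CBPS would evaluate applicability against live plant-status data.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    text: str
    # Plant modes in which the step applies; "*" means every mode.
    modes: set = field(default_factory=lambda: {"*"})

def applicable_steps(steps, plant_mode):
    """Return only the steps relevant to the current plant mode, so the
    user never has to mark steps 'not applicable' by hand."""
    return [s for s in steps if "*" in s.modes or plant_mode in s.modes]

# Hypothetical procedure fragment:
procedure = [
    Step("Verify pump A is running", {"startup", "power"}),
    Step("Open isolation valve V-101"),                      # applies in every mode
    Step("Place system in shutdown cooling", {"shutdown"}),
]
shown = applicable_steps(procedure, "startup")
```

    The same predicate could take live plant status (not just a mode string) as input, which is what makes the presentation dynamic rather than a static document.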

  10. Quality-assurance procedures: Method 5G determination of particulate emissions from wood heaters from a dilution tunnel sampling location

    Energy Technology Data Exchange (ETDEWEB)

    Ward, T.E.; Hartman, M.W.; Olin, R.C.; Rives, G.D.

    1989-06-01

    Quality-assurance procedures are contained in this comprehensive document intended to be used as an aid for wood-heater manufacturers and testing laboratories in performing particulate matter sampling of wood heaters according to EPA protocol, Method 5G. These procedures may be used in research and development, and as an aid in auditing and certification testing. A detailed, step-by-step quality assurance guide is provided to aid in the procurement and assembly of testing apparatus, to clearly describe the procedures, and to facilitate data collection and reporting. Suggested data sheets are supplied that can be used as an aid for both recordkeeping and certification applications. Throughout the document, activity matrices are provided to serve as a summary reference. Checklists are also supplied that can be used by testing personnel. Finally, for the purposes of ensuring data quality, procedures are outlined for apparatus operation, maintenance, and traceability. These procedures combined with the detailed description of the sampling and analysis protocol will help ensure the accuracy and reliability of Method 5G emission-testing results.

  11. Plastic freezer bags: a cost-effective method to protect extraction sites in laparoscopic colorectal procedures?

    Science.gov (United States)

    Huynh, Hai P; Musselman, Reilly P; Trottier, Daniel C; Soto, Claudia M; Poulin, Eric C; Mamazza, Joseph; Boushey, Robin P; Auer, Rebecca C; Moloo, Husein

    2013-10-01

    To review surgical-site infection (SSI) and retrieval-site tumor recurrence rates in laparoscopic colorectal procedures when a plastic freezer bag is used as a wound protector. Laparoscopic colorectal procedures in which a plastic freezer bag was used as a wound protector at the extraction site were reviewed between 1991 and 2008 from a prospectively collected database. The χ² test was used to compare SSI and tumor recurrence rates between groups. Costing data were obtained from the operating room supplies department. A total of 936 cases with 51 (5.45%) surgical-site infections were identified. SSI rates did not differ between groups based on demographic factors, diagnosis, or location of procedure. The retrieval-site tumor recurrence rate was 0.21% (1/474). The cost of plastic freezer bags, including sterilization, ranged from $0.25 to $3. Plastic freezer bags used as wound protectors in laparoscopic colorectal procedures are cost effective and have SSI and retrieval-site tumor recurrence rates that compare favorably with published data.

  12. AGREED-UPON PROCEDURES, PROCEDURES FOR AUDITING EUROPEAN GRANTS

    Directory of Open Access Journals (Sweden)

    Daniel Petru VARTEIU

    2016-12-01

    The audit of EU-funded projects is an audit based on agreed-upon procedures, which are established by the Managing Authority or the Intermediate Body. Agreed-upon procedures can be defined as engagements performed in accordance with ISRS 4400, the standard applicable to agreed-upon procedures, in which the auditor undertakes to carry out the agreed-upon procedures and to issue a report on factual findings. The report provided by the auditor expresses no assurance; it allows users to form their own opinions about the conformity of the expenses with the project budget as well as the eligibility of those expenses.

  13. Development of interim test methods and procedures for determining the performance of small photovoltaic systems

    Energy Technology Data Exchange (ETDEWEB)

    McNutt, P.; Kroposki, B.; Hansen, R.; Algra, K.; DeBlasio, R. [National Renewable Energy Lab., Golden, CO (United States)

    1998-09-01

    The National Renewable Energy Laboratory (NREL) is developing tests and procedures that will determine whether the configuration of a small photovoltaic (PV) system is suitable for its intended use, and whether the system will perform as specified. An overview of these procedures is presented in this paper. Development of standard test procedures will allow designers, manufacturers, system integrators, users, and independent laboratories to assess the performance of PV systems under prevailing outdoor conditions. An NREL Technical Report detailing the procedures is in preparation, and the IEEE Standards Coordinating Committee 21 (SCC21) has established a project on this subject. The work will be submitted to the IEEE SCC21 and the International Electrotechnical Commission Technical Committee 82 (IEC TC82) for consideration as a consensus standard. Certification bodies such as PowerMark and the PV Global Approval Program (PVGAP) may adopt the IEC and IEEE documents when testing systems. Developing standardized test methods and procedures at NREL to evaluate the outdoor performance of PV systems will encourage product quality and promote PV standards development. Standardized tests will give users confidence that PV systems will perform as specified for their intended applications. As confidence in PV systems increases, the successful commercialization of PV will grow internationally.

  14. Refinement procedure for the image alignment in high-resolution electron tomography.

    Science.gov (United States)

    Houben, L; Bar Sadan, M

    2011-01-01

    High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is particularly true at ultra-high resolution, where even very small misalignments between individual images can dramatically reduce the fidelity of the resulting reconstruction. A reconstruction-based, marker-free method is proposed that iteratively optimises the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis, it provides the required correlation over large tilt-angle separations and guarantees a consistent alignment of images across the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive with marker-based reference procedures at lower resolution, and yields sub-pixel accuracy even for simulated high-resolution data. Copyright © 2011 Elsevier B.V. All rights reserved.
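    The core idea (search for the shift that maximises contrast in the reconstruction) can be illustrated with a one-dimensional toy analogue. This is not the authors' algorithm; it merely shows why variance works as an alignment score: misaligned copies smear the summed signal and lower its variance, while aligned copies reinforce each other.

```python
def contrast(signal):
    """Variance of a signal, used as the alignment score."""
    m = sum(signal) / len(signal)
    return sum((x - m) ** 2 for x in signal) / len(signal)

def shift(sig, k):
    """Circular shift by k samples (toy stand-in for an image translation)."""
    return sig[-k:] + sig[:-k] if k else list(sig)

def align(ref, img, max_shift=5):
    """Return the shift of `img` that maximises the contrast of ref + img."""
    best_k, best_c = 0, -1.0
    for k in range(-max_shift, max_shift + 1):
        candidate = shift(img, k)
        c = contrast([a + b for a, b in zip(ref, candidate)])
        if c > best_c:
            best_k, best_c = k, c
    return best_k

# A spike profile and a copy displaced by 3 samples:
ref = [0.0] * 16
ref[5], ref[6], ref[7] = 1.0, 2.0, 1.0
img = shift(ref, 3)
recovered = align(ref, img)   # the shift that re-registers img onto ref
```

    In the paper the score is evaluated on tomogram sub-volumes after a trial reconstruction, but the search structure is the same: propose an alignment, measure contrast, keep the best.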

  15. An interactive simulation-based education system for BWR emergency, procedure guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Tanikawa, Naoshi; Shida, Touichi [Hitachi Ltd (Japan). Hitachi Works; Ujita, Hiroshi; Yokota, Takeshi; Kato, Kanji [Hitachi Ltd, (Japan). Energy Research Lab.

    1994-12-31

    When applying EPGs (Emergency Procedure Guidelines), an operator decides on the operational procedure by predicting the change of parameters from the plant status, because EPGs are written in a symptom-based style for emergency conditions. Technical knowledge of plant behavior and operation is necessary for operators to understand the EPGs. An interactive simulation-based education system, EPG-ICAI (Intelligent Computer Assisted Instruction), has been developed to help BWR plant operators acquire knowledge of the EPGs. EPG-ICAI is designed to provide effective education through step-by-step study on an interactive real-time simulator, and individualized education through an intelligent tutoring function. (orig.) (2 refs., 7 figs., 1 tab.).

  16. An interactive simulation-based education system for BWR emergency, procedure guidelines

    International Nuclear Information System (INIS)

    Tanikawa, Naoshi; Shida, Touichi; Ujita, Hiroshi; Yokota, Takeshi; Kato, Kanji

    1994-01-01

    When applying EPGs (Emergency Procedure Guidelines), an operator decides on the operational procedure by predicting the change of parameters from the plant status, because EPGs are written in a symptom-based style for emergency conditions. Technical knowledge of plant behavior and operation is necessary for operators to understand the EPGs. An interactive simulation-based education system, EPG-ICAI (Intelligent Computer Assisted Instruction), has been developed to help BWR plant operators acquire knowledge of the EPGs. EPG-ICAI is designed to provide effective education through step-by-step study on an interactive real-time simulator, and individualized education through an intelligent tutoring function. (orig.) (2 refs., 7 figs., 1 tab.)

  17. Highly Efficient Procedure for the Synthesis of Fructone Fragrance Using a Novel Carbon based Acid

    Directory of Open Access Journals (Sweden)

    Xuezheng Liang

    2010-08-01

    A novel carbon-based acid has been synthesized via one-step hydrothermal carbonization of furaldehyde and hydroxyethylsulfonic acid. A highly efficient procedure for the synthesis of fructone using this novel carbon-based acid has been developed. The results showed that the catalyst possessed high activity for the reaction, giving a yield of over 95%. Its high activity, stability, reusability, low cost, simple synthesis, and wide applicability to various diols and β-keto esters make this novel carbon-based acid one of the best choices for the reaction.

  18. Algorithm for Video Summarization of Bronchoscopy Procedures

    Directory of Open Access Journals (Sweden)

    Leszczuk Mikołaj I

    2011-12-01

    Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist, who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, automatic recording of the whole procedure enables the bronchoscopist to focus solely on the procedures being performed. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. Such frames are unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions, which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. The system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, of frames showing the branching of the airways, and of frames including pathological lesions. Conclusions
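    The abstract does not spell out the blur criterion, but a common sharpness proxy for flagging "non-informative" (blurred) frames is the variance of a Laplacian filter response. The sketch below is an illustrative stand-in for that idea, not the authors' algorithm; the threshold value is arbitrary.

```python
def laplacian_variance(img):
    """Sharpness proxy: variance of a 4-neighbour Laplacian over the
    interior pixels. Blurred frames give values near zero."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            vals.append(img[y - 1][x] + img[y + 1][x]
                        + img[y][x - 1] + img[y][x + 1]
                        - 4 * img[y][x])
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def keep_frame(img, threshold=10.0):
    """Discard a frame as 'non-informative' when it is too blurred.
    The threshold is arbitrary, for illustration only."""
    return laplacian_variance(img) >= threshold

# A sharp high-contrast pattern vs. a featureless (fully blurred) frame:
sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
blurred = [[128] * 8 for _ in range(8)]
```

    A production system would combine several such detectors (blur, airway branching, lesions) and operate on decoded video frames rather than toy arrays.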

  19. Function Allocation in Complex Socio-Technical Systems: Procedure usage in nuclear power and the Context Analysis Method for Identifying Design Solutions (CAMIDS) Model

    Science.gov (United States)

    Schmitt, Kara Anne

    This research aims to prove that strict adherence to procedures and rigid compliance with process in the US nuclear industry may not prevent incidents or increase safety. According to the Institute of Nuclear Power Operations, the nuclear power industry has seen a recent rise in events, and this research claims that a contributing factor to this rise is organizational and cultural, rooted in people's overreliance on procedures and policy. Understanding the proper balance of function allocation, automation, and human decision-making is imperative to creating a nuclear power plant that is safe, efficient, and reliable. This research claims that new generations of operators are less engaged and think less for themselves because they have been instructed to follow procedures to a fault. According to operators, they were once expected to know the plant and its interrelations, but organizations now place more importance on following procedure and policy. Literature reviews were performed, experts were questioned, and a model for context analysis was developed. The Context Analysis Method for Identifying Design Solutions (CAMIDS) Model was created, verified, and validated through both peer review and application in real-world scenarios in active nuclear power plant simulators. These experiments supported the claim that strict adherence and rigid compliance to procedures may not increase safety, by studying the industry's propensity for following incorrect procedures and the cases in which doing so directly affects the safety or security of the plant. The findings of this research indicate that the younger generations of operators rely heavily on procedures, and that the organizational pressure of required compliance may lead to incidents within the plant because operators feel pressured into following rules and policy rather than performing the correct actions in a timely manner. The findings support computer-based procedures, efficient alarm systems, and skill-of-the-craft matrices.
The solution to

  20. Procedure for seismic evaluation and design of small bore piping

    International Nuclear Information System (INIS)

    Bilanin, W.; Sills, S.

    1991-01-01

    Simplified methods for the seismic design of small bore piping in nuclear power plants have been used for many years. Designers have developed a variety of unique methods to treat the large number of Class 2 and 3 small bore piping systems. This practice has led to a proliferation of methods that are not standardized across the industry. These methods are generally based on enveloping the results of rigorous dynamic or conservative static analyses, and they result in an excessive number of supports and unrealistically high support loadings. Experience and test data have become available that warranted taking another look at the present methods for the analysis of small bore piping. A recently completed activity by the Electric Power Research Institute and NCIG (a utility group) developed a new procedure for the seismic design and evaluation of small bore piping that provides significant safety and cost benefits. The procedure streamlines the approach to inertial stresses, which is the main feature behind these benefits. The procedure's criteria for seismic anchor movement and support design are based on analysis and focus the designer on credible failure mechanisms. A walkdown of the as-constructed piping system to identify and eliminate undesirable piping features, such as adverse spatial interaction, is required

  1. Suprapubic cystostomy for neurogenic bladder using Lowsley retractor method: a procedure revisited.

    Science.gov (United States)

    Edokpolo, Leonard U; Foster, Harris E

    2011-11-01

    To report our experience with the Lowsley retractor method for suprapubic cystostomy (SPC) in patients with neurogenic bladder (NGB). A retrospective study was performed of 44 patients with NGB who underwent SPC with the Lowsley retractor method. The subjects were selected from 90 patients undergoing SPC by 1 surgeon from 1995 to 2010. The age, sex, indication, anesthesia type, catheter type, blood loss, fluids administered, duration, and complications were recorded. A total of 49 primary catheter placements were performed in 44 patients: 23 men and 21 women. The etiology of NGB was spinal cord injury or multiple sclerosis in 38 subjects (86%). The mean age was 44 years (range 18-86). The cases were performed under general anesthesia, except for 8 (16%) that were successfully performed with local and monitored anesthesia. The operation time, documented in 19 cases (39%), was 20.2 ± 5.5 minutes (range 11-31). The Foley catheter size ranged from 16F to 22F. Blood loss was minimal, and there were no intraoperative complications or incorrect catheter placements. One patient returned with significant hematuria 1 day after the procedure; no other minor or major complications were noted. Patients with NGB have been shown to have a greater risk of complication during percutaneous suprapubic catheter placement. SPC using the Lowsley retractor was described by Zeidman et al in 1988, but their report did not detail the patient characteristics or operative experience, and to our knowledge no other institutional experience with the technique has been reported. The present report describes the Lowsley retractor method as a quick and safe ambulatory procedure for patients with NGB. Published by Elsevier Inc.

  2. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    International Nuclear Information System (INIS)

    Toshimitsu, Fujisawa; Genki, Yagawa

    2003-01-01

    In this paper, a FEM-based (finite element method) mesh-free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from mesh generation to the solution of the system of equations, can be performed in parallel on a node-by-node basis. A local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of the finite element calculation are based on nodes, and parallel computing is realized by dividing the system of equations by the rows of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates well-conditioned point sets for the nodes of the finite element mesh. Furthermore, the probabilistic node generation technique can itself be performed in parallel. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which this kind of analysis requires, can be performed effectively on parallel processors using the proposed method. (authors)

  3. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    Energy Technology Data Exchange (ETDEWEB)

    Toshimitsu, Fujisawa [Tokyo Univ., Collaborative Research Center of Frontier Simulation Software for Industrial Science, Institute of Industrial Science (Japan); Genki, Yagawa [Tokyo Univ., Department of Quantum Engineering and Systems Science (Japan)

    2003-07-01

    In this paper, a FEM-based (finite element method) mesh-free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from mesh generation to the solution of the system of equations, can be performed in parallel on a node-by-node basis. A local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of the finite element calculation are based on nodes, and parallel computing is realized by dividing the system of equations by the rows of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates well-conditioned point sets for the nodes of the finite element mesh. Furthermore, the probabilistic node generation technique can itself be performed in parallel. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which this kind of analysis requires, can be performed effectively on parallel processors using the proposed method. (authors)
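    Dividing the system of equations by the rows of the global coefficient matrix, as both records describe, can be sketched for a sparse matrix in CSR storage. This is an illustrative serial sketch, not the authors' code: in a real parallel run each row block would be handed to its own process, and the example 3×3 matrix is invented.

```python
def rowwise_partition(n_rows, n_workers):
    """Split the equation rows into contiguous blocks, one per worker."""
    base, extra = divmod(n_rows, n_workers)
    blocks, start = [], 0
    for w in range(n_workers):
        size = base + (1 if w < extra else 0)
        blocks.append(range(start, start + size))
        start += size
    return blocks

def spmv_rows(rows, indptr, indices, data, x):
    """Sparse matrix-vector product restricted to one row block (CSR storage)."""
    out = []
    for i in rows:
        acc = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            acc += data[k] * x[indices[k]]
        out.append(acc)
    return out

# 3x3 example system in CSR form:
#   [2 0 1]
#   [0 3 0]
#   [4 0 5]
indptr = [0, 2, 3, 5]
indices = [0, 2, 1, 0, 2]
data = [2.0, 1.0, 3.0, 4.0, 5.0]
x = [1.0, 1.0, 1.0]
# Each block could run on its own processor; here they run in sequence.
y = [v for block in rowwise_partition(3, 2)
     for v in spmv_rows(block, indptr, indices, data, x)]
```

    Because each row's dot product touches only that row's nonzeros, the row blocks are independent, which is what makes the row-wise split a natural unit of parallelism in an iterative solver.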

  4. Procedure manual for the estimation of average indoor radon-daughter concentrations using the radon grab-sampling method

    International Nuclear Information System (INIS)

    George, J.L.

    1986-04-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell

  5. Evaluation of occupational exposure in interventionist procedures using Monte Carlo Method

    International Nuclear Information System (INIS)

    Santos, William S.; Neves, Lucio P.; Perini, Ana P.; Caldas, Linda V.E.; Belinato, Walmir; Maia, Ana F.

    2014-01-01

    This study presents a computational model of exposure for a patient, a cardiologist and a nurse in a typical cardiac interventional procedure scenario. A set of conversion coefficients (CCs) relating effective dose (E) to kerma-area product (KAP) was calculated for all individuals involved, using seven different energy spectra and eight beam projections. CCs were also calculated for the patient's entrance skin dose (ESD) normalized to KAP. All individuals were represented by anthropomorphic phantoms incorporated in a radiation transport code based on Monte Carlo simulation. (author)
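    In practice, such conversion coefficients turn an easily measured KAP into an effective-dose estimate by a lookup and a multiplication. The sketch below is illustrative only: the CC values and the (projection, kVp) keying scheme are invented, not taken from the paper.

```python
# Conversion-coefficient lookup: values are invented for illustration.
# Units: E in mSv per unit KAP in Gy*cm^2.
CC_E_PER_KAP = {
    ("PA", 80): 0.18,      # hypothetical posterior-anterior projection, 80 kVp
    ("LAO30", 80): 0.12,   # hypothetical left-anterior-oblique projection
}

def effective_dose(projection, kvp, kap_gycm2):
    """Effective dose estimate E = CC(projection, spectrum) * KAP."""
    return CC_E_PER_KAP[(projection, kvp)] * kap_gycm2

patient_e = effective_dose("PA", 80, 10.0)   # mSv for a 10 Gy*cm^2 exposure
```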

  6. Imaging of Herpes Simplex Virus Type 1 Thymidine Kinase Gene Expression with Radiolabeled 5-(2-iodovinyl)-2'-deoxyuridine (IVDU) in Liver by Hydrodynamic-based Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Song, In Ho; Lee, Tae Sup; Kang, Joo Hyun; Lee, Yong Jin; Kim, Kwang Il; An, Gwang Il; Chung, Wee Sup; Cheon, Gi Jeong; Choi, Chang Woon; Lim, Sang Moo [Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of)

    2009-10-15

    The hydrodynamic-based procedure is a simple and effective gene delivery method that leads to high gene expression in liver tissue. Non-invasive reporter gene imaging has been widely used with herpes simplex virus type 1 thymidine kinase (HSV1-tk) and its various substrates. In the present study, we investigated imaging the expression of the HSV1-tk gene with 5-(2-iodovinyl)-2'-deoxyuridine (IVDU) in mouse liver following the hydrodynamic-based procedure. Plasmid DNA encoding HSV1-tk or enhanced green fluorescence protein (EGFP) was transferred into the mouse liver by hydrodynamic injection. At 24 h post-injection, RT-PCR, biodistribution, fluorescence imaging, nuclear imaging and digital whole-body autoradiography (DWBA) were performed to confirm transferred gene expression. In the RT-PCR assay using mRNA from the mouse liver, specific bands for the HSV1-tk and EGFP genes were observed in the HSV1-tk and EGFP plasmid-injected mice, respectively. The biodistribution study showed higher uptake of radiolabeled IVDU in the liver of HSV1-tk gene-transferred mice. In fluorescence imaging, the liver showed a specific fluorescence signal in EGFP gene-transferred mice. Gamma-camera images and DWBA results showed that radiolabeled IVDU accumulated in the liver of HSV1-tk gene-transferred mice. In this study, the hydrodynamic-based procedure was effective for liver-specific gene delivery, and the delivery could be quantified with molecular imaging methods. Therefore, co-expression of the HSV1-tk reporter gene and a target gene by the hydrodynamic-based procedure is expected to be a useful method for evaluating target gene expression levels with radiolabeled IVDU.

  7. Method to determine the radioactivity of radioactive waste packages. Basic procedure of the method used to determine the radioactivity of low-level radioactive waste packages generated at nuclear power plants: 2007

    International Nuclear Information System (INIS)

    2008-03-01

    This document describes the procedures adopted to determine the radioactivity of low-level radioactive waste packages generated at nuclear power plants in Japan. The standards applied have been approved by the Atomic Energy Society of Japan after deliberations by the Subcommittee on the Radioactivity Verification Method for Waste Packages, the Nuclear Cycle Technical Committee, and the Standards Committee. The method for determining the radioactivity of low-level radioactive waste packages was based on procedures approved by the Nuclear Safety Commission in 1992. The scaling factor method and other methods of determining radioactivity were then developed on the basis of various investigations, drawing on extensive accumulated knowledge. Moreover, the international standards applied as common guidelines for the scaling factor method were developed by Technical Committee ISO/TC 85, Nuclear Energy, Subcommittee SC 5, Nuclear Fuel Technology. Since the application of this accumulated knowledge to future radioactive waste disposal is considered rational and justified, the body of knowledge has been documented in a standardized form. The background to this standardization effort, the reasoning behind the determination method as applied to the measurement of radioactivity, and other related information are given in the Annexes hereto. This document includes the following Annexes. Annex 1 (reference): Recorded items related to the determination of the scaling factor. Annex 2 (reference): Principles applied to determining the radioactivity of waste packages. (author)
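    At its core, the scaling factor method infers the activity of a difficult-to-measure (DTM) nuclide from a gamma-measured "key" nuclide via an empirically established ratio. A minimal sketch follows; the nuclide pairs and factor values are entirely illustrative, since real scaling factors are established per plant and waste stream, not given in this record.

```python
# Illustrative scaling factors: DTM nuclide -> (key nuclide, scaling factor).
# Real values are derived from sampling campaigns, not these numbers.
SCALING_FACTORS = {
    "Ni-63": ("Co-60", 5.0),
    "C-14": ("Co-60", 0.01),
}

def dtm_activity(dtm_nuclide, measured_key_activities):
    """Activity of a difficult-to-measure nuclide inferred from the
    gamma-measured key nuclide via the scaling factor."""
    key, sf = SCALING_FACTORS[dtm_nuclide]
    return sf * measured_key_activities[key]

# A package whose Co-60 activity was measured by gamma spectrometry:
ni63 = dtm_activity("Ni-63", {"Co-60": 2.0})
```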

  8. Web-based video monitoring of CT and MRI procedures

    Science.gov (United States)

    Ratib, Osman M.; Dahlbom, Magdalena; Kho, Hwa T.; Valentino, Daniel J.; McCoy, J. Michael

    2000-05-01

    A web-based video transmission of images from CT and MRI consoles was implemented in an Intranet environment for real-time monitoring of ongoing procedures. Images captured from the consoles are compressed to video resolution and broadcast through a web server. When called upon, the attending radiologists can view these live images on any computer within the secured Intranet network. With adequate compression, the images can be displayed simultaneously in different locations at a rate of 2 to 5 images/sec over a standard LAN. Although the quality of the images is insufficient for diagnostic purposes, our user survey showed that they were suitable for supervising a procedure, positioning the imaging slices, and routine quality checking before completion of a study. The system was implemented at UCLA to monitor 9 CTs and 6 MRIs distributed across 4 buildings. The system significantly improved radiologists' productivity by saving precious time otherwise spent on trips between reading rooms and examination rooms. It also improved patient throughput by reducing the time spent waiting for a radiologist to come and check a study before the patient is moved off the scanner.

  9. Numerical study of a novel procedure for installing the tower and Rotor Nacelle Assembly of offshore wind turbines based on the inverted pendulum principle

    Science.gov (United States)

    Guachamin Acero, Wilson; Gao, Zhen; Moan, Torgeir

    2017-09-01

    Current installation costs of offshore wind turbines (OWTs) are high and profit margins in the offshore wind energy sector are low; it is thus necessary to develop installation methods that are more efficient and practical. This paper presents a numerical study (based on a global response analysis of marine operations) of a novel procedure for installing the tower and Rotor Nacelle Assemblies (RNAs) on bottom-fixed foundations of OWTs. The installation procedure is based on the inverted pendulum principle. A cargo barge is used to transport the OWT assembly in a horizontal position to the site, and a medium-size Heavy Lift Vessel (HLV) is then employed to lift and up-end the OWT assembly using a special up-ending frame. The main advantage of this novel procedure is that the need for a huge HLV (in terms of lifting height and capacity) is eliminated. The method requires that the cargo barge stay on the leeward side of the HLV (which can be positioned with the best heading) during the entire installation, in order to benefit from the shielding effect of the HLV on the motions of the cargo barge; the foundations therefore need to be installed with a specific heading based on the wave direction statistics of the site and a typical installation season. Following a systematic approach based on numerical simulations of the actual operations, potentially critical installation activities, the corresponding critical events, and the limiting (response) parameters are identified. In addition, operational limits for some of the limiting parameters are established in terms of allowable sea states. Based on a preliminary assessment of these operational limits, the duration of the entire operation, the equipment used, and the sensitivity to weather and water depth, this novel procedure is shown to be viable.

  10. Assessment of dietary intake of flavouring substances within the procedure for their safety evaluation: advantages and limitations of estimates obtained by means of a per capita method.

    Science.gov (United States)

    Arcella, D; Leclercq, C

    2005-01-01

    The procedure for the safety evaluation of flavourings adopted by the European Commission in order to establish a positive list of these substances is a stepwise approach which was developed by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) and amended by the Scientific Committee on Food. Within this procedure, a per capita amount, based on industrial poundage data of flavourings, is calculated to estimate the dietary intake by means of the maximised survey-derived daily intake (MSDI) method. This paper reviews the MSDI method to check whether it can provide the conservative intake estimates needed at the first steps of a stepwise procedure. Scientific papers and opinions dealing with the MSDI method were reviewed. Concentration levels reported by the industry were compared with estimates obtained with the MSDI method. It appeared that, in some cases, these estimates could be orders of magnitude (up to 5) lower than those calculated considering concentration levels provided by the industry and regular consumption of flavoured foods and beverages. A critical review was performed of two studies which had been used to support the statement that MSDI is a conservative method for assessing exposure to flavourings among high consumers. Special attention was given to the factors that affect exposure at high percentiles, such as brand loyalty and portion sizes. It is concluded that these studies may not be suitable to validate the MSDI method used to assess intakes of flavours by European consumers, due to shortcomings in the assumptions made and in the data used. Exposure assessment is an essential component of risk assessment. The present paper suggests that the MSDI method is not sufficiently conservative. There is therefore a clear need either for using an alternative method to estimate exposure to flavourings in the procedure or for limiting intakes to the levels at which the safety was assessed.
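
    The per capita logic of an MSDI-style estimate can be sketched in a few lines. The correction factors below (the fraction of the population assumed to consume flavoured products, and an under-reporting adjustment to the poundage) are illustrative assumptions, not the official JECFA constants.

```python
def msdi_ug_per_person_day(annual_kg, population,
                           eaters_fraction=0.6, survey_correction=0.8):
    """Hedged sketch of a per capita (MSDI-style) intake estimate.

    annual_kg: reported industrial poundage of the flavouring (kg/year).
    eaters_fraction, survey_correction: assumed values, not JECFA's.
    Returns micrograms per person per day.
    """
    # Scale reported poundage up for assumed survey under-reporting.
    total_ug = annual_kg * 1e9 / survey_correction
    # Spread the total over the consuming fraction of the population.
    return total_ug / (population * eaters_fraction * 365)
```

    The paper's point is visible in this structure: the estimate divides a fixed total over many presumed consumers, so it can sit far below the intake of a brand-loyal high consumer of a few flavoured products.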

  11. Data assimilation method based on the constraints of confidence region

    Science.gov (United States)

    Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng

    2018-03-01

    The ensemble Kalman filter (EnKF) is a distinguished data assimilation method that is widely used and studied in various fields, including meteorology and oceanography. However, due to the limited sample size or an imprecise dynamics model, the forecast error variance is often underestimated, which further leads to the phenomenon of filter divergence. Additionally, the assimilation results of the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, a variance inflation procedure is usually adopted. In this paper, we propose a new method, called EnCR, based on the constraints of a confidence region constructed from the observations, to estimate the inflation parameter of the forecast error variance in the EnKF method. In the new method, the state estimate is more robust to both inaccurate forecast models and inaccurate initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings. The simulation results show that the new method performs better than the competing methods.
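
    The inflation idea behind this record can be illustrated with a minimal sketch: if the observation falls outside an approximate confidence region implied by the forecast ensemble, the forecast variance is inflated until it no longer does. The scalar state, identity observation operator, and multiplicative inflation rule below are assumptions for illustration, not the EnCR algorithm itself.

```python
import numpy as np

def inflate_by_confidence_region(forecast_ens, obs, obs_var, z=1.96):
    """Return a multiplicative inflation factor for the forecast variance.

    Sketch only: scalar state, identity observation operator.  The
    innovation obs - mean should be consistent with the total spread
    sqrt(lam * P_f + R); lam is grown until the observation lies inside
    the z-sigma confidence region (capped to guarantee termination).
    """
    mean = forecast_ens.mean()
    lam = 1.0
    while True:
        total_sd = np.sqrt(lam * forecast_ens.var(ddof=1) + obs_var)
        if abs(obs - mean) <= z * total_sd or lam > 100:
            return lam
        lam *= 1.1  # gentle multiplicative inflation step
```

    An observation near the ensemble mean leaves the variance untouched (factor 1.0), while a surprising observation forces inflation, which is exactly the mechanism that counteracts filter divergence.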

  12. A DATA-MINING BASED METHOD FOR THE GAIT PATTERN ANALYSIS

    Directory of Open Access Journals (Sweden)

    Marcelo Rudek

    2015-12-01

    Full Text Available The paper presents a method developed for gait classification based on the analysis of the trajectory of the pressure centres (CoP) extracted from the contact points of the feet with the ground during walking. Data acquisition is performed by means of a walkway with embedded tactile sensors. The proposed method includes capture procedures, standardization of data, creation of an organized repository (a data warehouse), and development of a data-mining process. A graphical analysis is applied to examine the footprint signature patterns, the aim being a visual interpretation of the grouping that situates each subject within normal walking patterns or within deviations associated with an individual way of walking. The method automates the classification of the data into healthy and non-healthy subjects in order to assist in rehabilitation treatments for people with related mobility problems.

  13. Supplement to procedures, analysis, and comparison of groundwater velocity measurement methods for unconfined aquifers

    International Nuclear Information System (INIS)

    Zinkl, R.J.; Kearl, P.M.

    1988-09-01

    This report is a supplement to Procedures, Analysis, and Comparison of Groundwater Velocity Measurement Methods for Unconfined Aquifers and provides computer program descriptions, type curves, and calculations for the analysis of field data in determining groundwater velocity in unconfined aquifers. The computer programs analyze bail or slug tests, pumping tests, Geoflo Meter data, and borehole dilution data. Appendix A is a description of the code, instructions for using the code, an example data file, and the calculated results to allow checking the code after installation on the user's computer. Calculations, development of formulas, and correction factors for the various programs are presented in Appendices B through F. Appendix G provides a procedure for calculating transmissivity and specific yield for pumping tests performed in unconfined aquifers

  14. Method for signal conditioning and data acquisition system, based on variable amplification and feedback technique

    Energy Technology Data Exchange (ETDEWEB)

    Conti, Livio, E-mail: livio.conti@uninettunouniversity.net [Facoltà di Ingegneria, Università Telematica Internazionale Uninettuno, Corso Vittorio Emanuele II 39, 00186 Rome, Italy INFN Sezione Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133 Rome (Italy); Sgrigna, Vittorio [Dipartimento di Matematica e Fisica, Università Roma Tre, 84 Via della Vasca Navale, I-00146 Rome (Italy); Zilpimiani, David [National Institute of Geophysics, Georgian Academy of Sciences, 1 M. Alexidze St., 009 Tbilisi, Georgia (United States); Assante, Dario [Facoltà di Ingegneria, Università Telematica Internazionale Uninettuno, Corso Vittorio Emanuele II 39, 00186 Rome, Italy INFN Sezione Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133 Rome (Italy)

    2014-08-21

    An original method of signal conditioning and adaptive amplification is proposed for data acquisition systems of analog signals, conceived to obtain a high-resolution spectrum of any input signal. The procedure is based on a feedback scheme for the signal amplification, with the aim of maximizing the dynamic range and resolution of the data acquisition system. The paper describes the signal conditioning, digitization, and data processing procedures applied to an a priori unknown signal in order to extract its amplitude and frequency content for applications in different environments: on the ground, in space, or in the laboratory. An electronic board implementing the conditioning module has also been constructed and is described. The paper also discusses the main fields of application and the advantages of the method with respect to those known today.
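
    The feedback amplification scheme of the general kind described here can be sketched as a simple gain rule that keeps the signal within the converter's usable range: back off when the input approaches full scale, boost when it underuses the range. The thresholds, step factor, and gain limit below are illustrative assumptions, not the authors' design.

```python
def adapt_gain(samples, gain, full_scale=1.0, lo=0.3, hi=0.9, step=2.0):
    """Return the next gain setting for a block of input samples.

    Sketch of an adaptive-amplification feedback rule (assumed
    parameters): halve the gain if the amplified peak nears clipping,
    double it if the signal occupies too little of the ADC range.
    """
    peak = max(abs(s) for s in samples) * gain
    if peak >= hi * full_scale:
        return gain / step          # back off before clipping
    if peak < lo * full_scale and gain < 64:
        return gain * step          # recover resolution on weak signals
    return gain                     # peak already in the target band
```

    Running this rule block-by-block converges the amplified signal into the target band, which is what lets a fixed-resolution converter resolve both weak and strong inputs.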

  15. Method for signal conditioning and data acquisition system, based on variable amplification and feedback technique

    International Nuclear Information System (INIS)

    Conti, Livio; Sgrigna, Vittorio; Zilpimiani, David; Assante, Dario

    2014-01-01

    An original method of signal conditioning and adaptive amplification is proposed for data acquisition systems of analog signals, conceived to obtain a high-resolution spectrum of any input signal. The procedure is based on a feedback scheme for the signal amplification, with the aim of maximizing the dynamic range and resolution of the data acquisition system. The paper describes the signal conditioning, digitization, and data processing procedures applied to an a priori unknown signal in order to extract its amplitude and frequency content for applications in different environments: on the ground, in space, or in the laboratory. An electronic board implementing the conditioning module has also been constructed and is described. The paper also discusses the main fields of application and the advantages of the method with respect to those known today

  16. Stochastic Generalized Method of Moments

    KAUST Repository

    Yin, Guosheng; Ma, Yanyuan; Liang, Faming; Yuan, Ying

    2011-01-01

    The generalized method of moments (GMM) is a very popular estimation and inference procedure based on moment conditions. When likelihood-based methods are difficult to implement, one can often derive various moment conditions and construct the GMM objective function. However, minimization of the objective function in the GMM may be challenging, especially over a large parameter space. Due to the special structure of the GMM, we propose a new sampling-based algorithm, the stochastic GMM sampler, which replaces the multivariate minimization problem by a series of conditional sampling procedures. We develop the theoretical properties of the proposed iterative Monte Carlo method, and demonstrate its superior performance over other GMM estimation procedures in simulation studies. As an illustration, we apply the stochastic GMM sampler to a Medfly life longevity study. Supplemental materials for the article are available online. © 2011 American Statistical Association.
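
    The idea of replacing minimization of the GMM objective with sampling can be sketched for a toy one-parameter problem with the single moment condition E[x − θ] = 0. The random-walk Metropolis scheme below, targeting a density whose mode is the GMM minimizer, is a hedged stand-in for the paper's conditional sampling procedure, not the stochastic GMM sampler itself.

```python
import numpy as np

def gmm_objective(theta, x):
    # Single moment condition E[x - theta] = 0 with identity weighting.
    g = np.mean(x - theta)
    return g * g

def stochastic_gmm_sample(x, n_iter=2000, scale=0.5, seed=0):
    """Estimate theta by sampling instead of minimizing (sketch).

    Random-walk Metropolis targeting exp(-n/2 * Q(theta)); the target
    concentrates at the GMM minimizer, and the post-burn-in sample
    mean serves as the point estimate.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    theta = 0.0
    draws = []
    for _ in range(n_iter):
        prop = theta + scale * rng.normal()
        log_acc = -0.5 * n * (gmm_objective(prop, x) - gmm_objective(theta, x))
        if np.log(rng.uniform()) < log_acc:
            theta = prop
        draws.append(theta)
    return np.mean(draws[n_iter // 2:])  # discard burn-in half
```

    For this toy condition the minimizer is simply the sample mean, so the sampler's estimate should land close to it; the appeal of the sampling view is that it extends to multivariate objectives where direct minimization is hard.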

  17. Stochastic Generalized Method of Moments

    KAUST Repository

    Yin, Guosheng

    2011-08-16

    The generalized method of moments (GMM) is a very popular estimation and inference procedure based on moment conditions. When likelihood-based methods are difficult to implement, one can often derive various moment conditions and construct the GMM objective function. However, minimization of the objective function in the GMM may be challenging, especially over a large parameter space. Due to the special structure of the GMM, we propose a new sampling-based algorithm, the stochastic GMM sampler, which replaces the multivariate minimization problem by a series of conditional sampling procedures. We develop the theoretical properties of the proposed iterative Monte Carlo method, and demonstrate its superior performance over other GMM estimation procedures in simulation studies. As an illustration, we apply the stochastic GMM sampler to a Medfly life longevity study. Supplemental materials for the article are available online. © 2011 American Statistical Association.

  18. Effectiveness of internet-based affect induction procedures: A systematic review and meta-analysis.

    Science.gov (United States)

    Ferrer, Rebecca A; Grenen, Emily G; Taber, Jennifer M

    2015-12-01

    Procedures used to induce affect in a laboratory are effective and well-validated. Given recent methodological and technological advances in Internet research, it is important to determine whether affect can be effectively induced using Internet methodology. We conducted a meta-analysis and systematic review of prior research that has used Internet-based affect induction procedures, and examined potential moderators of the effectiveness of affect induction procedures. Twenty-six studies were included in final analyses, with 89 independent effect sizes. Affect induction procedures effectively induced general positive affect, general negative affect, fear, disgust, anger, sadness, and guilt, but did not significantly induce happiness. Contamination of other nontarget affect did not appear to be a major concern. Video inductions resulted in greater effect sizes. Overall, results indicate that affect can be effectively induced in Internet studies, suggesting an important venue for the acceleration of affective science. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  19. General procedure to initialize the cyclic soil water balance by the Thornthwaite and Mather method

    NARCIS (Netherlands)

    Dourado-Neto, D.; Lier, van Q.D.; Metselaar, K.; Reichardt, K.; Nielsen, D.R.

    2010-01-01

    The original Thornthwaite and Mather method, proposed in 1955 to calculate a climatic monthly cyclic soil water balance, is frequently used as an iterative procedure due to its low input requirements and coherent estimates of water balance components. Using long term data sets to establish a

  20. The Hierarchical Spectral Merger Algorithm: A New Time Series Clustering Procedure

    KAUST Repository

    Euán, Carolina; Ombao, Hernando; Ortega, Joaquín

    2018-01-01

    We present a new method for time series clustering which we call the Hierarchical Spectral Merger (HSM) method. This procedure is based on the spectral theory of time series and identifies series that share similar oscillations or waveforms

  1. Evaluation and In-House Validation of Five DNA Extraction Methods for PCR-based STR Analysis of Bloodstained Denims

    Directory of Open Access Journals (Sweden)

    Henry Perdigon

    2004-06-01

    Full Text Available One type of crime scene evidence commonly submitted for analysis is bloodstain on denim. However, chemicals (e.g., indigo) used to produce denim materials may co-purify with DNA and hence affect subsequent DNA analysis. The present study compared five methods (standard organic, organic with hydrogen peroxide (H2O2), modified FTA™, organic/Chelex®-Centricon®, and QIAamp® DNA Mini Kit-based procedures) for the isolation of blood DNA from denim. A Short Tandem Repeat (STR)-based analysis across two to nine STR markers, namely, HUMvWA, HUMTH01, D8S306, HUMFES/FPS, HUMDHFRP2, HUMF13A01, HUMFGA, HUMTPOX, and HUMCSF1PO, was used to evaluate successful amplification of blood DNA extracted from light indigo, dark indigo, indigo-sulfur, pure indigo, sulfur-top, and sulfur-bottom denim materials. The results of the present study support the utility of the organic/Chelex®-Centricon® and QIAamp® Kit procedures in extracting PCR-amplifiable DNA from five different types of denim materials for STR analysis. Furthermore, a solid-phase method using FTA™ classic cards was modified to provide a simple, rapid, safe, and cost-effective procedure for extracting blood DNA from light indigo, dark indigo, and pure indigo denim materials. However, DNA eluted from bloodstained sulfur-dyed denims (sulfur-top and sulfur-bottom) using the FTA™ procedure was not readily amplifiable.

  2. Knowledge management method for knowledge based BWR Core Operation Management System

    Energy Technology Data Exchange (ETDEWEB)

    Wada, Yutaka; Fukuzaki, Takaharu; Kobayashi, Yasuhiro

    1989-03-01

    A knowledge management method is proposed to support an expert whose knowledge is stored in a knowledge base in the BWR Core Operation Management System. When alterations to the operation plans are motivated by the expert after evaluating them, the method attempts to find the knowledge which must be modified and to give the expert guidance. In this way the resultant operation plans are improved by modifying the values of referenced data. The data to be modified are retrieved using dependencies among the data defined and referenced during inference. In generating modification guidance, data reference and definition procedures are classified by syntactic analysis of the knowledge. The modified data values are calculated with a sensitivity relating the increment in the data to be modified to the resultant increment in the performance of the operation plans. The efficiency of knowledge management by the proposed method, when applied to a knowledge based system including 500 pieces of knowledge for BWR control rod programming, is higher than that of interactive use of existing general purpose editors. (author).

  3. Knowledge management method for knowledge based BWR Core Operation Management System

    International Nuclear Information System (INIS)

    Wada, Yutaka; Fukuzaki, Takaharu; Kobayashi, Yasuhiro

    1989-01-01

    A knowledge management method is proposed to support an expert whose knowledge is stored in a knowledge base in the BWR Core Operation Management System. When alterations to the operation plans are motivated by the expert after evaluating them, the method attempts to find the knowledge which must be modified and to give the expert guidance. In this way the resultant operation plans are improved by modifying the values of referenced data. The data to be modified are retrieved using dependencies among the data defined and referenced during inference. In generating modification guidance, data reference and definition procedures are classified by syntactic analysis of the knowledge. The modified data values are calculated with a sensitivity relating the increment in the data to be modified to the resultant increment in the performance of the operation plans. The efficiency of knowledge management by the proposed method, when applied to a knowledge based system including 500 pieces of knowledge for BWR control rod programming, is higher than that of interactive use of existing general purpose editors. (author)

  4. Projector-based virtual reality dome environment for procedural pain and anxiety in young children with burn injuries: a pilot study

    Directory of Open Access Journals (Sweden)

    Khadra C

    2018-02-01

    Full Text Available Christelle Khadra,1,2 Ariane Ballard,1,2 Johanne Déry,1,3 David Paquin,4 Jean-Simon Fortin,5 Isabelle Perreault,6 David R Labbe,7 Hunter G Hoffman,8 Stéphane Bouchard,9 Sylvie LeMay1,2 1Faculty of Nursing, University of Montreal, Montreal, QC, Canada; 2Research Center, Centre Hospitalier Universitaire (CHU) Sainte-Justine, Montreal, QC, Canada; 3Direction of Nursing, Centre Hospitalier Universitaire (CHU) Sainte-Justine, Montreal, QC, Canada; 4Department in Creation and New Media, Université du Québec en Abitibi-Témiscamingue, Rouyn-Noranda, QC, Canada; 5Emergency Department, Hôpital de Granby, Granby, QC, Canada; 6Department of Surgery, Centre Hospitalier Universitaire (CHU) Sainte-Justine, Montreal, QC, Canada; 7Department of Software and IT Engineering, École de Technologie Supérieure, Montreal, QC, Canada; 8Department of Mechanical Engineering, University of Washington, Seattle, WA, USA; 9Department of Psychoeducation and Psychology, Université du Québec en Outaouais, Gatineau, QC, Canada Background: Virtual reality (VR) is a non-pharmacological method to distract from pain during painful procedures. However, it had never been tested in young children with burn injuries undergoing wound care. Aim: We aimed to assess the feasibility and acceptability of the study process and the use of VR for procedural pain management. Methods: From June 2016 to January 2017, we recruited children from 2 months to 10 years of age with burn injuries requiring a hydrotherapy session in a pediatric university teaching hospital in Montreal. Each child received the projector-based VR intervention in addition to the standard pharmacological treatment. Data were collected on intervention and study feasibility and acceptability, in addition to measures of pain (Face, Legs, Activity, Cry, Consolability scale), baseline (Modified Smith Scale) and procedural (Procedure Behavior Check List) anxiety, and comfort (OCCEB-BECCO [behavioral observational scale of comfort level for child burn

  5. Method selection and evaluation of mid-term and long-term therapeutic efficiency of achalasia with three methods of interventional procedure

    International Nuclear Information System (INIS)

    Cheng Yingsheng; Yang Renjie; Li Minghua; Chen Weixiong; Shang Kezhong; Zhuang Qixin; Xu Jianrong; Chen Niwei; Zhu Yude

    2000-01-01

    Objective: To study method selection and to evaluate the mid-term and long-term therapeutic efficiency of three interventional procedures for achalasia. Methods: 50 cases of achalasia were treated under fluoroscopy: 30 with balloon dilation (group A), 5 with permanent metallic internal stent dilation (group B), and 15 with temporary metallic internal stent dilation (group C). Results: The 30 cases of group A underwent 56 dilations (mean 1.9). The mean diameter of the cardia was (2.4 ± 1.2) mm before dilation and (9.7 ± 3.0) mm after dilation. The mean dysphagia scores were (2.4 ± 1.2) grades before dilation and (1.0 ± 0.3) grades after dilation. Complications in the 30 cases included chest pain (n = 9), reflux (n = 8) and bleeding (n = 3). 18 (60%) of the 30 cases showed dysphagia relapse during follow-up over 6 months, and 18 (90%) of 20 cases during follow-up over 12 months. 5 uncovered expandable metal stents were permanently placed in the 5 cases of group B. The mean diameter of the cardia was (3.2 ± 2.0) mm before dilation and (18.4 ± 1.7) mm after dilation. The mean dysphagia scores were (2.4 ± 1.1) grades before dilation and (0.4 ± 0.2) grades after dilation. Complications in the 5 cases included chest pain (n = 3), reflux (n = 4), bleeding (n = 1) and hyperplasia of granulation tissue (n = 2). 3 (60%) of the 5 cases showed dysphagia relapse during follow-up over 6 months, and 1 (50%) of 2 cases during follow-up over 12 months. 15 covered expandable metal stents were temporarily placed in the 15 cases of group C and withdrawn after 3-7 days via gastroscopy. The mean diameter of the cardia was (3.4 ± 2.9) mm before dilation and (14.7 ± 2.9) mm after dilation. The mean dysphagia scores were (2.5 ± 1.1) grades before dilation and (0.6 ± 0.3) grades after dilation. Complications in the 15 cases included chest pain (n = 3), reflux (n = 3) and bleeding (n = 2). 3 (20%) of the 15 cases showed dysphagia relapse during follow-up over 6

  6. Using Procedure Based on Item Response Theory to Evaluate Classification Consistency Indices in the Practice of Large-Scale Assessment

    Directory of Open Access Journals (Sweden)

    Shanshan Zhang

    2017-09-01

    Full Text Available In spite of the growing interest in methods of evaluating classification consistency (CC) indices, few studies are available on applying these methods in the practice of large-scale educational assessment. In addition, few studies have considered the influence of practical factors, such as the examinee ability distribution, the cut score location, and the score scale, on the performance of CC indices. Using the newly developed Lee procedure based on item response theory (IRT), the main purpose of this study is to investigate the performance of CC indices when such practical factors are taken into consideration. A simulation study and an empirical study were conducted under comprehensive conditions. Results suggested that with a negatively skewed ability distribution, the CC indices were larger than with other distributions. Interactions occurred among ability distribution, cut score location, and score scale. Consequently, Lee's IRT procedure is reliable for use in large-scale educational assessment; however, the reported indices should be treated with caution, as testing conditions may vary considerably.

  7. Comparison of pre-processing methods for multiplex bead-based immunoassays.

    Science.gov (United States)

    Rausch, Tanja K; Schillert, Arne; Ziegler, Andreas; Lüking, Angelika; Zucht, Hans-Dieter; Schulz-Knappe, Peter

    2016-08-11

    High throughput protein expression studies can be performed using bead-based protein immunoassays, such as the Luminex® xMAP® technology. Technical variability is inherent to these experiments and may lead to systematic bias and reduced power. To reduce technical variability, data pre-processing is performed. However, no recommendations exist for the pre-processing of Luminex® xMAP® data. We compared 37 different data pre-processing combinations of transformation and normalization methods in 42 samples on 384 analytes obtained from a multiplex immunoassay based on the Luminex® xMAP® technology. We evaluated the performance of each pre-processing approach with 6 different performance criteria. Three performance criteria were plots. All plots were evaluated by 15 independent and blinded readers. Four different combinations of transformation and normalization methods performed well as pre-processing procedure for this bead-based protein immunoassay. The following combinations of transformation and normalization were suitable for pre-processing Luminex® xMAP® data in this study: weighted Box-Cox followed by quantile or robust spline normalization (rsn), asinh transformation followed by loess normalization and Box-Cox followed by rsn.
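
    Two of the well-performing steps named in this record, an asinh transformation followed by a cross-sample normalization, can be sketched in a few lines. Quantile normalization is shown as the normalization step; the cofactor value is an assumption, and the tie handling below is deliberately naive.

```python
import numpy as np

def asinh_transform(x, cofactor=5.0):
    # Variance-stabilizing transform; the cofactor is an assumed value.
    return np.arcsinh(x / cofactor)

def quantile_normalize(mat):
    """Minimal quantile normalization of a (analytes x samples) matrix.

    Every sample (column) is given the same empirical distribution:
    the mean of the column-wise sorted values.  Ties are handled
    naively by argsort order.
    """
    order = np.argsort(mat, axis=0)
    ranks = np.argsort(order, axis=0)            # rank of each entry in its column
    mean_sorted = np.sort(mat, axis=0).mean(axis=1)
    return mean_sorted[ranks]                    # reassign by rank
```

    After this step, every column shares the same distribution of values, which removes sample-to-sample technical shifts while preserving each sample's internal ordering of analytes.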

  8. Knowledge-based framework for procedure synthesis and its application to the emergency response in a nuclear power plant

    International Nuclear Information System (INIS)

    Sharma, D.D.

    1986-01-01

    In this dissertation a nuclear power plant operator is viewed as a knowledge-based problem solver. It is shown that, in responding to an abnormal situation, an operator typically solves several problems, for example, plant status monitoring, diagnosis, sensor data validation, consequence prediction, and procedure synthesis. It is proposed that, in order to respond to unexpected situations and handle procedure failures, the capability to synthesize and modify procedures dynamically, or at runtime, is required. A knowledge-based framework for dynamically synthesizing procedures (DPS), a knowledge representation language for applying the DPS framework to real problems (DPSRL), and a framework for emergency procedure synthesis (FEPS) for nuclear power plants based on DPS are developed. The DPS framework views the task of synthesis as a process of selecting predefined procedures to match the needs of dynamically changing plant conditions. The DPSRL language provides the knowledge organization and representation primitives required to support the DPS framework; specifically, it provides mechanisms to build various plant libraries and procedures to access information from them. The capabilities and the use of DPS, DPSRL, and FEPS are demonstrated by developing an experimental expert system for a typical boiling water reactor and analyzing its performance for various selected abnormal incidents

  9. The Effect of Web-Based Teaching Method on Academic Achievement in Tourism Education

    Directory of Open Access Journals (Sweden)

    Bahadır Köksalan

    2011-12-01

    Full Text Available The purpose of this research was to determine the effects of the web-based teaching method on the academic achievement level of undergraduate students in the Tourism Education department in the fall semester of the 2009-2010 academic year. The research was carried out with 50 students (25 in the control group, 25 in the experimental group) who were studying in the Tourism and Travel Management and Tourism and Hotel Management Programs of the Tourism and Hotel Management Vocational High School at Gaziantep University. The research procedure included both pre-test/post-test and experimental-control group research models. While creating the control and experimental groups, the researchers took into consideration neutrality, academic success, internet access, and level of knowledge of internet and computer use as well as of web-based learning. The GTOI/SEYH 111 Communication Course was delivered both with the web-based teaching method and with the traditional face-to-face method. In the course, basic concepts, verbal and nonverbal communication, and written communication, along with related communication issues, were taught to the experimental group with the web-based teaching method and to the control group with traditional methods (lecturing, question-answer, simulation

  10. Bootstrap-based procedures for inference in nonparametric receiver-operating characteristic curve regression analysis.

    Science.gov (United States)

    Rodríguez-Álvarez, María Xosé; Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Tahoces, Pablo G

    2018-03-01

    Prior to using a diagnostic test in a routine clinical setting, the rigorous evaluation of its diagnostic accuracy is essential. The receiver-operating characteristic curve is the measure of accuracy most widely used for continuous diagnostic tests. However, the possible impact of extra information about the patient (or even the environment) on diagnostic accuracy also needs to be assessed. In this paper, we focus on an estimator for the covariate-specific receiver-operating characteristic curve based on direct regression modelling and nonparametric smoothing techniques. This approach defines the class of generalised additive models for the receiver-operating characteristic curve. The main aim of the paper is to offer new inferential procedures for testing the effect of covariates on the conditional receiver-operating characteristic curve within the above-mentioned class. Specifically, two different bootstrap-based tests are suggested to check (a) the possible effect of continuous covariates on the receiver-operating characteristic curve and (b) the presence of factor-by-curve interaction terms. The validity of the proposed bootstrap-based procedures is supported by simulations. To facilitate the application of these new procedures in practice, an R-package, known as npROCRegression, is provided and briefly described. Finally, data derived from a computer-aided diagnostic system for the automatic detection of tumour masses in breast cancer is analysed.
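
    A percentile-bootstrap check of a covariate effect on diagnostic accuracy, in the spirit of test (a) above, can be sketched by comparing AUCs between two covariate-defined groups. This is a simplified stand-in for the npROCRegression tests, not their implementation; the resampling scheme and confidence level are assumptions.

```python
import numpy as np

def auc(scores, labels):
    # Mann-Whitney estimate of the area under the ROC curve.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    gt = np.mean(pos[:, None] > neg[None, :])
    eq = np.mean(pos[:, None] == neg[None, :])
    return gt + 0.5 * eq

def bootstrap_auc_diff_test(s1, y1, s2, y2, n_boot=500, seed=0):
    """Percentile-bootstrap 95% CI for AUC(group 1) - AUC(group 2).

    If the interval excludes 0, accuracy differs between the two
    covariate-defined groups (sketch; groups resampled independently).
    """
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(n_boot):
        i = rng.integers(0, len(s1), len(s1))
        j = rng.integers(0, len(s2), len(s2))
        diffs.append(auc(s1[i], y1[i]) - auc(s2[j], y2[j]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return lo, hi
```

    With an informative test in one group and a chance-level test in the other, the interval sits clearly above zero, signalling a covariate effect on accuracy.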

  11. A perturbation method for dark solitons based on a complete set of the squared Jost solutions

    International Nuclear Information System (INIS)

    Ao Shengmei; Yan Jiaren

    2005-01-01

    A perturbation method for dark solitons is developed, which is based on the construction, and the rigorous proof of completeness, of the set of squared Jost solutions. The general procedure for solving the adiabatic solution of the perturbed nonlinear Schrödinger equation, the time-evolution equations of the dark soliton parameters, and a formula for calculating the first-order correction are given. The method can also overcome the difficulties resulting from the non-vanishing boundary condition.
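
    For orientation, dark solitons arise in the defocusing nonlinear Schrödinger equation with non-vanishing boundary conditions; a common form of the perturbed equation (the sign conventions here are an assumption for illustration, not quoted from the paper) is

```latex
i\,\frac{\partial u}{\partial t} + \frac{\partial^2 u}{\partial x^2} - 2|u|^2 u
  = \epsilon R[u],
\qquad |u| \to u_0 \quad \text{as } |x| \to \infty,
```

    where $\epsilon R[u]$ is the small perturbation and the non-vanishing background $u_0$ is precisely what complicates the inverse-scattering machinery relative to the bright-soliton (vanishing boundary) case.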

  12. Investigation of a Reinforcement-Based Toilet Training Procedure for Children with Autism.

    Science.gov (United States)

    Cicero, Frank R.; Pfadt, Al

    2002-01-01

    This study evaluated the effectiveness of a reinforcement-based toilet training intervention with three children with autism. Procedures included positive reinforcement, graduated guidance, scheduled practice trials, and forward prompting. All three children reduced urination accidents to zero and learned to request bathroom use spontaneously…

  13. Virtual rounds: simulation-based education in procedural medicine

    Science.gov (United States)

    Shaffer, David W.; Meglan, Dwight A.; Ferrell, Margaret; Dawson, Steven L.

    1999-07-01

    Computer-based simulation is a goal for training physicians in specialties where traditional training puts patients at risk. Intuitively, interactive simulation of anatomy, pathology, and therapeutic actions should lead to shortening of the learning curve for novice or inexperienced physicians. Effective transfer of knowledge acquired in simulators must be shown for such devices to be widely accepted in the medical community. We have developed an Interventional Cardiology Training Simulator which incorporates real-time graphic interactivity coupled with haptic response, and an embedded curriculum permitting rehearsal, hypertext links, personal archiving and instructor review and testing capabilities. This linking of purely technical simulation with educational content creates a more robust educational purpose for procedural simulators.

  14. Electronic Procedures for Medical Operations

    Science.gov (United States)

    2015-01-01

    Electronic procedures are replacing text-based documents for recording the steps in performing medical operations aboard the International Space Station. S&K Aerospace, LLC, has developed a content-based electronic system, based on the Extensible Markup Language (XML) standard, that separates text from formatting standards and tags items contained in procedures so they can be recognized by other electronic systems. For example, to change a standard format, electronic procedures are changed in a single batch process, and the entire body of procedures will have the new format. Procedures can be quickly searched to determine which are affected by software and hardware changes. Similarly, procedures are easily shared with other electronic systems. The system also enables real-time data capture and automatic bookmarking of current procedure steps. In Phase II of the project, S&K Aerospace developed a Procedure Representation Language (PRL) and tools to support the creation and maintenance of electronic procedures for medical operations. The goal is to develop these tools in such a way that new advances can be inserted easily, leading to an eventual medical decision support system.
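
As a sketch of the XML tagging idea described above, the fragment below marks up procedure steps so that text, step type, and flow information are machine-readable. The element and attribute names are illustrative assumptions, not the actual PRL schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical procedure-step markup; element/attribute names are
# illustrative assumptions, not the real PRL vocabulary.
PROCEDURE_XML = """
<procedure id="MED-017" title="Saline lock flush">
  <step number="1" type="action">
    <instruction>Verify patient identity.</instruction>
  </step>
  <step number="2" type="decision">
    <instruction>Is the catheter site free of swelling?</instruction>
    <branch answer="no" goto="4"/>
  </step>
</procedure>
"""

def list_steps(xml_text):
    """Return (number, type, instruction) tuples for every step element."""
    root = ET.fromstring(xml_text)
    return [(s.get("number"), s.get("type"), s.findtext("instruction").strip())
            for s in root.iter("step")]

for num, kind, text in list_steps(PROCEDURE_XML):
    print(num, kind, text)
```

Because the content is tagged rather than formatted, a batch reformat or an impact search over thousands of procedures reduces to iterating over elements like this.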

  15. Displacement-Based Seismic Design Procedure for Framed Buildings with Dissipative Braces Part II: Numerical Results

    International Nuclear Information System (INIS)

    Mazza, Fabio; Vulcano, Alfonso

    2008-01-01

    For a widespread application of dissipative braces to protect framed buildings against seismic loads, practical and reliable design procedures are needed. In this paper a design procedure based on the Direct Displacement-Based Design approach is adopted, assuming the elastic lateral storey-stiffness of the damped braces proportional to that of the unbraced frame. To check the effectiveness of the design procedure, presented in an associated paper, a six-storey reinforced concrete plane frame, representative of a medium-rise symmetric framed building, is considered as primary test structure; this structure, designed in a medium-risk region, is supposed to be retrofitted as in a high-risk region, by insertion of diagonal braces equipped with hysteretic dampers. A numerical investigation is carried out to study the nonlinear static and dynamic responses of the primary and the damped braced test structures, using step-by-step procedures described in the associated paper mentioned above; the behaviour of frame members and hysteretic dampers is idealized by bilinear models. Real and artificial accelerograms, matching the EC8 response spectrum for a medium soil class, are considered for dynamic analyses.

  16. A procedure for identifying textile bast fibres using microscopy: Flax, nettle/ramie, hemp and jute

    International Nuclear Information System (INIS)

    Bergfjord, Christian; Holst, Bodil

    2010-01-01

    Identifying and distinguishing between natural textile fibres is an important task in both archaeology and criminology. Wool, silk and cotton fibres can readily be distinguished from the textile bast fibres flax, nettle/ramie, hemp and jute. Distinguishing between the bast fibres is, however, not easily done and methods based on surface characteristics, chemical composition and cross section size and shape are not conclusive. A conclusive method based on X-ray microdiffraction exists, but as the method requires the use of a synchrotron it is not readily available. In this paper we present a simple procedure for identifying the above-mentioned textile bast fibres. The procedure is based on measuring the fibrillar orientation with polarised light microscopy and detecting the presence of calcium oxalate crystals (CaC₂O₄) in association with the fibres. To demonstrate the procedure, a series of fibre samples of flax, nettle, ramie, hemp and jute were investigated. The results are presented here. An advantage of the procedure is that only a small amount of fibre material is needed.

  17. A procedure for identifying textile bast fibres using microscopy: Flax, nettle/ramie, hemp and jute

    Energy Technology Data Exchange (ETDEWEB)

    Bergfjord, Christian, E-mail: christian.bergfjord@uib.no [Institute for Physics and Technology, University of Bergen, Allegt. 55, 5007 Bergen (Norway); Holst, Bodil, E-mail: bodil.holst@uib.no [Institute for Physics and Technology, University of Bergen, Allegt. 55, 5007 Bergen (Norway)

    2010-08-15

    Identifying and distinguishing between natural textile fibres is an important task in both archaeology and criminology. Wool, silk and cotton fibres can readily be distinguished from the textile bast fibres flax, nettle/ramie, hemp and jute. Distinguishing between the bast fibres is, however, not easily done and methods based on surface characteristics, chemical composition and cross section size and shape are not conclusive. A conclusive method based on X-ray microdiffraction exists, but as the method requires the use of a synchrotron it is not readily available. In this paper we present a simple procedure for identifying the above-mentioned textile bast fibres. The procedure is based on measuring the fibrillar orientation with polarised light microscopy and detecting the presence of calcium oxalate crystals (CaC₂O₄) in association with the fibres. To demonstrate the procedure, a series of fibre samples of flax, nettle, ramie, hemp and jute were investigated. The results are presented here. An advantage of the procedure is that only a small amount of fibre material is needed.

  18. An Alternative and Rapid Method for the Extraction of Nucleic Acids from Ixodid Ticks by Potassium Acetate Procedure

    Directory of Open Access Journals (Sweden)

    Islay Rodríguez

    2014-08-01

    Four variants of the potassium acetate procedure for DNA extraction from ixodid ticks at different stages of their life cycles were evaluated and compared with the phenol-chloroform and ammonium hydroxide methods. The most rapid and most efficient variant was validated in the DNA extraction procedure from engorged ticks collected from bovine and canine hosts as well as from houses, for the screening of Borrelia burgdorferi sensu lato, Anaplasma spp. and Babesia spp. The ammonium hydroxide procedure was used for non-engorged ticks. All the variants were efficient and allowed PCR-quality material to be obtained, as shown by the specific amplification of a 16S rRNA gene fragment of the original tick. DNA extracted from the ticks under study was tested by multiplex PCR for the screening of tick-borne pathogens. Anaplasma spp. and Babesia spp. amplification products were obtained from 29/48 extracts. The ammonium hydroxide protocol was not efficient for two extracts. Detection of amplification products from the PCR indicated that DNA had been successfully extracted. The potassium acetate procedure could be an alternative, rapid, and reliable method for DNA extraction from ixodid ticks, mainly for poorly-resourced laboratories.

  19. Influence of Freezing and Storage Procedure on Human Urine Samples in NMR-Based Metabolomics

    Directory of Open Access Journals (Sweden)

    Burkhard Luy

    2013-04-01

    It is consensus in the metabolomics community that standardized protocols should be followed for sample handling, storage and analysis, as it is of utmost importance to maintain constant measurement conditions to identify subtle biological differences. The aim of this work, therefore, was to systematically investigate the influence of freezing procedures and storage temperatures and their effect on NMR spectra as a potentially disturbing aspect for NMR-based metabolomics studies. Urine samples were collected from two healthy volunteers, centrifuged and divided into aliquots. Urine aliquots were frozen either at −20 °C, on dry ice, at −80 °C or in liquid nitrogen and then stored at −20 °C, −80 °C or in liquid nitrogen vapor phase for 1–5 weeks before NMR analysis. Results show spectral changes depending on the freezing procedure, with samples frozen on dry ice showing the largest deviations. The effect was found to be based on pH differences, which were caused by variations in CO2 concentrations introduced by the freezing procedure. Thus, we recommend that urine samples should be frozen at −20 °C and transferred to lower storage temperatures within one week and that freezing procedures should be part of the publication protocol.

  20. Influence of Freezing and Storage Procedure on Human Urine Samples in NMR-Based Metabolomics.

    Science.gov (United States)

    Rist, Manuela J; Muhle-Goll, Claudia; Görling, Benjamin; Bub, Achim; Heissler, Stefan; Watzl, Bernhard; Luy, Burkhard

    2013-04-09

    It is consensus in the metabolomics community that standardized protocols should be followed for sample handling, storage and analysis, as it is of utmost importance to maintain constant measurement conditions to identify subtle biological differences. The aim of this work, therefore, was to systematically investigate the influence of freezing procedures and storage temperatures and their effect on NMR spectra as a potentially disturbing aspect for NMR-based metabolomics studies. Urine samples were collected from two healthy volunteers, centrifuged and divided into aliquots. Urine aliquots were frozen either at -20 °C, on dry ice, at -80 °C or in liquid nitrogen and then stored at -20 °C, -80 °C or in liquid nitrogen vapor phase for 1-5 weeks before NMR analysis. Results show spectral changes depending on the freezing procedure, with samples frozen on dry ice showing the largest deviations. The effect was found to be based on pH differences, which were caused by variations in CO2 concentrations introduced by the freezing procedure. Thus, we recommend that urine samples should be frozen at -20 °C and transferred to lower storage temperatures within one week and that freezing procedures should be part of the publication protocol.

  1. Optimized in vitro procedure for assessing the cytocompatibility of magnesium-based biomaterials.

    Science.gov (United States)

    Jung, Ole; Smeets, Ralf; Porchetta, Dario; Kopp, Alexander; Ptock, Christoph; Müller, Ute; Heiland, Max; Schwade, Max; Behr, Björn; Kröger, Nadja; Kluwe, Lan; Hanken, Henning; Hartjen, Philip

    2015-09-01

    Magnesium (Mg) is a promising biomaterial for degradable implant applications that has been extensively studied in vitro and in vivo in recent years. In this study, we developed a procedure that allows an optimized and uniform in vitro assessment of the cytocompatibility of Mg-based materials while respecting the standard protocol DIN EN ISO 10993-5:2009. The mouse fibroblast line L-929 was chosen as the preferred assay cell line and MEM supplemented with 10% FCS, penicillin/streptomycin and 4 mM L-glutamine as the favored assay medium. The procedure consists of (1) an indirect assessment of effects of soluble Mg corrosion products in material extracts and (2) a direct assessment of the surface compatibility in terms of cell attachment and cytotoxicity originating from active corrosion processes. The indirect assessment allows the quantification of cell proliferation (BrdU-assay), viability (XTT-assay) as well as cytotoxicity (LDH-assay) of the mouse fibroblasts incubated with material extracts. Direct assessment visualizes cells attached to the test materials by means of live-dead staining. The colorimetric assays and the visual evaluation complement each other and the combination of both provides an optimized and simple procedure for assessing the cytocompatibility of Mg-based biomaterials in vitro. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  2. Stress and deflection analyses of floating roofs based on a load-modifying method

    International Nuclear Information System (INIS)

    Sun Xiushan; Liu Yinghua; Wang Jianbin; Cen Zhangzhi

    2008-01-01

    This paper proposes a load-modifying method for the stress and deflection analyses of floating roofs used in cylindrical oil storage tanks. The formulations of loads and deformations are derived according to the equilibrium analysis of floating roofs. Based on these formulations, the load-modifying method is developed to conduct a geometrically nonlinear analysis of floating roofs with the finite element (FE) simulation. In the procedure with the load-modifying method, the analysis is carried out through a series of iterative computations until a convergence is achieved within the error tolerance. Numerical examples are given to demonstrate the validity and reliability of the proposed method, which provides an effective and practical numerical solution to the design and analysis of floating roofs.
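
The iterative load-modifying idea can be sketched as a fixed-point loop in which the load is recomputed from the current deflection until successive deflections agree. The one-degree-of-freedom stiffness and buoyancy numbers below are illustrative assumptions, not values from the paper.

```python
# Toy fixed-point sketch of the load-modifying scheme: the net load on a
# floating roof depends on its deflection (buoyancy grows as the roof
# sinks), so load and deformation are updated in turn until convergence.
# All coefficients are illustrative assumptions.
STIFFNESS = 5.0e3   # N/m, roof structural stiffness (assumed)
BUOYANCY = 2.0e3    # N/m, restoring buoyant force per unit deflection (assumed)
DEAD_LOAD = 1.0e4   # N, self-weight of the roof (assumed)
TOL = 1.0e-9

def solve_deflection():
    w = 0.0  # deflection in metres
    for iteration in range(1, 1000):
        net_load = DEAD_LOAD - BUOYANCY * w   # load modified by current deflection
        w_new = net_load / STIFFNESS          # linear-elastic response
        if abs(w_new - w) < TOL:
            return w_new, iteration
        w = w_new
    raise RuntimeError("no convergence within the error tolerance")

w, n = solve_deflection()
# Analytic check: the fixed point is DEAD_LOAD / (STIFFNESS + BUOYANCY).
```

In the paper the elastic response comes from a geometrically nonlinear FE model rather than a single spring, but the convergence loop has the same shape.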

  3. Recommended method for measurement of absorbency of superabsorbent polymers in cement-based materials

    DEFF Research Database (Denmark)

    Esteves, Luis Pedro

    2015-01-01

    In this paper, a technique that can potentially be used as a standard method is developed, so that the properties of concrete with superabsorbent polymers can be better controlled in practice. The method is based on a measurement technique validated through an international standard procedure, laser diffraction particle size analysis, and it allows an easy and reliable measurement of the absorbency of superabsorbent polymers. It is shown in detail how both the definition of the exposure liquid and the definition of the system of SAP particles can be selected so that absorbency can be experimentally…

  4. Using plant procedures as the basis for conducting a job and task analysis

    International Nuclear Information System (INIS)

    Haynes, F.H.; Ruth, B.W.

    1985-01-01

    Plant procedures were selected, by Northeast Utilities (NU), as the basis for conducting Job and Task Analyses (JTA). The resultant JTA was used to design procedure based simulator training programs for Millstone 1, 2, and Connecticut Yankee. The task listings were both plant specific and exhibited excellent correlation to INPO's generic PWR and BWR task analyses. Using the procedures based method enabled us to perform the JTA using plant and training staff. This proved cost effective in terms of both time and money. Learning objectives developed from the JTA were easily justified and correlated directly to job performance within the context of the plant procedures. In addition, the analysis generated a comprehensive review of plant procedures and, conversely, the plant's normal procedure revision process generated an automatic trigger for updating the task data

  5. Computerized procedures system

    Science.gov (United States)

    Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.

    2010-10-12

    An online data driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges and revisions are version controlled. The procedures run on a server that is platform independent of the user workstations that the server interfaces with and the user interface supports diverse procedural views.

  6. An expert system-based aid for analysis of Emergency Operating Procedures in NPPs

    International Nuclear Information System (INIS)

    Jakubowski, Z.; Beraha, D.

    1996-01-01

    Emergency Operating Procedures (EOPs) in general, and accident management (AM) in particular, have played a significant part in the safety philosophy of NPPs for many years. A better methodology for the development and validation of EOPs is desired. A prototype of an Emergency Operating Procedures Analysis System (EOPAS), which has been developed at GRS, is presented in the paper. The hardware configuration and software organisation of the system are briefly reviewed. The main components of the system, such as the knowledge base of an expert system and the engineering simulator, are described. (author)

  7. Automatic segmentation of rotational x-ray images for anatomic intra-procedural surface generation in atrial fibrillation ablation procedures.

    Science.gov (United States)

    Manzke, Robert; Meyer, Carsten; Ecabert, Olivier; Peters, Jochen; Noordhoek, Niels J; Thiagalingam, Aravinda; Reddy, Vivek Y; Chan, Raymond C; Weese, Jürgen

    2010-02-01

    Since the introduction of 3-D rotational X-ray imaging, protocols for 3-D rotational coronary artery imaging have become widely available in routine clinical practice. Intra-procedural cardiac imaging in a computed tomography (CT)-like fashion has been particularly compelling due to the reduction of clinical overhead and ability to characterize anatomy at the time of intervention. We previously introduced a clinically feasible approach for imaging the left atrium and pulmonary veins (LAPVs) with short contrast bolus injections and scan times of approximately 4-10 s. The resulting data have sufficient image quality for intra-procedural use during electro-anatomic mapping (EAM) and interventional guidance in atrial fibrillation (AF) ablation procedures. In this paper, we present a novel technique to intra-procedural surface generation which integrates fully-automated segmentation of the LAPVs for guidance in AF ablation interventions. Contrast-enhanced rotational X-ray angiography (3-D RA) acquisitions in combination with filtered-back-projection-based reconstruction allows for volumetric interrogation of LAPV anatomy in near-real-time. An automatic model-based segmentation algorithm allows for fast and accurate LAPV mesh generation despite the challenges posed by image quality; relative to pre-procedural cardiac CT/MR, 3-D RA images suffer from more artifacts and reduced signal-to-noise. We validate our integrated method by comparing 1) automatic and manual segmentations of intra-procedural 3-D RA data, 2) automatic segmentations of intra-procedural 3-D RA and pre-procedural CT/MR data, and 3) intra-procedural EAM point cloud data with automatic segmentations of 3-D RA and CT/MR data. Our validation results for automatically segmented intra-procedural 3-D RA data show average segmentation errors of 1) approximately 1.3 mm compared with manual 3-D RA segmentations, 2) approximately 2.3 mm compared with automatic segmentation of pre-procedural CT/MR data and 3…
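
The validation comparisons above boil down to mean surface-distance measurements between segmentations. A minimal brute-force sketch (plain Python, tiny toy point clouds standing in for real meshes and EAM point clouds) is:

```python
import math

def mean_nn_distance(a, b):
    """Mean distance from each point in a to its nearest point in b."""
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)

def symmetric_surface_error(a, b):
    """Symmetrized mean nearest-neighbour distance between two point sets."""
    return 0.5 * (mean_nn_distance(a, b) + mean_nn_distance(b, a))

# Toy 3-D point clouds: an "automatic" and a "manual" segmentation.
auto = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
manual = [(0.1, 0.0, 0.0), (1.0, 0.1, 0.0), (0.0, 1.0, 0.1)]
err = symmetric_surface_error(auto, manual)
```

Real implementations use spatial indexing (k-d trees) and point-to-triangle distances on the mesh surface, but the reported millimetre figures are of exactly this kind.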

  8. Supporting plant operation through computer-based procedures

    International Nuclear Information System (INIS)

    Martinez, Victor; Medrano, Javier; Mendez, Julio

    2014-01-01

    Digital Systems are becoming more important in controlling and monitoring nuclear power plant operations. The capabilities of these systems provide additional functions as well as support operators in making decisions and avoiding errors. Regarding Operation Support Systems, an important way of taking advantage of these features is using computer-based procedures (CBPs) tools that enhance the plant operation. Integrating digital systems in analogue controls at nuclear power plants in operation becomes an extra challenge, in contrast to the integration of Digital Control Systems in new nuclear power plants. Considering the potential advantages of using this technology, Tecnatom has designed and developed a CBP platform taking currently operating nuclear power plants as its design basis. The result is a powerful tool which combines the advantages of CBPs and the conventional analogue control systems minimizing negative effects during plant operation and integrating operation aid-systems to support operators. (authors)

  9. Rapid filtration separation-based sample preparation method for Bacillus spores in powdery and environmental matrices.

    Science.gov (United States)

    Isabel, Sandra; Boissinot, Maurice; Charlebois, Isabelle; Fauvel, Chantal M; Shi, Lu-E; Lévesque, Julie-Christine; Paquin, Amélie T; Bastien, Martine; Stewart, Gale; Leblanc, Eric; Sato, Sachiko; Bergeron, Michel G

    2012-03-01

    Authorities frequently need to analyze suspicious powders and other samples for biothreat agents in order to assess environmental safety. Numerous nucleic acid detection technologies have been developed to detect and identify biowarfare agents in a timely fashion. The extraction of microbial nucleic acids from a wide variety of powdery and environmental samples to obtain a quality level adequate for these technologies still remains a technical challenge. We aimed to develop a rapid and versatile method of separating bacteria from these samples and then extracting their microbial DNA. Bacillus atrophaeus subsp. globigii was used as a simulant of Bacillus anthracis. We studied the effects of a broad variety of powdery and environmental samples on PCR detection and the steps required to alleviate their interference. With a benchmark DNA extraction procedure, 17 of the 23 samples investigated interfered with bacterial lysis and/or PCR-based detection. Therefore, we developed the dual-filter method for applied recovery of microbial particles from environmental and powdery samples (DARE). The DARE procedure allows the separation of bacteria from contaminating matrices that interfere with PCR detection. This procedure required only 2 min, while the DNA extraction process lasted 7 min, for a total of 9 min. The sample preparation procedure allowed the recovery of cleaned bacterial spores and relieved detection interference caused by a wide variety of samples. Our procedure was easily completed in a laboratory facility and is amenable to field application and automation.

  10. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    Directory of Open Access Journals (Sweden)

    Samaneh Mazaheri

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. Despite numerous attempts to automate ventricle segmentation and tracking in echocardiography, the problem remains challenging due to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segmentability of echocardiographic features such as the endocardium and to improve image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature among all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method, and different metrics are implemented to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segmentability of cardiac ultrasound images and better performance in all metrics.
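
A minimal sketch of the PCA half of such a fusion scheme: the principal eigenvector of the two images' pixel covariance matrix supplies the weighting coefficients. This omits the DWT stage and assumes positively correlated inputs; the toy "images" are flat lists of floats.

```python
import math

def pca_fusion_weights(img1, img2):
    """Fusion weights for two equally sized images (flat float lists):
    normalized components of the principal eigenvector of their
    2x2 covariance matrix."""
    n = len(img1)
    m1, m2 = sum(img1) / n, sum(img2) / n
    a = sum((x - m1) ** 2 for x in img1) / n          # var(img1)
    c = sum((y - m2) ** 2 for y in img2) / n          # var(img2)
    b = sum((x - m1) * (y - m2) for x, y in zip(img1, img2)) / n  # cov
    lam = 0.5 * ((a + c) + math.sqrt((a - c) ** 2 + 4 * b * b))   # top eigenvalue
    v1, v2 = (b, lam - a) if abs(b) > 1e-12 else (1.0, 0.0)
    s = v1 + v2
    return v1 / s, v2 / s

def fuse(img1, img2):
    """Pixel-wise weighted fusion using the PCA weights."""
    w1, w2 = pca_fusion_weights(img1, img2)
    return [w1 * x + w2 * y for x, y in zip(img1, img2)]

a = [0.0, 1.0, 2.0, 3.0]
b = [0.0, 2.0, 4.0, 6.0]
w1, w2 = pca_fusion_weights(a, b)   # weights sum to 1
fused = fuse(a, b)
```

In the hybrid method the same weighting idea is applied per wavelet subband rather than to raw pixels.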

  11. Test Procedure for Axially Loaded Piles in Sand

    DEFF Research Database (Denmark)

    Thomassen, Kristina

    The test procedure described in the following is used when examining the effects of static or cyclic loading on the skin friction of an axially loaded pile in dense sand. The pile specimen is only loaded in tension to avoid any contribution from the base resistance. The pile dimensions are chosen to resemble the full-scale dimensions of piles used in offshore pile foundations today. This report gives a detailed description of the soil preparation and pile installation procedures as well as the data acquisition methods.

  12. Civil Procedure In Denmark

    DEFF Research Database (Denmark)

    Werlauff, Erik

    The book contains an up-to-date survey of Danish civil procedure after the profound Danish procedural reforms in 2007. It deals with questions concerning the competence and function of Danish courts, commencement and preparation of civil cases, questions of evidence and burden of proof, international procedural questions, including relations to the Brussels I Regulation and Denmark's participation in this Regulation via a parallel convention with the EU countries, the impact on Danish civil procedure of the convention on human rights, preparation and pronouncement of judgment and verdict, and questions of appeal… The book is partly based on scientific activities conducted by the author, partly based on the author's experience as a member, through a number of years, of the Danish Standing Committee on Procedural Law (Retsplejeraadet), which on a continuous basis evaluates the need for civil procedural reforms in Denmark, and finally also based…

  13. Improved non-dimensional dynamic influence function method based on two-domain method for vibration analysis of membranes

    Directory of Open Access Journals (Sweden)

    SW Kang

    2015-02-01

    This article introduces an improved non-dimensional dynamic influence function method using a sub-domain method for efficiently extracting the eigenvalues and mode shapes of concave membranes with arbitrary shapes. The non-dimensional dynamic influence function (NDIF) method, which was developed by the authors in 1999, gives highly accurate eigenvalues for membranes, plates, and acoustic cavities compared with the finite element method. However, it requires the inefficient procedure of calculating the singularity of a system matrix over the frequency range of interest in order to extract eigenvalues and mode shapes. To overcome this inefficient procedure, this article proposes a practical approach that casts the system matrix equation of the concave membrane of interest into the form of an algebraic eigenvalue problem. It is shown by several case studies that the proposed method has good convergence characteristics and yields very accurate eigenvalues compared with an exact method and the finite element method (ANSYS).

  14. A multiparameter chaos control method based on OGY approach

    International Nuclear Information System (INIS)

    Souza de Paula, Aline; Amorim Savi, Marcelo

    2009-01-01

    Chaos control is based on the richness of responses of chaotic behavior and may be understood as the use of tiny perturbations for the stabilization of an unstable periodic orbit (UPO) embedded in a chaotic attractor. Since one of these UPOs can provide better performance than others in a particular situation, the use of chaos control can make this kind of behavior desirable in a variety of applications. The OGY method is a discrete technique that considers small perturbations promoted in the neighborhood of the desired orbit when the trajectory crosses a specific surface, such as a Poincare section. This contribution proposes a multiparameter semi-continuous method based on the OGY approach in order to control chaotic behavior. Two different approaches are possible with this method: a coupled approach, where all control parameters influence the system dynamics even when they are not active; and an uncoupled approach, a particular case where control parameters return to the reference value when they become passive. As an application of the general formulation, a two-parameter actuation of a nonlinear pendulum is investigated employing both the coupled and the uncoupled approaches. Analyses are carried out considering signals generated by numerical integration of the mathematical model using experimentally identified parameters. Results show that the procedure can be a good alternative for chaos control, since it provides more effective UPO stabilization than the classical single-parameter approach.
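
For the classical single-parameter case that the multiparameter scheme generalizes, an OGY-style controller can be sketched on the logistic map: when the orbit enters a small window around the unstable fixed point, the parameter is nudged so that the linearized map sends the next iterate back onto the fixed point. The window size and parameter cap below are illustrative choices, not values from the paper.

```python
# OGY-style stabilization of the unstable fixed point of the logistic
# map x' = p*x*(1-x) in its chaotic regime.
P0 = 3.9                      # nominal parameter (chaotic regime)
X_STAR = 1.0 - 1.0 / P0       # unstable fixed point of the map
LAMBDA = 2.0 - P0             # f'(x*) = p0 * (1 - 2*x*)
W = X_STAR * (1.0 - X_STAR)   # df/dp evaluated at the fixed point
WINDOW, DP_MAX = 0.01, 0.1    # activation window and perturbation cap (assumed)

def ogy_run(x0, steps):
    x, history = x0, []
    for _ in range(steps):
        dp = 0.0
        if abs(x - X_STAR) < WINDOW:
            # choose dp so that lambda*dx + w*dp = 0, within the cap
            dp = max(-DP_MAX, min(DP_MAX, -LAMBDA * (x - X_STAR) / W))
        x = (P0 + dp) * x * (1.0 - x)
        history.append(x)
    return history

traj = ogy_run(0.5, 5000)
# After a chaotic transient the orbit stays pinned near X_STAR.
```

The multiparameter method of the paper applies the same linearized correction with several actuation parameters, coupled or uncoupled, instead of the single dp used here.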

  15. Individual-specific transgenerational marking of fish populations based on a barium dual-isotope procedure.

    Science.gov (United States)

    Huelga-Suarez, Gonzalo; Moldovan, Mariella; Garcia-Valiente, America; Garcia-Vazquez, Eva; Alonso, J Ignacio Garcia

    2012-01-03

    The present study focuses on the development and evaluation of an individual-specific transgenerational marking procedure using two enriched barium isotopes, (135)Ba and (137)Ba, mixed at a given and selectable molar ratio. The method is based on the deconvolution of the isotope patterns found in the sample into four molar contribution factors: natural xenon (Xe nat), natural barium (Ba nat), Ba135, and Ba137. The ratio of molar contributions between Ba137 and Ba135 is constant and independent of the contribution of natural barium in the sample. This procedure was tested in brown trout (Salmo trutta) kept in captivity. Trout were injected with three different Ba137/Ba135 isotopic signatures ca. 7 months and 7 days before spawning to compare the efficiency of the marking procedure at long and short term, respectively. The barium isotopic profiles were measured in the offspring by means of inductively coupled plasma mass spectrometry. Each of the three different isotopic signatures was unequivocally identified in the offspring in both whole eggs and larvae. For 9 month old offspring, the characteristic barium isotope signatures could also be detected in the otoliths even in the presence of a high and variable amount of barium of natural isotope abundance. In conclusion, it can be stated that the proposed dual-isotope marking is inheritable and can be detected after both long-term and short-term marking. Furthermore, the dual-isotope marking can be made individual-specific, so that it allows identification of offspring from a single individual or a group of individuals within a given fish group. © 2011 American Chemical Society
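
The deconvolution step can be sketched as a small linear solve: the measured isotope signals are modeled as a linear mix of known patterns, and the molar contributions fall out of the resulting system. The natural-abundance values below are approximate, and the enriched-spike compositions are illustrative assumptions; the xenon interference term is omitted for brevity.

```python
# Isotope pattern deconvolution sketch: measured Ba-135/137/138 signals
# = natural Ba + Ba135 spike + Ba137 spike, solved as a 3x3 system.
PATTERNS = {                              # fractional abundance of (135, 137, 138)
    "nat":   (0.0659, 0.1123, 0.7170),    # natural barium (approximate)
    "Ba135": (0.95,   0.02,   0.03),      # assumed spike composition
    "Ba137": (0.02,   0.95,   0.03),      # assumed spike composition
}

def solve3(a, b):
    """Solve a 3x3 linear system a x = b by Gaussian elimination
    with partial pivoting (plain Python, no external libraries)."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

# Synthetic measurement: 1.0 part natural Ba + 0.2 Ba135 + 0.4 Ba137.
true_mix = (1.0, 0.2, 0.4)
cols = [PATTERNS["nat"], PATTERNS["Ba135"], PATTERNS["Ba137"]]
measured = [sum(c[i] * n for c, n in zip(cols, true_mix)) for i in range(3)]
a = [[cols[j][i] for j in range(3)] for i in range(3)]
nat, ba135, ba137 = solve3(a, measured)
ratio = ba137 / ba135   # the inherited mark, independent of natural Ba
```

The recovered Ba137/Ba135 ratio is the individual-specific signature: it survives dilution by any amount of natural barium in the sample.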

  16. Estimation of the flow resistances exerted in coronary arteries using a vessel length-based method.

    Science.gov (United States)

    Lee, Kyung Eun; Kwon, Soon-Sung; Ji, Yoon Cheol; Shin, Eun-Seok; Choi, Jin-Ho; Kim, Sung Joon; Shim, Eun Bo

    2016-08-01

    Flow resistances exerted in the coronary arteries are the key parameters for the image-based computer simulation of coronary hemodynamics. The resistances depend on the anatomical characteristics of the coronary system. A simple and reliable estimation of the resistances is a compulsory procedure to compute the fractional flow reserve (FFR) of stenosed coronary arteries, an important clinical index of coronary artery disease. The cardiac muscle volume reconstructed from computed tomography (CT) images has been used to assess the resistance of the feeding coronary artery (muscle volume-based method). In this study, we estimate the flow resistances exerted in coronary arteries by using a novel method. Based on a physiological observation that longer coronary arteries have more daughter branches feeding a larger mass of cardiac muscle, the method measures the vessel lengths from coronary angiogram or CT images (vessel length-based method) and predicts the coronary flow resistances. The underlying equations are derived from the physiological relation among flow rate, resistance, and vessel length. To validate the present estimation method, we calculate the coronary flow division over coronary major arteries for 50 patients using the vessel length-based method as well as the muscle volume-based one. These results are compared with the direct measurements in a clinical study. Further proving the usefulness of the present method, we compute the coronary FFR from the images of optical coherence tomography.
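
A minimal sketch of the length-based allocation described above, under the stated physiological assumption that flow divides in proportion to vessel length, so with a common perfusion pressure each outlet resistance scales inversely with length. All numbers are illustrative, not patient data.

```python
# Length-based estimation of coronary outlet resistances: longer vessels
# feed more muscle, so they receive proportionally more flow and present
# proportionally less resistance. All values are illustrative assumptions.
DELTA_P = 8.0e3       # Pa, aorta-to-venous perfusion pressure (assumed)
TOTAL_FLOW = 4.0e-6   # m^3/s, total coronary flow (assumed)

def length_based_resistances(lengths):
    """Map vessel name -> outlet resistance (Pa*s/m^3) from lengths (m)."""
    total_len = sum(lengths.values())
    out = {}
    for name, length in lengths.items():
        q = TOTAL_FLOW * length / total_len   # flow division by length
        out[name] = DELTA_P / q               # resistance ~ 1 / length
    return out

vessels = {"LAD": 0.13, "LCx": 0.10, "RCA": 0.12}   # lengths in m (assumed)
res = length_based_resistances(vessels)
```

These outlet resistances are the boundary conditions a CT- or angiogram-based FFR simulation needs; the muscle volume-based alternative replaces length with subtended myocardial volume.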

  17. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics, the control systems, and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables, and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable, and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods.

  18. A procedure for searching the equilibrium core of a research reactor

    International Nuclear Information System (INIS)

    Bakri Arbie; Liem Peng Hong; Prayoto

    1996-01-01

    A procedure for searching for the equilibrium core of a research reactor has been proposed. The searching procedure is based on the relaxation method and has been implemented in the Batan-EQUIL-2D in-core fuel management code. Few-group neutron diffusion theory in 2-D X-Y and R-Z reactor geometries is adopted as the framework of the code. The successful applicability of the procedure for obtaining the new RSG-GAS (MPR-30) silicide equilibrium core was demonstrated. (author)

  19. A Hamiltonian-based derivation of Scaled Boundary Finite Element Method for elasticity problems

    International Nuclear Information System (INIS)

    Hu Zhiqiang; Lin Gao; Wang Yi; Liu Jun

    2010-01-01

    The Scaled Boundary Finite Element Method (SBFEM) is a semi-analytical approach for solving partial differential equations. For problems in elasticity, the governing equations can be obtained by a mechanically based formulation, a scaled-boundary-transformation-based formulation, or the principle of virtual work. The governing equations are described in the Lagrangian framework with displacements as the unknowns, but in the solution procedure auxiliary variables are introduced and the equations are solved in the state space. Based on the observation that the duality system proposed by W.X. Zhong for solving elasticity problems is similar to this solution approach, the discretization of the SBFEM and the duality system are combined in this paper to derive the governing equations in the Hamiltonian system by introducing dual variables. The Precise Integration Method (PIM) used in the duality system is also an efficient method for solving the governing equations of the SBFEM for the displacements and the boundary stiffness matrix, especially in cases that cause numerical difficulties for the commonly used eigenvalue method. Numerical examples are used to demonstrate the validity and effectiveness of the PIM for the solution of the static boundary stiffness.

  20. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
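The advantage of a nonparametric smoother over linear regression for sensitivity detection can be shown with a minimal from-scratch LOESS (a simplified local-linear, tricube-weighted version, not the codes used in the assessment): for a symmetric nonlinear response, the linear fit reports almost no relationship while the smoother recovers it.

```python
import numpy as np

def loess(x, y, frac=0.4):
    """Local linear regression with tricube weights (simplified LOESS)."""
    n = len(x)
    k = max(2, int(frac * n))
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                   # k nearest neighbours
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3
        sw = np.sqrt(w)                           # weighted least squares
        X = np.column_stack([np.ones(k), x[idx]])
        beta = np.linalg.lstsq(X * sw[:, None], y[idx] * sw, rcond=None)[0]
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted

def r_squared(pred, y):
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = x ** 2 + rng.normal(0, 0.05, 200)   # nonlinear, symmetric response

linear_r2 = r_squared(np.polyval(np.polyfit(x, y, 1), x), y)  # near zero
loess_r2 = r_squared(loess(x, y), y)    # detects the x -> y relationship
```

This is exactly the failure mode of linear and rank regression that the abstract describes: the symmetric quadratic relationship leaves the sample correlation near zero, so only the smoother-based procedure flags the input as influential.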

  1. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  2. Development of advanced MCR task analysis methods

    International Nuclear Information System (INIS)

    Na, J. C.; Park, J. H.; Lee, S. K.; Kim, J. K.; Kim, E. S.; Cho, S. B.; Kang, J. S.

    2008-07-01

    This report describes a task analysis methodology for advanced HSI designs. Task analysis was performed using procedure-based hierarchical task analysis and task decomposition methods. The results from the task analysis were recorded in a database. Using the TA results, we developed a static prototype of the advanced HSI, along with human factors engineering verification and validation methods for an evaluation of the prototype. In addition to the procedure-based task analysis methods, workload estimation based on the analysis of task performance time, as well as analyses for the design of the information and interaction structures, will be necessary

  3. Environmental Sciences Division Toxicology Laboratory standard operating procedures

    International Nuclear Information System (INIS)

    Kszos, L.A.; Stewart, A.J.; Wicker, L.F.; Logsdon, G.M.

    1989-09-01

    This document was developed to provide the personnel working in the Environmental Sciences Division's Toxicology Laboratory with documented methods for conducting toxicity tests. The document consists of two parts. The first part includes the standard operating procedures (SOPs) that are used by the laboratory in conducting toxicity tests. The second part includes reference procedures from the US Environmental Protection Agency document entitled Short-Term Methods for Estimating the Chronic Toxicity of Effluents and Receiving Waters to Freshwater Organisms, upon which the Toxicology Laboratory's SOPs are based. Five of the SOPs relate specifically to the Ceriodaphnia survival and reproduction test: procedures for preparing Ceriodaphnia food (SOP-3), maintaining Ceriodaphnia cultures (SOP-4), conducting the toxicity test (SOP-13), analyzing the test data (SOP-13), and conducting a Ceriodaphnia reference test (SOP-15). Five additional SOPs relate specifically to the fathead minnow (Pimephales promelas) larval survival and growth test: methods for preparing fathead minnow larvae food (SOP-5), maintaining fathead minnow cultures (SOP-6), conducting the toxicity test (SOP-9), analyzing the test data (SOP-12), and conducting a fathead minnow reference test (SOP-14). The six remaining SOPs describe methods that are used with either or both tests: preparation of control/dilution water (SOP-1), washing of glassware (SOP-2), collection and handling of samples (SOP-7), preparation of samples (SOP-8), performance of chemical analyses (SOP-11), and data logging and care of technical notebooks (SOP-16)

  4. A Comparison of Exposure Control Procedures in CAT Systems Based on Different Measurement Models for Testlets

    Science.gov (United States)

    Boyd, Aimee M.; Dodd, Barbara; Fitzpatrick, Steven

    2013-01-01

    This study compared several exposure control procedures for CAT systems based on the three-parameter logistic testlet response theory model (Wang, Bradlow, & Wainer, 2002) and Masters' (1982) partial credit model when applied to a pool consisting entirely of testlets. The exposure control procedures studied were the modified within 0.10 logits…

  5. A rapid method for soil cement design : Louisiana slope value method.

    Science.gov (United States)

    1964-03-01

    The current procedure used by the Louisiana Department of Highways for laboratory design of cement stabilized soil base and subbase courses is taken from standard AASHO test methods, patterned after Portland Cement Association criteria. These methods...

  6. A new procedure for estimating the cell temperature of a high concentrator photovoltaic grid connected system based on atmospheric parameters

    International Nuclear Information System (INIS)

    Fernández, Eduardo F.; Almonacid, Florencia

    2015-01-01

    Highlights: • Concentrating grid-connected systems are working at maximum power point. • The operating cell temperature is inherently lower than at open circuit. • Two novel methods for estimating the cell temperature are proposed. • Both predict the operating cell temperature from atmospheric parameters. • Experimental results show that both methods perform effectively. - Abstract: The working cell temperature of high concentrator photovoltaic systems is a crucial parameter when analysing their performance and reliability. At the same time, due to the special features of this technology, direct measurement of the cell temperature is very complex, so it is usually obtained by various indirect methods. High concentrator photovoltaic modules in a system operate at maximum power since they are connected to an inverter. As a result, their cell temperature is lower than that of a module at open-circuit voltage, since a significant part of the incident light power density is converted into electricity. In this paper, a procedure for indirectly estimating the cell temperature of a high concentrator photovoltaic system from atmospheric parameters is addressed. This new procedure has the advantage that it is valid for estimating the cell temperature of a system at any location of interest, provided the atmospheric parameters are available. To achieve this goal, two different methods are proposed: one based on simple mathematical relationships and another based on artificial intelligence techniques. Results show that both methods predict the cell temperature of a module connected to an inverter with a low margin of error: a normalised root mean square error of at most 3.3%, an absolute root mean square error of at most 2 °C, a mean absolute error of at most 1.5 °C, and a mean bias error and mean relative error of almost 0%
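As a minimal illustration of the first kind of method (a simple mathematical relationship built from atmospheric parameters), one could fit a linear form such as the following. The functional form and the coefficients here are invented placeholders for illustration, not the relationships derived in the paper.

```python
def cell_temperature(t_air, dni, wind_speed, a=0.05, b=1.0):
    """Illustrative estimate: ambient temperature plus irradiance heating,
    minus convective cooling by wind. a is in degC per (W/m^2) and b in
    degC per (m/s); both are placeholders to be fitted to measured data."""
    return t_air + a * dni - b * wind_speed

# 25 degC ambient, 800 W/m^2 direct normal irradiance, 2 m/s wind:
t_cell = cell_temperature(25.0, 800.0, 2.0)
```

In practice the coefficients would be regressed against cell temperatures obtained from one of the indirect measurement methods mentioned in the abstract, after which the fitted expression can be evaluated at any site where the atmospheric parameters are recorded.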

  7. Improved Genetic Algorithm-Based Unit Commitment Considering Uncertainty Integration Method

    Directory of Open Access Journals (Sweden)

    Kyu-Hyung Jo

    2018-05-01

    In light of the growth of renewable energy connected to the power grid, it has become necessary to consider the uncertainty of renewable generation in the unit commitment (UC) problem. A methodology for solving the UC problem is presented that considers various uncertainties, assumed to be normally distributed, by using a Monte Carlo simulation. Based on the constructed scenarios for load, wind, solar, and generator outages, a combination of scenarios is found that meets the reserve requirement to secure the power balance of the grid. Among those scenarios, the uncertainty integration method (UIM) identifies the best combination by minimizing the additional reserve requirements caused by the uncertainty of power sources. An integration process for uncertainties is formulated for stochastic unit commitment (SUC) problems and optimized by the improved genetic algorithm (IGA). The IGA is composed of five procedures and finds the optimal combination of unit statuses at the scheduled time, based on the given source data. As the number of units grows, the IGA demonstrates better performance than the other optimization methods by applying reserve repairing and an approximation process. To validate the proposed method, various UC strategies are tested with a modified 24-h UC test system and compared.
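The scenario-based reserve sizing can be sketched as follows. This is a hedged illustration of the general idea only (normally distributed load and renewable forecast errors, with the reserve set to cover a chosen fraction of scenarios); the numbers are invented, and the generator-outage scenarios and IGA optimization stages are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                          # Monte Carlo scenarios

# Assumed normal forecasts with forecast-error spreads (MW).
load  = rng.normal(1000, 30, n)
wind  = rng.normal(150, 25, n)
solar = rng.normal(80, 15, n)

# Net load is what the committed thermal units must actually cover.
net_load = load - wind - solar
forecast_net = 1000 - 150 - 80

# Size the additional reserve so that, e.g., 95% of scenarios are covered.
reserve = np.quantile(net_load, 0.95) - forecast_net
```

A UC solver would then add this scenario-derived reserve to the conventional reserve requirement when checking the power-balance constraint for each candidate commitment schedule.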

  8. 3D printing for construction: a procedural and material-based approach

    Directory of Open Access Journals (Sweden)

    A. Nadal

    2017-06-01

    3D printing for construction is stagnating at an early stage of development, especially regarding material optimization and procedural issues. These limitations are due to the specialist knowledge that these technologies require, the total cost of the machinery involved, and the lack of clear procedural guidelines. This paper presents a methodology that aims at overcoming these limitations through a workflow that allows for the easy use of 6-axis robotic arms. A technique for the optimization of material usage is presented. A test case showing the integration of the design-to-fabrication process, combining Integrated Robotic Systems (IRS) and Additive Layer Manufacturing (ALM) techniques, is discussed. A structure-based approach to material optimization and smart infill patterning is introduced. A 0.4 x 0.4 x 1.5 m test part is shown as a technological demonstrator.

  9. A Fourier-based textural feature extraction procedure

    Science.gov (United States)

    Stromberg, W. D.; Farr, T. G.

    1986-01-01

    A procedure is presented to discriminate and characterize regions of uniform image texture. The procedure utilizes textural features consisting of pixel-by-pixel estimates of the relative emphases of annular regions of the Fourier transform. The utility and derivation of the features are described through presentation of a theoretical justification of the concept followed by a heuristic extension to a real environment. Two examples are provided that validate the technique on synthetic images and demonstrate its applicability to the discrimination of geologic texture in a radar image of a tropical vegetated area.
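The annular-emphasis idea can be sketched in a few lines of numpy. This is a whole-image version of the feature (the paper computes pixel-by-pixel estimates over local windows), with an assumed equal-width ring partition.

```python
import numpy as np

def annular_features(img, n_rings=4):
    """Relative spectral power in concentric annuli of the 2-D Fourier
    transform (whole-image version of the pixel-by-pixel features)."""
    img = img - img.mean()                       # drop the DC term
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)         # radial frequency
    ring = np.minimum((r / r.max() * n_rings).astype(int), n_rings - 1)
    feats = np.bincount(ring.ravel(), weights=power.ravel(),
                        minlength=n_rings)
    return feats / feats.sum()                   # relative annular emphasis

smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # slow ramp
checker = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)   # fine texture
f_smooth, f_fine = annular_features(smooth), annular_features(checker)
```

The smooth ramp concentrates its power in the innermost ring, while the checkerboard concentrates it in the outermost one, which is the contrast these features use to discriminate coarse from fine texture.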

  10. Application of insights from the IREP analyses to the IREP procedures guide

    International Nuclear Information System (INIS)

    Carlson, D.D.; Murphy, J.A.; Young, J.

    1982-01-01

    One of the objectives of the Interim Reliability Evaluation Program (IREP) was to prepare a set of procedures, based on experience gained in the study, for use in future IREP-type analyses. The current analyses used a set of procedures and, over the course of the program, a concerted effort was made to develop insights which could improve these procedures. Insights have been gained into the organization and content of the procedures guide, into the performance and management of an IREP analysis, and into the methods to be used in the analysis

  11. Microstructure based procedure for process parameter control in rolling of aluminum thin foils

    Science.gov (United States)

    Johannes, Kronsteiner; Kabliman, Evgeniya; Klimek, Philipp-Christoph

    2018-05-01

    In the present work, a microstructure-based procedure is used for the numerical prediction of the strength properties of Al-Mg-Sc thin foils during a hot rolling process. For this purpose, the following techniques were developed and implemented. First, a toolkit was developed for the numerical analysis of experimental stress-strain curves obtained during hot compression testing with a deformation dilatometer. The implemented techniques allow correction for the temperature increase in the samples due to adiabatic heating, and determination of the yield strength needed to separate the elastic and plastic deformation regimes during numerical simulation of multi-pass hot rolling. Next, an asymmetric Hot Rolling Simulator (with adjustable table inlet/outlet height as well as separate roll infeed) was developed in order to match the exact processing conditions of a semi-industrial rolling procedure. At each element of the finite element mesh, the total strength is calculated by an in-house flow stress model based on the evolution of the mean dislocation density. The strength values obtained by numerical modelling were found to be in reasonable agreement with the results of tensile tests on thin Al-Mg-Sc foils. Thus, the proposed simulation procedure may allow the processing parameters to be optimized with respect to the microstructure development.
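The in-house flow stress model itself is not spelled out in the abstract; a standard one-internal-variable dislocation-density model of the same family (a Kocks-Mecking-type evolution law with a Taylor stress relation) can serve as an illustrative sketch. Every coefficient below is an assumed placeholder, not a fitted value from the work.

```python
import numpy as np

# Kocks-Mecking-type model: mean dislocation density rho evolves with
# strain, and flow stress follows the Taylor relation. Values illustrative.
alpha, M = 0.3, 3.06          # Taylor constant, Taylor factor
G, b = 26e3, 2.86e-7          # shear modulus (MPa), Burgers vector (mm)
k1, k2 = 1e5, 10.0            # storage / dynamic-recovery coefficients
sigma0 = 20.0                 # friction stress (MPa)

rho, d_eps = 1e4, 1e-3        # initial density (mm^-2), strain increment
stress = []
for _ in range(1000):         # explicit integration up to true strain 1.0
    rho += (k1 * np.sqrt(rho) - k2 * rho) * d_eps
    stress.append(sigma0 + alpha * M * G * b * np.sqrt(rho))
```

The curve hardens rapidly at first and then saturates as dislocation storage and recovery balance, which is the qualitative shape expected of a hot-deformation flow curve.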

  12. A procedure for effective Dancoff factor calculation

    International Nuclear Information System (INIS)

    Milosevic, M.

    2001-01-01

    In this paper, a procedure for Dancoff factor calculation based on the equivalence principle, and its application in the SCALE-4.3 code system, is described. This procedure is founded on the principle of conservation of neutron absorption in the resolved resonance range between a heterogeneous medium and an equivalent medium consisting of an infinite array of two-region pin cells, where the presence of other fuel rods is taken into account through a Dancoff factor. The neutron absorption in both media is obtained using a fine-group elastic slowing-down calculation. This procedure is implemented in a design-oriented lattice physics code, which is applicable to any geometry for which the collision probability method can be applied to obtain a flux solution. The proposed procedure was benchmarked on a recent exercise representing a system with fuel double heterogeneity, i.e., fuel in solid form (pellets) surrounded by fissile material in solution, and on a 5x5 irregular pressurised water reactor assembly, which requires different Dancoff factors. (author)

  13. Generalized procedures for determining inspection sample sizes (related to quantitative measurements). Vol. 1: Detailed explanations

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1986-11-01

    Generalized procedures have been developed to determine sample sizes in connection with the planning of inspection activities. These procedures are based on different measurement methods. They are applied mainly to Bulk Handling Facilities and Physical Inventory Verifications. The present report attempts (i) to assign the measurement methods to be used to the appropriate statistical testers (viz. testers for gross, partial and small defects), and (ii) to associate the measurement uncertainties with the sample sizes required for verification. Working papers are also provided to assist in the application of the procedures. This volume contains the detailed explanations concerning the above-mentioned procedures

  14. Recent advances in ratio primary reference measurement procedures (definitive methods) and their use in certification of reference materials and controlling assigned values in proficiency testing

    International Nuclear Information System (INIS)

    Dybczyński, R.S.; Polkowska-Motrenko, H.; Chajduk, E.; Danko, B.; Pyszynska, M.

    2014-01-01

    The idea of definitive methods based on radiochemical neutron activation analysis (RNAA) consists in combining neutron activation with the highly selective and quantitative post-irradiation isolation of the desired radionuclide by column chromatography, followed by γ-ray spectrometric measurement. The principles of construction of such methods, which were devised in the Institute of Nuclear Chemistry and Technology, are recalled, and the significance of these methods for analytical quality assurance is emphasized. According to VIM 3 nomenclature, these methods may be called ratio primary reference measurement procedures (RPRMPs). The RPRMP for the determination of Se is briefly presented, and its use for checking the accuracy of 'assigned values' established by expert laboratories in some proficiency tests is demonstrated

  15. A Novel 3D Imaging Method for Airborne Downward-Looking Sparse Array SAR Based on Special Squint Model

    Directory of Open Access Journals (Sweden)

    Xiaozhen Ren

    2014-01-01

    Three-dimensional (3D) imaging technology based on antenna arrays is one of the most important 3D synthetic aperture radar (SAR) high resolution imaging modes. In this paper, a novel 3D imaging method is proposed for airborne down-looking sparse array SAR, based on the imaging geometry and the characteristics of the echo signal. The key point of the proposed algorithm is the introduction of a special squint model in cross-track processing to obtain accurate focusing. In this special squint model, point targets with different cross-track positions have different squint angles in the same range resolution cell, which differs from conventional squint SAR. After theoretical analysis and formula derivation, however, the imaging procedure can be processed with a uniform reference function, and the phase compensation factors and the realization procedure of the algorithm are demonstrated in detail. As the method requires only Fourier transforms and multiplications, and thus avoids interpolation, it is computationally efficient. Simulations with point scatterers are used to validate the method.

  16. Series expansion of the modified Einstein Procedure

    Science.gov (United States)

    Seema Chandrakant Shah-Fairbank

    2009-01-01

    This study examines the calculation of total sediment discharge based on the Modified Einstein Procedure (MEP). A new procedure based on the Series Expansion of the Modified Einstein Procedure (SEMEP) has been developed. This procedure contains four main modifications to MEP. First, SEMEP solves the Einstein integrals quickly and accurately based on a series expansion. Next,...

  17. A point-value enhanced finite volume method based on approximate delta functions

    Science.gov (United States)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements reduces the number of degrees of freedom compared with other compact methods of the same order. To ensure conservation, cell-averaged values are updated using an approach identical to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both the accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
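The defining property of an ADF can be reproduced with a few lines of linear algebra: on [-1, 1], choose polynomial coefficients whose moments against the monomials match those of a Dirac delta at a point x0. This is a generic moment-matching sketch of the concept, not the particular ADF constructions of the paper.

```python
import numpy as np

def adf_coeffs(x0, degree):
    """Coefficients c_j of an ADF p(x) = sum c_j x**j on [-1, 1] satisfying
    integral p(x) * x**k dx = x0**k for k = 0..degree (delta-like moments)."""
    n = degree + 1
    # Moment matrix: integral of x**(j+k) over [-1, 1].
    M = np.array([[2.0 / (j + k + 1) if (j + k) % 2 == 0 else 0.0
                   for j in range(n)] for k in range(n)])
    rhs = x0 ** np.arange(n)
    return np.linalg.solve(M, rhs)

def integrate_against(c, f_coeffs):
    """Exact integral over [-1, 1] of p(x) * f(x), both given by coefficients."""
    prod = np.polynomial.polynomial.polymul(c, f_coeffs)
    moments = np.array([2.0 / (k + 1) if k % 2 == 0 else 0.0
                        for k in range(len(prod))])
    return float(prod @ moments)

c = adf_coeffs(0.3, 3)
f = np.array([1.0, -2.0, 0.5, 1.0])   # f(x) = 1 - 2x + 0.5x**2 + x**3
```

For any polynomial integrand f of degree at most 3, integrating it against this finite-order polynomial p reproduces the point value f(0.3), which is exactly the delta-like behavior the abstract describes.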

  18. Hesitant Fuzzy Thermodynamic Method for Emergency Decision Making Based on Prospect Theory.

    Science.gov (United States)

    Ren, Peijia; Xu, Zeshui; Hao, Zhinan

    2017-09-01

    Due to the timeliness required of emergency response and the amount of unknown information in emergency situations, this paper proposes a method for emergency decision making that comprehensively reflects the emergency decision making process. Utilizing hesitant fuzzy elements to represent the fuzziness of the objects and the hesitant thinking of the experts, this paper introduces the negative exponential function into prospect theory so as to portray the psychological behavior of the experts, transforming the hesitant fuzzy decision matrix into the hesitant fuzzy prospect decision matrix (HFPDM) according to the expectation levels. Then, this paper applies the concepts of energy and entropy from thermodynamics to take both the quantity and the quality of the decision values into account, and defines thermodynamic decision making parameters based on the HFPDM. Accordingly, a whole procedure for emergency decision making is constructed. Moreover, experiments are designed to demonstrate and improve the validity of the emergency decision making procedure. Finally, this paper presents a case study of emergency decision making for the fire and explosion at the Port Group in the Tianjin Binhai New Area, which demonstrates the effectiveness and practicability of the proposed method.
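The prospect-theoretic transformation mentioned above can be sketched for a single hesitant fuzzy element. The negative-exponential value function below and its parameters are illustrative assumptions in the spirit of the paper, not its exact formulation.

```python
import math

def prospect_value(deviation, alpha=1.0, beta=1.0, lam=2.25):
    """Negative-exponential value function: concave for gains, convex and
    loss-averse (weight lam > 1) for losses. Parameters are placeholders."""
    if deviation >= 0:
        return 1.0 - math.exp(-alpha * deviation)
    return -lam * (1.0 - math.exp(beta * deviation))

# A hesitant fuzzy element with two membership values, judged against an
# expectation level of 0.5, becomes a hesitant fuzzy prospect element:
hfe, expectation = [0.4, 0.7], 0.5
hfpe = [prospect_value(g - expectation) for g in hfe]
```

Applying this mapping to every element of the hesitant fuzzy decision matrix yields the HFPDM, on which the thermodynamic energy and entropy parameters are then defined.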

  19. Dynamic analysis of large structures with uncertain parameters based on coupling component mode synthesis and perturbation method

    Directory of Open Access Journals (Sweden)

    D. Sarsri

    2016-03-01

    This paper presents a methodological approach to computing the stochastic eigenmodes of large FE models with parameter uncertainties, based on coupling the second-order perturbation method with component mode synthesis methods. Various component mode synthesis methods are used to optimally reduce the size of the model. The first two statistical moments of the dynamic response of the reduced system are obtained by the second-order perturbation method. Numerical results illustrating the accuracy and efficiency of the proposed coupled methodological procedures for large FE models with uncertain parameters are presented.

  20. The Next Step in Deployment of Computer Based Procedures For Field Workers: Insights And Results From Field Evaluations at Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna; Le Blanc, Katya L.; Bly, Aaron

    2015-02-01

    followed along with the computer-based procedure. After each functional test, the operators were asked a series of questions designed to provide feedback on the feasibility of using a CBP system in the plant and the general user experience of the CBP system. This paper will describe the field evaluation and its results in detail. For example, the results show that the context-driven job aids and the incorporated human performance tools are much liked by the auxiliary operators. The paper will also describe and present initial findings from a second field evaluation conducted at a second nuclear utility; for this field evaluation, a preventive maintenance work order for the HVAC system was used. In addition, there will be a description of the method and objectives of two field evaluations planned for late 2014 or early 2015.

  1. System analysis procedures for conducting PSA of nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Yoon Hwan; Jeong, Won Dae; Kim, Tae Un; Kim, Kil You; Han, Sang Hoon; Chang, Seung Chul; Sung, Tae Yong; Yang, Jun Eon; Kang, Dae Il; Park, Jin Hee; Hwang, Mi Jeong; Jin, Young Ho.

    1997-03-01

    This document, the Probabilistic Safety Assessment (PSA) procedures guide for system analysis, is intended to provide guidelines for analyzing the target systems consistently and technically in the performance of PSA for nuclear power plants (NPPs). The guide has been prepared in accordance with the procedures and techniques for fault tree analysis (FTA) used in system analysis. Normally, the main objective of system analysis is to assess the reliability of the systems modeled in the Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is a method for deductively representing the failure logic of plant systems using AND, OR, and NOT gates. The fault tree should reflect all possible failure modes that may contribute to the system unavailability. This should include contributions due to mechanical failures of the components, Common Cause Failures (CCFs), human errors, and outages for testing and maintenance. After construction of the fault tree is completed, the system unavailability is calculated with the CUT module of KIRAP, and the qualitative and quantitative analyses are performed as described above. The procedures for system analysis are based on the PSA procedures and methods that have been applied to the safety assessments of NPPs under construction in the country. Accordingly, the FTA method stated in this procedures guide will be applicable to PSAs for the NPPs to be constructed in the future. (author). 6 tabs., 11 figs., 7 refs
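The AND/OR/NOT gate logic described above can be illustrated with a toy fault tree evaluated in Python. This is a generic sketch, not the KIRAP/CUT implementation; the pump/power events and their failure probabilities are invented.

```python
from itertools import product

# Gates as nested tuples ("AND"/"OR"/"NOT", children...); leaves are
# basic-event names whose state True means "failed".
def evaluate(node, state):
    if isinstance(node, str):
        return state[node]
    op, *children = node
    vals = [evaluate(c, state) for c in children]
    if op == "AND":
        return all(vals)
    if op == "OR":
        return any(vals)
    if op == "NOT":
        return not vals[0]
    raise ValueError(f"unknown gate {op!r}")

def unavailability(tree, probs):
    """Exact top-event probability by enumerating all basic-event states
    (fine for toy trees; real codes work from minimal cut sets instead)."""
    names = list(probs)
    total = 0.0
    for bits in product([False, True], repeat=len(names)):
        state = dict(zip(names, bits))
        if evaluate(tree, state):
            p = 1.0
            for name, failed in zip(names, bits):
                p *= probs[name] if failed else 1.0 - probs[name]
            total += p
    return total

# TOP fails if both redundant pumps fail, or the common power supply fails.
tree = ("OR", ("AND", "pump_a", "pump_b"), "power")
q = unavailability(tree, {"pump_a": 0.1, "pump_b": 0.1, "power": 0.01})
```

With independent events, the hand calculation is q = 0.1·0.1 + 0.01 − 0.1·0.1·0.01, matching the enumeration; CCFs and maintenance outages would enter as additional basic events under the same gates.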

  2. System analysis procedures for conducting PSA of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hwan; Jeong, Won Dae; Kim, Tae Un; Kim, Kil You; Han, Sang Hoon; Chang, Seung Chul; Sung, Tae Yong; Yang, Jun Eon; Kang, Dae Il; Park, Jin Hee; Hwang, Mi Jeong; Jin, Young Ho

    1997-03-01

    This document, the Probabilistic Safety Assessment (PSA) procedures guide for system analysis, is intended to provide guidelines for analyzing the target systems consistently and technically in the performance of PSA for nuclear power plants (NPPs). The guide has been prepared in accordance with the procedures and techniques for fault tree analysis (FTA) used in system analysis. Normally, the main objective of system analysis is to assess the reliability of the systems modeled in the Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is a method for deductively representing the failure logic of plant systems using AND, OR, and NOT gates. The fault tree should reflect all possible failure modes that may contribute to the system unavailability. This should include contributions due to mechanical failures of the components, Common Cause Failures (CCFs), human errors, and outages for testing and maintenance. After construction of the fault tree is completed, the system unavailability is calculated with the CUT module of KIRAP, and the qualitative and quantitative analyses are performed as described above. The procedures for system analysis are based on the PSA procedures and methods that have been applied to the safety assessments of NPPs under construction in the country. Accordingly, the FTA method stated in this procedures guide will be applicable to PSAs for the NPPs to be constructed in the future. (author). 6 tabs., 11 figs., 7 refs.

  3. [Evaluation of ergonomic load of clinical nursing procedures].

    Science.gov (United States)

    Yan, P; Zhang, L; Li, F Y; Yang, Y; Wang, Y N; Huang, A M; Dai, Y L; Yao, H

    2017-08-20

    Objective: To evaluate the ergonomic load of clinical nursing procedures and to provide evidence for the prevention and management of work-related musculoskeletal disorders (WMSDs) in nurses. Methods: Based on the nursing unit characteristics and the common departments involving patient-turning procedures, 552 nurses were selected from 6 clinical departments from July to September, 2016. The ergonomic load of four types of patient-turning procedures, i.e., turning the patient's body, changing the bed linen of in-bed patients, moving patients, and chest physiotherapy, was evaluated by the on-site inspectors and self-evaluated by the operators using the Quick Exposure Check. The exposure value, exposure level, and exposure rate of WMSDs were assessed based on the procedure-related physical loads on the back, shoulders/arms, wrists/hands and neck, as well as the loads from work rhythm and work pressure. Results: All surveyed subjects were females, mostly aged 26-30 years (49.46%), with a mean age of 29.66±5.28 years. These nurses were mainly from the Department of Infection (28.99%) and Spine Surgery (21.56%). There were significant differences in the back, shoulders/arms, neck, work rhythm, and work pressure scores between different nursing procedures (F = 16.613, 5.884, 3.431, 3.222, and 5.085, respectively; P nursing procedures resulted in high to intermediate physical load in nurses. Procedures with high to low levels of WMSDs exposure were patient turning (72.69%), bed linen changing (67.15%), patient transfer (65.82%), and chest physiotherapy (58.34%). In particular, patient turning was considered a very high-risk procedure, whereas the others were considered high-risk procedures. Conclusion: Patient-turning nursing procedures result in a high ergonomic load on the operators. Therefore, more focus should be placed on the ergonomics of the caretakers and nurses.

  4. Is Office-Based Surgery Safe? Comparing Outcomes of 183,914 Aesthetic Surgical Procedures Across Different Types of Accredited Facilities.

    Science.gov (United States)

    Gupta, Varun; Parikh, Rikesh; Nguyen, Lyly; Afshari, Ashkan; Shack, R Bruce; Grotting, James C; Higdon, K Kye

    2017-02-01

    There has been a dramatic rise in office-based surgery. However, due to wide variations in regulatory standards, the safety of office-based aesthetic surgery has been questioned. This study compares complication rates of cosmetic surgery performed at office-based surgical suites (OBSS) to ambulatory surgery centers (ASCs) and hospitals. A prospective cohort of patients undergoing cosmetic surgery between 2008 and 2013 was identified from the CosmetAssure database (Birmingham, AL). Patients were grouped by the type of accredited facility where the surgery was performed: OBSS, ASC, or hospital. The primary outcome was the incidence of major complication(s) requiring an emergency room visit, hospital admission, or reoperation within 30 days postoperatively. Potential risk factors including age, gender, body mass index (BMI), smoking, diabetes, type of procedure, and combined procedures were reviewed. Of the 129,007 patients (183,914 procedures) in the dataset, the majority underwent the procedure at ASCs (57.4%), followed by hospitals (26.7%) and OBSS (15.9%). Patients operated on in an OBSS were less likely to undergo combined procedures (30.3%) compared to ASCs (31.8%) and hospitals (35.3%, P procedures. Plastic surgeons should continue to triage their patients carefully based on other significant comorbidities that were not measured in this present study. LEVEL OF EVIDENCE 3. © 2016 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.

  5. High-precision reflectivity measurements: improvements in the calibration procedure

    Science.gov (United States)

    Jupe, Marco; Grossmann, Florian; Starke, Kai; Ristau, Detlev

    2003-05-01

    The development of high quality optical components depends heavily on precise characterization procedures. The reflectance and transmittance of laser components are the most important parameters for advanced laser applications. In the industrial fabrication of optical coatings, quality management is generally ensured by spectrophotometric methods according to ISO/DIS 15386 at a medium level of accuracy. Especially for highly reflecting mirrors, a severe discrepancy in the determination of the absolute reflectivity can be found for spectrophotometric procedures. In the first part of the CHOCLAB project, a method for measuring reflectance and transmittance with enhanced precision was developed, which is described in ISO/WD 13697. In the second part of the CHOCLAB project, the evaluation and optimization of the presented method is scheduled, and within this framework an international Round-Robin experiment is currently in progress. During this Round-Robin experiment, distinct deviations were observed between the results of the high-precision measurement facilities of different partners. Based on the extended experiments, the inhomogeneity of the sample reflectivity was identified as one important origin of the deviations. Consequently, this inhomogeneity also influences the calibration procedure. Therefore, a method was developed that allows calibration of the chopper blade always using the same position on the reference mirror. During the investigations, the homogeneity of several samples was characterized by a surface mapping procedure at 1064 nm. The measurement facility was extended to the additional wavelength of 532 nm, and a similar set-up was assembled at 10.6 μm. The high-precision reflectivity procedure at the mentioned wavelengths is demonstrated with exemplary measurements.

  6. Measurement method of the distribution coefficient on the sorption process. Basic procedure of the method relevant to the barrier materials used for the deep geological disposal: 2006

    International Nuclear Information System (INIS)

    2006-08-01

    This standard was approved by the Atomic Energy Society of Japan after deliberation by the Subcommittee on Radioactive Waste Management, the Nuclear Cycle Technical Committee and the Standard Committee, and after obtaining about 600 comments from about 30 specialists. This document defines the basic measurement procedure of the distribution coefficient (hereafter referred to as Kd) to judge the reliability, reproducibility and applications and to provide the requirements for inter-comparison of Kd for a variety of barrier materials used for deep geological disposal of radioactive wastes. The basic measurement procedure of Kd is standardized, following the preceding standard, 'Measurement Method of the Distribution Coefficient on the Sorption Process - Basic Procedure of Batch Method Relevant to the Barrier Materials Used for the Shallow Land Disposal: 2002' (hereafter referred to as the Standard for Shallow Land Disposal), and considering recent progress since its publication as well as issues specific to deep geological disposal. (J.P.N.)
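    The abstract does not spell out the batch-method relation itself, but the standard definition of the distribution coefficient can be sketched as Kd = ((C0 − Ce)/Ce)·(V/m). The concentrations, volume and mass below are made-up illustrative values, not figures from the standard:

```python
def distribution_coefficient(c0, ce, volume_ml, mass_g):
    # Batch-method Kd (mL/g):
    #   c0: initial solution concentration (any consistent unit)
    #   ce: equilibrium solution concentration (same unit as c0)
    #   volume_ml: solution volume in mL, mass_g: dry solid mass in g
    return (c0 - ce) / ce * volume_ml / mass_g

# Hypothetical batch test: 100 -> 20 concentration units, 30 mL on 3 g
kd = distribution_coefficient(c0=100.0, ce=20.0, volume_ml=30.0, mass_g=3.0)
print(f"Kd = {kd:.1f} mL/g")  # → Kd = 40.0 mL/g
```

    Concentration units cancel in the ratio, which is why only V/m carries units into Kd.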

  7. Procedural confidence in hospital based practitioners: implications for the training and practice of doctors at all grades

    Directory of Open Access Journals (Sweden)

    Tsagkaraki Petroula A

    2009-01-01

    Background: Medical doctors routinely undertake a number of practical procedures, and these should be performed competently. The UK Postgraduate Medical Education and Training Board (PMETB) curriculum lists the procedures trainees should be competent in. We aimed to describe medical practitioners' confidence in their procedural skills, and to define which practical procedures are important in current medical practice. Methods: A cross-sectional observational study was performed measuring procedural confidence in 181 hospital practitioners at all grades from 2 centres in East Anglia, England. Results: Both trainees and consultants provide significant service provision. SpR-level doctors perform the widest range and the highest median number of procedures per year. Most consultants perform few if any procedures; however, some perform a narrow range at high volume. Cumulative confidence for the procedures tested peaks in the SpR grade. Five key procedures (central line insertion, lumbar puncture, pleural aspiration, ascitic aspiration, and intercostal drain insertion) are the most commonly performed, are seen as important generic skills, and correspond to the total number of procedures for which confidence can be maintained. Key determinants of confidence are gender, number of procedures performed in the previous year, and total number of procedures performed. Conclusion: The highest volume of service requirement is for six procedures. Procedural confidence depends upon gender, number of procedures performed in the previous year, and total number of procedures performed. This has implications for those designing the training curriculum and for the move to shorten the duration of training.

  8. Design, Development and Evaluation of Collaborative Team Training Method in Virtual Worlds for Time-Critical Medical Procedures

    Science.gov (United States)

    Khanal, Prabal

    2014-01-01

    Medical students acquire and enhance their clinical skills using various available techniques and resources. As the health care profession has moved towards team-based practice, students and trainees need to practice team-based procedures that involve timely management of clinical tasks and adequate communication with other members of the team.…

  9. New methods of magnet-based instrumentation for NOTES.

    Science.gov (United States)

    Magdeburg, Richard; Hauth, Daniel; Kaehler, Georg

    2013-12-01

    Laparoscopic surgery has displaced open surgery as the standard of care for many clinical conditions. NOTES has been described as the next surgical frontier, with the objective of incision-free abdominal surgery. The principal challenge of NOTES procedures is the loss of triangulation and instrument rigidity, which is one of the fundamental concepts of laparoscopic surgery. Overcoming these problems necessitates the development of new instrumentation. Material and methods: We aimed to assess the use of a very simple combination of internal and external magnets that might allow the vigorous multiaxial traction/counter-traction required in NOTES procedures. The magnet retraction system consisted of an external magnetic assembly and either small internal magnets attached by endoscopic clips to the designated tissue (magnet-clip approach) or an endoscopic grasping forceps in a magnetic deflector roll (magnet-trocar approach). We compared both methods regarding precision, time and efficacy by performing transgastric partial uterus resections, with better results for the magnet-trocar approach. This proof-of-principle animal study showed that the combination of external and internal magnets generates sufficient coupling forces at clinically relevant abdominal wall thicknesses, making them suitable for use and evaluation in NOTES procedures, and provides the vigorous multiaxial traction/counter-traction made necessary by the lack of additional abdominal trocars.

  10. Design of a micro-irrigation system based on the control volume method

    Directory of Open Access Journals (Sweden)

    Chasseriaux G.

    2006-01-01

    A micro-irrigation system design based on the control volume method using the back-step procedure is presented in this study. The proposed numerical method is simple and consists of delimiting an elementary volume of the lateral equipped with an emitter, called a «control volume», to which the conservation equations of fluid hydrodynamics are applied. The control volume method is an iterative method that calculates velocity and pressure step by step throughout the micro-irrigation network, based on an assumed pressure at the end of the line. A simple microcomputer program was used for the calculation, and convergence was very fast. Once the average water requirement of the plants is estimated, it is easy to choose the sum of the average emitter discharges as the total average flow rate of the network. The design consists of exploring an economical and efficient network that delivers the input flow rate uniformly to all emitters. This program permitted the design of a large complex network of thousands of emitters very quickly. Three subroutine programs calculate velocity and pressure at a lateral pipe and a submain pipe. The control volume method has already been tested for lateral design, and the results were validated by other methods such as the finite element method, so it makes it possible to determine the optimal design for such a micro-irrigation network.
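    The back-step idea, assuming a pressure head at the end of the line and marching upstream one control volume (one emitter) at a time, can be sketched roughly as follows. The emitter law, pipe diameter and friction model below are illustrative assumptions, not the paper's actual program:

```python
import math

G = 9.81    # gravity, m/s^2
NU = 1.0e-6  # kinematic viscosity of water, m^2/s

def emitter_q(h, k=1.0e-6, x=0.5):
    # hypothetical emitter law q = k * h**x  (q in m^3/s, h in m of head)
    return k * h ** x

def back_step(h_end, n_emitters, spacing=1.0, d=0.016):
    # March upstream from an assumed end pressure head: at each control
    # volume, pick up the emitter discharge, then add the friction head
    # loss over the step (Darcy-Weisbach with laminar/Blasius factor).
    h, q = h_end, 0.0
    area = math.pi * d ** 2 / 4.0
    for _ in range(n_emitters):
        q += emitter_q(h)
        v = q / area
        re = v * d / NU
        f = 64.0 / re if re < 2300.0 else 0.316 / re ** 0.25
        h += f * spacing / d * v ** 2 / (2.0 * G)
    return h, q  # required inlet head and total inlet flow

h_in, q_in = back_step(h_end=10.0, n_emitters=100)
print(f"inlet head = {h_in:.2f} m, inlet flow = {q_in * 1000:.3f} L/s")
```

    In a full design loop, the assumed end pressure would be adjusted iteratively until the computed inlet conditions match the available supply, which is the iteration the abstract refers to.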

  11. Cost analysis of radiological interventional procedures and reimbursement within a clinic

    International Nuclear Information System (INIS)

    Strotzer, M.; Voelk, M.; Lenhart, M.; Fruend, R.; Feuerbach, S.

    2002-01-01

    Purpose: Analysis of costs for vascular radiological interventions on a per patient basis and comparison with reimbursement based on GOÄ (Gebührenordnung für Ärzte) and DKG-NT (Deutsche Krankenhausgesellschaft-Nebenkostentarif). Material and Methods: The ten procedures most frequently performed within 12 months were evaluated. Personnel costs were derived from precise costs per hour and the estimated procedure time for each intervention. Costs for medical devices were included. Reimbursement based on GOÄ was calculated using the official conversion factor of 0.114 DM for each specific relative value unit and a multiplication factor of 1.0. The corresponding conversion factor for DKG-NT, determined by the DKG, was 0.168 DM. Results: A total of 832 interventional procedures were included. Marked differences between calculated costs and reimbursement rates were found. Regarding the ten most frequently performed procedures, there was a deficit of 1.06 million DM according to GOÄ data (factor 1.0) and 0.787 million DM according to DKG-NT. The percentage of reimbursement was only 34.2% (GOÄ; factor 1.0) and 51.3% (DKG-NT), respectively. Conclusion: Reimbursement of radiological interventional procedures based on GOÄ and DKG-NT data is of limited value for economic controlling purposes within a hospital. (orig.)

  12. Evaluation of DNA Extraction Methods Suitable for PCR-based Detection and Genotyping of Clostridium botulinum

    DEFF Research Database (Denmark)

    Auricchio, Bruna; Anniballi, Fabrizio; Fiore, Alfonsina

    2013-01-01

    Sufficient quality and quantity of extracted DNA is critical to detecting and performing genotyping of Clostridium botulinum by means of PCR-based methods. An ideal extraction method has to optimize DNA yield, minimize DNA degradation, allow multiple samples to be extracted, and be efficient in terms of cost, time, labor, and supplies. Eleven botulinum toxin-producing clostridia strains and 25 samples (10 food, 13 clinical, and 2 environmental samples) naturally contaminated with botulinum toxin-producing clostridia were used to compare 4 DNA extraction procedures: Chelex® 100 matrix, Phenol…

  13. Solving groundwater flow problems by conjugate-gradient methods and the strongly implicit procedure

    Science.gov (United States)

    Hill, Mary C.

    1990-01-01

    The performance of the preconditioned conjugate-gradient method with three preconditioners is compared with the strongly implicit procedure (SIP) using a scalar computer. The preconditioners considered are the incomplete Cholesky (ICCG) and the modified incomplete Cholesky (MICCG), which require the same computer storage as SIP as programmed for a problem with a symmetric matrix, and a polynomial preconditioner (POLCG), which requires less computer storage than SIP. Although POLCG is usually used on vector computers, it is included here because of its small storage requirements. In this paper, published comparisons of the solvers are evaluated, all four solvers are compared for the first time, and new test cases are presented to provide a more complete basis by which the solvers can be judged for typical groundwater flow problems. Based on nine test cases, the following conclusions are reached: (1) SIP is actually as efficient as ICCG for some of the published, linear, two-dimensional test cases that were reportedly solved much more efficiently by ICCG; (2) SIP is more efficient than other published comparisons would indicate when common convergence criteria are used; and (3) for problems that are three-dimensional, nonlinear, or both, and for which common convergence criteria are used, SIP is often more efficient than ICCG, and is sometimes more efficient than MICCG.
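    As a rough illustration of the preconditioned conjugate-gradient solvers compared above, here is a sketch using a simple Jacobi (diagonal) preconditioner on a one-dimensional finite-difference system. The paper's preconditioners (incomplete Cholesky, modified incomplete Cholesky, and polynomial) and its groundwater test cases are considerably more elaborate:

```python
import numpy as np

def pcg(a, b, tol=1e-10, max_iter=500):
    # Preconditioned conjugate gradients with a Jacobi preconditioner
    # (M = diag(A)); A must be symmetric positive definite.
    m_inv = 1.0 / np.diag(a)
    x = np.zeros_like(b)
    r = b - a @ x
    z = m_inv * r
    p = z.copy()
    for _ in range(max_iter):
        ap = a @ p
        alpha = (r @ z) / (p @ ap)
        x += alpha * p
        r_new = r - alpha * ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = m_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# 1-D finite-difference Laplacian (tridiagonal, SPD), a toy stand-in
# for the symmetric matrices arising in groundwater flow models
n = 50
a = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(a, b)
print(np.linalg.norm(a @ x - b))  # residual near machine precision
```

    Swapping `m_inv` for an incomplete Cholesky factorization changes only the preconditioner application, which is exactly the axis along which the paper compares solvers.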

  14. A procedure for assessing seismic hazard generated by Vrancea earthquakes and its application. III. A method for developing isoseismal and isoacceleration maps. Applications

    International Nuclear Information System (INIS)

    Enescu, D.; Enescu, B.D.

    2007-01-01

    A method for developing isoseismal and isoacceleration maps assumed to be valid for future strong earthquakes (M_GR > 6.7) is described, constituting the third stage of a procedure for assessing the seismic hazard generated by Vrancea earthquakes. The method relies on the results of the former two stages given by Enescu et al., and on further developments that are presented in this paper. Moreover, it is based on instrument recording data. Major earthquakes that took place in Vrancea (November 10, 1940, M_GR = 7.4; March 4, 1977, M_GR = 7.2; and the strongest possible) were examined as a way to test the method. The method is also applied to an earthquake of magnitude M_GR = 6.7. Given the successful results of the tests, the method can be used for predicting isoseismal and isoacceleration maps for future Vrancea earthquakes of various magnitudes M_GR ≥ 6.7. (authors)

  15. Treatment of intervertebral disc degenerative disease using percutaneous nucleotomy–an overview of less invasive procedures

    Directory of Open Access Journals (Sweden)

    Miran Jeromel

    2014-04-01

    Background: Less invasive treatment methods for intervertebral disc disease and for decompression of neural structures affected by contained disc herniation represent an alternative to the surgical procedure. Percutaneous nucleotomy uses a percutaneous approach to the intervertebral disc. The article presents the evolution of the numerous procedures in clinical practice. Methods: Percutaneous nucleoplasty is a fluoroscopy-guided procedure which enables controlled and safe entrance into the intervertebral disc. The procedure is performed under strict aseptic conditions, using local anaesthesia with the patient under analgosedation. Based on the principle of therapeutic intradiscal action, the procedures can be divided into three groups: chemical (chemonucleolysis with chymopapain, alcohol, ozone), mechanical (automated percutaneous lumbar discectomy – APLD, arthroscopic discectomy) and thermal methods (laser, radiofrequency ablation, intradiscal electrothermal annuloplasty – IDET, Coblation®). Results: Percutaneous nucleotomy by the majority of the mentioned procedures results in a therapeutic effect (reduction of pain and decompression of neural structures). Fast recovery represents a major advantage of less invasive treatment. Conclusions: The less invasive method (nucleotomy) using different procedures represents a successful alternative to surgical discectomy. Proper patient selection and safe technique are mandatory in order to achieve a good clinical outcome.

  16. A RTS-based method for direct and consistent calculating intermittent peak cooling loads

    International Nuclear Information System (INIS)

    Chen Tingyao; Cui, Mingxian

    2010-01-01

    The RTS method currently recommended by the ASHRAE Handbook is based on continuous operation. However, most air-conditioning systems in commercial buildings, if not all, are operated intermittently in practice. The application of the current RTS method to intermittent air-conditioning in nonresidential buildings could result in largely underestimated design cooling loads and inconsistently sized air-conditioning systems. Improperly sized systems could seriously deteriorate the performance of system operation and management. Therefore, a new method based on both the current RTS method and the principles of heat transfer has been developed. The first part of the new method is the same as the current RTS method in principle, but its calculation procedure is simplified by the derived equations in closed form. The technical data available in the current RTS method can be utilized to compute zone responses to a change in space air temperature, so that no effort is needed to regenerate new technical data. Both the overall RTS coefficients and the hourly cooling loads computed in the first part are used to estimate the additional peak cooling load due to a change from continuous to intermittent operation. Only one more step after the current RTS method is needed to determine the intermittent peak cooling load. The new RTS-based method has been validated by EnergyPlus simulations. The root mean square deviation (RMSD) between the relative additional peak cooling loads (RAPCLs) computed by the two methods is 1.8%. The deviation of the RAPCL varies from -3.0% to 5.0%, and the mean deviation is 1.35%.
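    The core RTS computation, converting hourly radiant heat gains into hourly cooling loads by a 24-term periodic convolution with the radiant time series coefficients, can be sketched as follows. The coefficients and the gain profile below are invented for illustration; real coefficients come from the ASHRAE Handbook tables the abstract refers to:

```python
# Hypothetical radiant time series: 24 coefficients summing to 1.0
rts = [0.50, 0.20, 0.10, 0.10, 0.05, 0.05] + [0.0] * 18

# Hypothetical hourly radiant heat gains, W (occupied 08:00-17:00)
gains = [1000.0 if 8 <= t <= 17 else 0.0 for t in range(24)]

def rts_cooling_load(radiant_gains, coeffs):
    # Q(t) = sum_j c_j * q(t - j), with hour indices wrapped over the
    # 24-h day (the RTS assumption of a periodic steady design day)
    return [sum(coeffs[j] * radiant_gains[(t - j) % 24] for j in range(24))
            for t in range(24)]

loads = rts_cooling_load(gains, rts)
# Coefficients sum to 1, so the daily load total equals the gain total
print(round(sum(loads)), round(sum(gains)))  # → 10000 10000
```

    The convolution spreads each hour's radiant gain over later hours, which is why the daily totals match while the hourly peak is reduced and delayed.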

  17. Partial report and other sampling procedures overestimate the duration of iconic memory.

    Science.gov (United States)

    Appelman, I B

    1980-03-01

    In three experiments, subjects estimated the duration of a brief visual image (iconic memory) either directly by adjusting onset of a click to offset of the visual image, or indirectly with a Sperling partial report (sampling) procedure. The results indicated that partial report and other sampling procedures may reflect other brief phenomena along with iconic memory. First, the partial report procedure yields a greater estimate of the duration of iconic memory than the more direct click method. Second, the partial report estimate of the duration of iconic memory is affected if the subject is required to simultaneously retain a list of distractor items (memory load), while the click method estimate of the duration of iconic memory is not affected by a memory load. Finally, another sampling procedure based on visual cuing yields different estimates of the duration of iconic memory depending on how many items are cued. It was concluded that partial report and other sampling procedures overestimate the duration of iconic memory.

  18. A procedure for the assessment of low frequency noise complaints.

    Science.gov (United States)

    Moorhouse, Andy T; Waddington, David C; Adams, Mags D

    2009-09-01

    The development and application of a procedure for the assessment of low frequency noise (LFN) complaints are described. The development of the assessment method included laboratory tests addressing low frequency hearing threshold and the effect on acceptability of fluctuation, and field measurements complemented with interview-based questionnaires. Environmental health departments then conducted a series of six trials with genuine "live" LFN complaints to test the workability and usefulness of the procedure. The procedure includes guidance notes and a pro-forma report with step-by-step instructions. It does not provide a prescriptive indicator of nuisance but rather gives a systematic procedure to help environmental health practitioners to form their own opinion. Examples of field measurements and application of the procedure are presented. The procedure and examples are likely to be of particular interest to environmental health practitioners involved in the assessment of LFN complaints.

  19. Dynamic alarm response procedures

    International Nuclear Information System (INIS)

    Martin, J.; Gordon, P.; Fitch, K.

    2006-01-01

    The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating time wasted looking up paper procedures by number, looking up plant process values and equipment and component status at graphical displays or panels, and maintenance of the procedures. Because it is a Web-based system, it is platform independent. DARPs can be served from any Web server that supports CGI scripting, such as Apache®, IIS®, TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports Javascript and Scalable Vector Graphics (SVG), such as Netscape®, Microsoft Internet Explorer®, Mozilla Firefox®, Opera®, and others. (authors)

  20. Characterization Test Procedures for Intersection Collision Avoidance Systems Based on Vehicle-to-Vehicle Communications

    Science.gov (United States)

    2015-12-01

    Characterization test procedures have been developed to quantify the performance of intersection collision avoidance (ICA) systems based on vehicle-to-vehicle communications. These systems warn the driver of an imminent crossing-path collision at a r...

  1. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability.

    Science.gov (United States)

    Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M

    2014-12-01

    Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered in both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor, while the ICCs of PBDI and SIM showed acceptable levels of reliability. Findings on the content validity and reliability of these new instruments are promising for realizing a competency-based procedure. Further development of the instruments and research on predictive validity should be pursued.

  2. Systematic Design of the Lead-Lag Network Method for Active Damping in LCL-Filter Based Three Phase Converters

    DEFF Research Database (Denmark)

    Alzola, Rafael Pena; Liserre, Marco; Blaabjerg, Frede

    2014-01-01

    Three-phase active rectifiers guarantee sinusoidal input currents and unity power factor at the price of a high switching frequency ripple. Adopting an LCL-filter, instead of an L-filter, allows using reduced values for the inductances and so preserving dynamics. However, stability problems can arise from the filter resonance, and active damping stabilizes the control loop without using dissipative elements but, sometimes, needing additional sensors. This solution has been addressed in many publications. The lead-lag network method is one of the first reported procedures and continues to be in use. However, neither a direct tuning procedure (without trial and error) nor its rationale has been explained. Thus, in this paper a straightforward procedure is developed to tune the lead-lag network with the help of software tools. The rationale of this procedure, based on the capacitor current feedback, is elucidated. Stability is studied by means of the root locus…
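    A lead-lag network of the kind tuned in the paper can be sketched as a discrete filter. The continuous form C(s) = k·(s + z)/(s + p) and the corner frequencies below are illustrative assumptions, discretized here with Tustin's method rather than by the paper's tuning procedure:

```python
def make_lead_lag(k, z, p, ts):
    # Tustin discretization of C(s) = k*(s + z)/(s + p) with sample
    # time ts: substituting s = (2/ts)*(1 - q)/(1 + q), q = z^-1, gives
    # y[n] = (b0*u[n] + b1*u[n-1] - a1*y[n-1]) / a0
    a = 2.0 / ts
    b0, b1 = k * (a + z), k * (z - a)
    a0, a1 = a + p, p - a
    state = {"u": 0.0, "y": 0.0}
    def step(u):
        y = (b0 * u + b1 * state["u"] - a1 * state["y"]) / a0
        state["u"], state["y"] = u, y
        return y
    return step

# Hypothetical lead network: zero at 100 rad/s, pole at 1000 rad/s,
# 10 kHz sampling (typical of a grid-converter control loop)
ctrl = make_lead_lag(k=1.0, z=100.0, p=1000.0, ts=1.0e-4)
y = 0.0
for _ in range(2000):
    y = ctrl(1.0)  # unit step input
print(f"steady-state output = {y:.4f}")  # DC gain k*z/p = 0.1000
```

    In the active damping scheme the abstract describes, the filter input would be the measured capacitor current rather than a step, and k, z, p are the quantities the tuning procedure selects from the root locus.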

  3. Establishing the analytical procedure for acetate in water by ion chromatography method

    International Nuclear Information System (INIS)

    Nguyen Thi Hong Thinh; Ha Lan Anh; Vo Thi Anh

    2015-01-01

    In recent studies of contamination sources of arsenic, ammonium, iron and organic carbon in groundwater, acetate is measured frequently because it is the main decomposition product of organic compounds released from sediment into groundwater. In order to better support the study of the origin and mobilization mechanism of the pollutants, an analytical method for acetate was developed in the Isotopes Hydrology Laboratory using the ion chromatography technique. The researchers used an Ion Chromatography system (DX-600), including an IonPac ICE-AS1 column for separating acetate and a CD 25 conductivity detector to quantify acetate in water samples. The study results showed that the team successfully developed an analytical procedure for acetate in water, with a retention time of 12 minutes and a limit of detection (LOD) of 0.01 ppm. The accuracy of the method was established by calculating the precision and bias of 10 replicate analyses of a standard sample at content levels of 1 ppm and 8 ppm. The results of the 10 measurements are satisfactory with respect to precision and bias, with repeatability coefficients of variation (CVR) of 1.3% and 0.2% and recoveries (R) of 99.92% and 101.72%. (author)
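    The precision and recovery figures quoted can be reproduced in form (not in value) from a set of replicate readings. The ten readings below are invented for illustration; the paper's own replicate data are not given in the abstract:

```python
import statistics

def cv_percent(values):
    # coefficient of variation: sample std. dev. / mean, in percent
    return statistics.stdev(values) / statistics.mean(values) * 100.0

def recovery_percent(measured_mean, true_value):
    # recovery: mean measured concentration / certified value, in percent
    return measured_mean / true_value * 100.0

# Hypothetical 10 replicate readings of a 1.0 ppm acetate standard
reps = [0.99, 1.01, 1.00, 0.98, 1.02, 1.00, 0.99, 1.01, 1.00, 1.00]
print(f"CV = {cv_percent(reps):.2f}%")
print(f"R  = {recovery_percent(statistics.mean(reps), 1.0):.2f}%")
```

    A method is typically judged acceptable when CV stays within a few percent and recovery is close to 100%, which is the criterion the abstract's 1.3%/0.2% and 99.92%/101.72% figures satisfy.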

  4. Package of procedures for solving optimization problems by the branch-and-bound method

    OpenAIRE

    Nestor, Natalia

    2012-01-01

    The practical aspects of implementing the branch-and-bound method are examined. The structure of a package of procedures for performing the basic operations in solving optimization problems is presented. The package is designed as a program kernel that can be used for various exhaustive-search tasks with backtracking.

  5. Procedure generation and verification

    International Nuclear Information System (INIS)

    Sheely, W.F.

    1986-01-01

    The Department of Energy has used Artificial Intelligence (AI) concepts to develop two powerful new computer-based techniques to enhance safety in nuclear applications. The Procedure Generation System and the Procedure Verification System can be adapted to other commercial applications, such as a manufacturing plant. The Procedure Generation System can create a procedure to deal with an off-normal condition, so that the operator can take correct actions on the system in minimal time. The Verification System evaluates the logic of the Procedure Generator's conclusions, using logic techniques totally independent of the Procedure Generator. The rapid, accurate generation and verification of corrective procedures can greatly reduce the human error possible in a complex, high-stress situation.

  6. Identification of Delamination in Concrete Slabs by SIBIE Procedure

    International Nuclear Information System (INIS)

    Yamada, M.; Yagi, Y.; Ohtsu, M.

    2017-01-01

    The Impact-Echo method is known as a non-destructive testing for concrete structures. The technique is based on the use of low-frequency elastic waves that propagate in concrete to determine the thickness and to detect internal flaws in concrete. The presence and locations of defects in concrete are estimated from identifying peak frequencies in the frequency spectra, which are responsible for the resonance due to time-of-flight from the defects. In practical applications, however, obtained spectra include so many peak frequencies that it is fairly difficult to identify the defects correctly. In order to improve the Impact-Echo method, Stack Imaging of spectral amplitudes Based on Impact-Echo (SIBIE) procedure is developed as an imaging technique applied to the Impact-Echo data, where defects in concrete are identified visually at the cross-section. In this study, the SIBIE procedure is applied to identify the delamination in a concrete slab. It is demonstrated that the delamination can be identified with reasonable accuracy. (paper)
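    The resonance relation underlying Impact-Echo (and hence the spectra SIBIE stacks) can be sketched: a reflector at depth d produces a spectral peak near f = β·Cp/(2d), where Cp is the P-wave speed and β a shape factor (about 0.96 for plate-like sections). The wave speed and peak frequency below are assumed values, not measurements from the paper:

```python
def impact_echo_depth(cp, f_peak, beta=0.96):
    # Depth of the reflecting interface from the resonance peak:
    #   d = beta * Cp / (2 * f)  (P-wave round trips between surfaces)
    return beta * cp / (2.0 * f_peak)

# Hypothetical concrete slab: Cp = 4000 m/s, peak observed at 12.8 kHz
d = impact_echo_depth(cp=4000.0, f_peak=12.8e3)
print(f"estimated reflector depth = {d * 100:.1f} cm")  # → 15.0 cm
```

    SIBIE goes beyond this single-peak inversion by stacking spectral amplitudes over candidate source-reflector travel paths, so the delamination shows up as an image in the cross-section rather than as one inferred depth.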

  7. Image-guided procedures in brain biopsy.

    Science.gov (United States)

    Fujita, K; Yanaka, K; Meguro, K; Narushima, K; Iguchi, M; Nakai, Y; Nose, T

    1999-07-01

    Image-guided procedures, such as computed tomography (CT)-guided stereotactic and ultrasound-guided methods, can assist neurosurgeons in localizing the relevant pathology. The characteristics of image-guided procedures are important for their appropriate use, especially in brain biopsy. This study reviewed the results of various image-guided brain biopsies to ascertain their advantages and disadvantages. Brain biopsies assisted by CT-guided stereotactic, ultrasound-guided, Neuronavigator-guided, and combined ultrasound and Neuronavigator-guided procedures were carried out in seven, eight, one, and three patients, respectively. Four patients underwent open biopsy without a guiding system. Twenty of 23 patients had a satisfactory diagnosis after the initial biopsy. Three patients failed to obtain a definitive diagnosis after the initial procedure: one due to insufficient volume sampling after a CT-guided procedure, and two due to localization failure by ultrasound because the lesions were nonechogenic. All patients who underwent biopsy using the combined ultrasound and Neuronavigator-guided method had a satisfactory result. The CT-guided procedure provided an efficient method of approaching any intracranial target and was appropriate for the diagnosis of hypodense lesions, but tissue sampling was sometimes not sufficient to achieve a satisfactory diagnosis. The ultrasound-guided procedure was suitable for the investigation of hyperdense lesions, but it was difficult to localize nonechogenic lesions. The combination of ultrasound and Neuronavigator methods improved diagnostic accuracy even in nonechogenic lesions such as malignant lymphoma. Therefore, it is essential to choose the most appropriate guiding method for brain biopsy according to the radiological nature of the lesions.

  8. 40 CFR 60.583 - Test methods and procedures.

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Flexible Vinyl... following procedures: (1) Determine and record the VOC content and amount of each ink used at the print head...' formulation data along with plant blending records (if plant blending is done) may be used to determine VOC...

  9. HOLISTIC PROCEDURE WORK ORGANIZATION AND LABOR SKILLS

    Directory of Open Access Journals (Sweden)

    Marianela Bermejo-Salmon

    2016-01-01

    Full Text Available The purpose of this article is to develop a procedure with a holistic focus for treating the elements that make up the work-organization profile in the Mastor Base Business Unit of the Telex Company of Santiago de Cuba. Different methods and techniques are used, such as: historical-logical analysis, analysis and synthesis, induction and deduction, the Delphi method, functional analysis, the integrated or "holistic" approach, expert judgement, a portfolio of evidence, surveys, and observation of performance, among others. The procedure establishes two key moments for the study of the process, given by the level of competencies and their profile. The competence tasks were identified, together with the corresponding rules and standards, from which a direct link between performance evaluations and training actions is established, in correspondence with the elements that make up the work-organization process in an integrative manner.

  10. Assessment of health-care waste disposal methods using a VIKOR-based fuzzy multi-criteria decision making method.

    Science.gov (United States)

    Liu, Hu-Chen; Wu, Jing; Li, Ping

    2013-12-01

    Nowadays the selection of an appropriate treatment method in health-care waste (HCW) management has become a challenging task for municipal authorities, especially in developing countries. Assessment of HCW disposal alternatives can be regarded as a complicated multi-criteria decision making (MCDM) problem, which requires consideration of multiple alternative solutions and conflicting tangible and intangible criteria. The objective of this paper is to present a new MCDM technique based on fuzzy set theory and the VIKOR method for evaluating HCW disposal methods. Linguistic variables are used by decision makers to assess the ratings and weights for the established criteria. The ordered weighted averaging (OWA) operator is utilized to aggregate the individual opinions of decision makers into a group assessment. The computational procedure of the proposed framework is illustrated through a case study in Shanghai, one of the largest cities in China. The HCW treatment alternatives considered in this study include "incineration", "steam sterilization", "microwave" and "landfill". The results obtained using the proposed approach are analyzed in a comparative way. Copyright © 2013. Published by Elsevier Ltd.
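    The VIKOR core of such a framework can be sketched compactly. The code below is the standard crisp VIKOR ranking step (i.e., after fuzzy linguistic ratings have been defuzzified and OWA-aggregated); the decision matrix, weights, and criteria are invented for illustration and are not the paper's data.

```python
# Minimal crisp VIKOR sketch (illustrative data, not the paper's case study).
# Rows of `matrix` are alternatives, columns are criteria; `benefit[j]` marks
# whether criterion j is higher-is-better. Assumes every criterion
# discriminates (no zero range), otherwise a division by zero occurs.

def vikor(matrix, weights, benefit, v=0.5):
    m, n = len(matrix), len(matrix[0])
    f_best = [max(r[j] for r in matrix) if benefit[j]
              else min(r[j] for r in matrix) for j in range(n)]
    f_worst = [min(r[j] for r in matrix) if benefit[j]
               else max(r[j] for r in matrix) for j in range(n)]
    S, R = [], []
    for row in matrix:
        terms = [weights[j] * (f_best[j] - row[j]) / (f_best[j] - f_worst[j])
                 for j in range(n)]
        S.append(sum(terms))     # group utility
        R.append(max(terms))     # individual regret
    s_best, s_worst = min(S), max(S)
    r_best, r_worst = min(R), max(R)
    # Q blends utility and regret; lower Q = closer to the compromise solution.
    return [v * (S[i] - s_best) / (s_worst - s_best)
            + (1 - v) * (R[i] - r_best) / (r_worst - r_best) for i in range(m)]

# Hypothetical criteria: cost (lower better), emissions (lower better),
# capacity (higher better); alternatives in the order of the abstract.
Q = vikor(matrix=[[7, 8, 9],    # incineration
                  [5, 3, 6],    # steam sterilization
                  [6, 4, 5],    # microwave
                  [2, 6, 2]],   # landfill
          weights=[0.3, 0.4, 0.3],
          benefit=[False, False, True])
best = Q.index(min(Q))  # index of the compromise alternative
```

    With these made-up numbers the compromise solution is alternative 1 (steam sterilization); the real ranking of course depends on the elicited fuzzy ratings and weights.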

  11. Procedures simulation in Nuclear Power Plants (SIMPROC) [SIMPROC: a code for the simulation of human procedures in nuclear power plants]

    Energy Technology Data Exchange (ETDEWEB)

    Izquierdo, J. M.; Hortal, J.; Sanchez, M.; Melendez, E.; Queral, C.; Ibanez, L.; Gil, J.; Fernandez, I.; Murcia, S.; Gomez, J.

    2010-07-01

    SIMPROC, a rule-based procedure simulator, is an Indizen product whose procedure computerization is based on XML. It complements traditional methods of procedure evaluation and is connected to SCAIS (Code System for Integrated Safety Analysis).

  12. Improving the efficiency of aerodynamic shape optimization procedures

    Science.gov (United States)

    Burgreen, Greg W.; Baysal, Oktay; Eleshaky, Mohamed E.

    1992-01-01

    The computational efficiency of an aerodynamic shape optimization procedure based on discrete sensitivity analysis is increased through the implementation of two improvements. The first improvement replaces a grid-point-based approach for surface representation with a Bezier-Bernstein polynomial parameterization of the surface. Explicit analytical expressions for the grid sensitivity terms are developed for both approaches. The second improvement proposes the use of Newton's method in lieu of an alternating direction implicit (ADI) methodology to calculate the highly converged flow solutions required to compute the sensitivity coefficients. The modified design procedure is demonstrated by optimizing the shape of an internal-external nozzle configuration. A substantial factor-of-8 decrease in computational time for the optimization process was achieved by implementing both design improvements.
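    The key property exploited by the first improvement can be shown in a few lines: a Bezier point is linear in its control points, so the grid sensitivity with respect to control point k is simply the Bernstein basis value, with no finite differencing. This sketch is illustrative (a curve rather than a full surface) and is not the authors' code.

```python
# Bezier-Bernstein parameterization sketch: grid sensitivities are analytic.
from math import comb

def bernstein(n, k, t):
    """Bernstein basis polynomial B_{k,n}(t)."""
    return comb(n, k) * t**k * (1 - t)**(n - k)

def bezier_point(ctrl, t):
    """Evaluate a Bezier curve with control ordinates `ctrl` at parameter t."""
    n = len(ctrl) - 1
    return sum(bernstein(n, k, t) * p for k, p in enumerate(ctrl))

def grid_sensitivity(n, k, t):
    """d(grid point)/d(control point k) = B_{k,n}(t), exactly."""
    return bernstein(n, k, t)

# Cubic example: moving control point 1 by delta moves the t = 0.5 grid
# point by B_{1,3}(0.5) * delta = 0.375 * delta.
y = bezier_point([0.0, 1.0, 1.0, 0.0], 0.5)
```

    This is why the polynomial parameterization pays off in sensitivity-based optimization: each design variable is a control point, and the surface-grid Jacobian is available in closed form.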

  13. COMPANY VALUATION METHODS BASED ON PATRIMONY

    Directory of Open Access Journals (Sweden)

    SUCIU GHEORGHE

    2013-02-01

    Full Text Available The methods used for the company valuation can be divided into 3 main groups: methods based on patrimony, methods based on financial performance, and methods based both on patrimony and on performance. The company valuation methods based on patrimony are implemented taking into account the balance sheet or the financial statement. The financial statement refers to that type of balance in which the assets are arranged according to liquidity, and the liabilities according to their financial maturity date. The patrimonial methods are based on the principle that the value of the company equals that of the patrimony it owns. From a legal point of view, the patrimony refers to all the rights and obligations of a company. The valuation of companies based on their financial performance can be done in 3 ways: the return value, the yield value, and the present value of the cash flows. The mixed methods depend both on patrimony and on financial performance, or can make use of other methods.

  14. Use of walk through procedures to minimize seismic upgrade for nuclear facilities

    International Nuclear Information System (INIS)

    Djordjevic, W.

    1985-01-01

    The seismic evaluation Walk Through Procedure briefly described in this paper can be used to dramatically reduce seismic evaluation effort. The procedure makes maximum use of existing extensive generic data bases which define seismic fragility or ruggedness, while at the same time utilizing component- and site-specific field inspections of each component and its environs. It is recommended that the Walk Through Procedure outlined herein be considered as a primary method of evaluating the seismic capabilities of nuclear facilities in the future. 7 references, 4 figures

  15. Using salivary cortisol to measure the effects of a Wilbarger protocol-based procedure on sympathetic arousal: a pilot study.

    Science.gov (United States)

    Kimball, Judith G; Lynch, Keara M; Stewart, Kelli C; Williams, Nicole E; Thomas, Meghan A; Atwood, Kam D

    2007-01-01

    This study investigated changes in salivary cortisol, the stress hormone, after administration of a procedure based on the Wilbarger protocol to children diagnosed with sensory defensiveness (SD), a type of sensory modulation dysfunction. Using a single-subject design across participants, we studied 4 boys with SD, aged 3 to 5 years. Each participant completed four sessions consisting of the collection of a saliva sample, administration of a procedure based on the Wilbarger protocol, 15 min of quiet neutral activities to allow time for any changes in cortisol level to manifest in the saliva, and a second collection of saliva. Saliva samples were analyzed using enzyme-linked immunosorbent assay (ELISA). Salivary cortisol levels in all participants changed after each of the four applications of the procedure. The cortisol levels of the 2 children whose levels were relatively higher on pretest decreased at each posttest. The levels of 1 child whose cortisol was higher on pretest three times decreased those three times and increased the one time the pretest cortisol was lower. The levels of 1 child who had the lowest cortisol levels of any of the children increased each time. Therefore, in all participants, cortisol moved in the direction of modulation. In these 4 boys, a procedure based on the Wilbarger protocol modulated cortisol levels toward a middle range. This pilot study indicates that there is an association between the sympathetic nervous system response and the Wilbarger protocol-based procedure, as indicated by salivary cortisol levels.

  16. Based on Penalty Function Method

    Directory of Open Access Journals (Sweden)

    Ishaq Baba

    2015-01-01

    Full Text Available The dual response surface approach for simultaneously optimizing the mean and variance models as separate functions suffers some deficiencies in handling the tradeoffs between the bias and variance components of mean squared error (MSE). In this paper, the accuracy of the predicted response is given serious attention in the determination of the optimum setting conditions. We consider four different objective functions for the dual response surface optimization approach. The essence of the proposed method is to reduce the influence of the variance of the predicted response by minimizing the variability relative to the quality characteristics of interest while at the same time achieving the specific target output. The basic idea is to convert the constrained optimization function into an unconstrained problem by adding the constraint to the original objective function. Numerical examples and a simulation study are carried out to compare the performance of the proposed method with some existing procedures. Numerical results show that the performance of the proposed method is encouraging and exhibits clear improvement over the existing approaches.
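    The "fold the constraint into the objective" idea can be sketched with a quadratic penalty on a toy one-dimensional problem. Everything here (the problem, the solver, the penalty schedule) is an illustrative assumption, not the paper's formulation.

```python
# Quadratic-penalty sketch: minimize (x - 3)^2 subject to x <= 2 by turning
# the constraint into an added penalty term and solving unconstrained.

def penalized(x, mu):
    violation = max(0.0, x - 2.0)          # amount by which x <= 2 is broken
    return (x - 3.0)**2 + mu * violation**2

def minimize_1d(f, lo=-10.0, hi=10.0, iters=200):
    """Crude golden-section search; enough for this smooth unimodal example."""
    phi = (5**0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

x = 0.0
for mu in (1.0, 10.0, 100.0, 1000.0):      # increasing penalty weight
    x = minimize_1d(lambda t: penalized(t, mu))
# x approaches the constrained optimum x* = 2 as mu grows
```

    In the dual response surface setting the penalized term would encode the target-mean constraint while the base objective carries the predicted variance, but the mechanics are the same.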

  17. [A retrieval method of drug molecules based on graph collapsing].

    Science.gov (United States)

    Qu, J W; Lv, X Q; Liu, Z M; Liao, Y; Sun, P H; Wang, B; Tang, Z

    2018-04-18

    To establish a compact and efficient hypergraph representation and a graph-similarity-based retrieval method for molecules, to achieve effective and efficient medicine information retrieval. The chemical structural formula (CSF) is a primary search target, as a unique and precise identifier for each compound at the molecular level, in the research field of medicine information retrieval. To retrieve medicine information effectively and efficiently, a complete workflow of the graph-based CSF retrieval system was introduced. This system accepted photos taken with smartphones and sketches drawn on tablet personal computers as CSF inputs, and formalized the CSFs into corresponding graphs. This paper then proposed a compact and efficient hypergraph representation for molecules based on an analysis of the factors that directly affect the efficiency of graph matching. According to the characteristics of CSFs, a hierarchical collapsing method combining graph isomorphism and frequent subgraph mining was adopted. A fundamental challenge remained, however: subgraph overlapping during the collapsing procedure, which hindered the method from establishing the correct compact hypergraph of an original CSF graph. Therefore, a graph-isomorphism-based algorithm was proposed to select dominant acyclic subgraphs on the basis of overlapping analysis. Finally, the spatial similarity among graphical CSFs was evaluated by multi-dimensional measures of similarity. To evaluate the performance of the proposed method, the proposed system was first compared, on retrieval accuracy, with Wikipedia Chemical Structure Explorer (WCSE), the state-of-the-art system that allows CSF similarity searching within the Wikipedia molecules dataset. The system achieved higher values on mean average precision, discounted cumulative gain, rank-biased precision, and expected reciprocal rank than WCSE from the top-2 to the top-10 retrieved results. Specifically, the system achieved 10%, 1.41, 6.42%, and 1
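    The basic "collapsing" step can be illustrated in miniature: a frequently occurring subgraph (say, a benzene-like ring) is replaced by a single hypernode, shrinking the graph that later matching must traverse. This toy is only the merge mechanics, not the paper's isomorphism or frequent-subgraph-mining machinery; the molecule and labels are invented.

```python
# Toy subgraph-collapsing sketch: merge a chosen vertex group into one
# hypernode while preserving edges to the rest of the graph.

def collapse(edges, group, hypernode):
    """Collapse all vertices in `group` into `hypernode`; drop internal edges."""
    merged = set()
    for u, v in edges:
        u2 = hypernode if u in group else u
        v2 = hypernode if v in group else v
        if u2 != v2:                        # edge inside the group: discard
            merged.add(tuple(sorted((u2, v2))))
    return sorted(merged)

# Phenol-like sketch: a six-membered ring C1..C6 with an OH on C1.
ring = [("C1", "C2"), ("C2", "C3"), ("C3", "C4"),
        ("C4", "C5"), ("C5", "C6"), ("C6", "C1"), ("C1", "OH")]
hyper = collapse(ring, {"C1", "C2", "C3", "C4", "C5", "C6"}, "RING")
# hyper == [("OH", "RING")]: seven edges collapse to one hypernode edge
```

    The overlap problem the abstract mentions shows up exactly here: if two candidate groups share vertices, collapsing one invalidates the other, which is why the paper needs an overlap-aware selection of dominant subgraphs.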

  18. A study on Requirements of Data Base Translator for APR1400 Computerized Procedure System at Shin-Hanul unit 1 and 2

    International Nuclear Information System (INIS)

    Seong, Nokyu; Lee, Sungjin

    2015-01-01

    The CPS is one of the Man-Machine Interface (MMI) resources, and it can directly display plant graphic objects which are in the Digital Control System (DCS). The CPS can also send a request to the DCS to provide a DCS screen, called a step support display, through a DCS link button on a computerized procedure. Procedure writers can insert DCS graphic information into a computerized procedure through the data base provided by the CPS Editing System (CPSES). The data base provided by the CPSES conforms to the naming rule of DCS graphic objects. The naming rule of DCS graphic objects is defined by the vendor; thus, the status of DCS graphic objects in a computerized procedure at the Shin-Kori plant cannot be displayed on the CPS at the Shin-Hanul plant. To use a computerized procedure written by another plant's procedure writers, the DCS graphic objects must be translated using that plant's data base. This paper introduces the requirements of a data base translator for the CPSES for the APR1400 CPS at Shin-Hanul units 1 and 2, intended to reduce the burden of translating and re-inserting graphic objects. The translator algorithms shall be tested to update the data base of the CPSES effectively. A prototype of the translator has been implemented and is being tested using a real plant DB. This translator can be applied to Shin-Hanul units 1 and 2 through software V and V.

  19. A study on Requirements of Data Base Translator for APR1400 Computerized Procedure System at Shin-Hanul unit 1 and 2

    Energy Technology Data Exchange (ETDEWEB)

    Seong, Nokyu; Lee, Sungjin [KHNP, Daejeon (Korea, Republic of)

    2015-05-15

    The CPS is one of the Man-Machine Interface (MMI) resources, and it can directly display plant graphic objects which are in the Digital Control System (DCS). The CPS can also send a request to the DCS to provide a DCS screen, called a step support display, through a DCS link button on a computerized procedure. Procedure writers can insert DCS graphic information into a computerized procedure through the data base provided by the CPS Editing System (CPSES). The data base provided by the CPSES conforms to the naming rule of DCS graphic objects. The naming rule of DCS graphic objects is defined by the vendor; thus, the status of DCS graphic objects in a computerized procedure at the Shin-Kori plant cannot be displayed on the CPS at the Shin-Hanul plant. To use a computerized procedure written by another plant's procedure writers, the DCS graphic objects must be translated using that plant's data base. This paper introduces the requirements of a data base translator for the CPSES for the APR1400 CPS at Shin-Hanul units 1 and 2, intended to reduce the burden of translating and re-inserting graphic objects. The translator algorithms shall be tested to update the data base of the CPSES effectively. A prototype of the translator has been implemented and is being tested using a real plant DB. This translator can be applied to Shin-Hanul units 1 and 2 through software V and V.

  20. Comparison of different cleanup procedures for oil crops based on the development of a trace analytical method for the determination of pyraclostrobin and epoxiconazole.

    Science.gov (United States)

    Pan, Xinglu; Dong, Fengshou; Xu, Jun; Liu, Xingang; Cheng, Youpu; Chen, Zenglong; Liu, Na; Chen, Xixi; Tao, Yan; Zheng, Yongquan

    2014-12-01

    The effects of different cleanup procedures in removing high-molecular-mass lipids and natural colorants from oil-crop extracts, including dispersive solid-phase extraction, low-temperature precipitation, and gel permeation chromatography, were studied. The pigment removal, lipid quantity, and matrix effects of the three cleanup methods were evaluated. The results indicated that gel permeation chromatography is the most effective method compared with dispersive solid-phase extraction and low-temperature precipitation. Pyraclostrobin and epoxiconazole, applied extensively in oil-crop production, were selected as typical pesticides for study, and a trace analytical method was developed using gel permeation chromatography and ultra-high-performance liquid chromatography with tandem mass spectrometry. Average recoveries of the target pesticides at three levels (10, 50, and 100 μg/kg) were in the range of 74.7-96.8%, with relative standard deviation values below 9.2%. The limits of detection did not exceed 0.46 μg/kg, whereas the limits of quantification were below 1.54 μg/kg and much lower than the maximum residue limit in all matrices. This study may provide essential data for optimizing the analysis of pesticides in oil-crop samples. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. METHOD OF SOFTWARE-BASED COMPENSATION OF TECHNOLOGICAL VARIATION IN CHROMATICITY COORDINATES OF LCD PANELS

    Directory of Open Access Journals (Sweden)

    I. O. Zharinov

    2015-05-01

    Full Text Available Subject of research. The problem of software-based compensation of technological variation in the chromaticity coordinates of liquid crystal panels is considered. A method of software-based compensation of this variation is proposed; it makes the color reproduction characteristics of series-produced samples of on-board indication equipment correspond to those of the sample equipment taken as the standard. Method. A mathematical calculation of the profile is performed for the given model of liquid crystal panel. The coefficients that correspond to the typical values of the chromaticity coordinates at the vertices of the color-coverage triangle constitute a reference mathematical model of the LCD panel plate from a specific manufacturer. At the incoming-inspection stage, the sample of the liquid crystal panel that is to be integrated into indication equipment is mounted on the lighting test unit, where Nokia-Test control is provided by forming the RGB codes for displaying the image of a homogeneous field in red, green, blue and white. The (x, y) chromaticity coordinates in red, green, blue and white are measured using a colorimeter with a known absolute error. Instead of using lighting equipment, such measurements may be carried out directly on the sample indication equipment during the customizing procedure. The measured values are used to calculate the individual LCD-panel profile coefficients through Grassmann's transformation, establishing mutual relations between the XYZ color coordinates and the RGB codes used for displaying the image on the liquid crystal panel. The obtained coefficients are then stored in the memory of the graphics controller together with the functional software and used for image display. Main results. The efficiency of the proposed method of software-based compensation for technological variation of
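    A Grassmann-style profile computation of the kind described can be sketched as follows: from the measured (x, y) chromaticities of a panel's primaries and its white point, build the panel-specific RGB-to-XYZ matrix whose columns are the primaries scaled so that RGB = (1, 1, 1) reproduces the white point. This is the standard colorimetric derivation, not the authors' exact procedure, and the measured values below are hypothetical (they happen to be sRGB/D65 numbers so the result is checkable).

```python
# Per-panel RGB -> XYZ profile matrix from measured chromaticities (sketch).

def xyz_from_xy(x, y):
    """Chromaticity (x, y) to XYZ at unit luminance Y = 1."""
    return (x / y, 1.0, (1.0 - x - y) / y)

def solve3(a, b):
    """Solve a 3x3 linear system a @ s = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(a)
    out = []
    for j in range(3):
        m = [row[:] for row in a]
        for i in range(3):
            m[i][j] = b[i]
        out.append(det(m) / d)
    return out

def rgb_to_xyz_matrix(red, green, blue, white):
    prim = [xyz_from_xy(*p) for p in (red, green, blue)]
    a = [[prim[j][i] for j in range(3)] for i in range(3)]  # columns = primaries
    s = solve3(a, list(xyz_from_xy(*white)))                # scale to white
    return [[prim[j][i] * s[j] for j in range(3)] for i in range(3)]

# Hypothetical colorimeter readings for one panel sample:
M = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06),
                      (0.3127, 0.3290))
# The middle row of M is the luminance contribution of R, G, B and sums to
# the white luminance (1.0) by construction.
```

    Inverting M for the reference panel and composing it with M for each produced panel gives the per-sample RGB correction that the graphics controller would apply.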

  2. A procedure for estimating the dose modifying effect of chemotherapy on radiation response

    International Nuclear Information System (INIS)

    Hao, Y.; Keane, T.

    1994-01-01

    A procedure based on a logistic regression model was used to estimate the dose-modifying effect of chemotherapy on the response of normal tissues to radiation. The dose-modifying factor (DEF) in the proposed procedure is expressed as a function of the logistic regression coefficients, response levels, and values of covariates in the model. The proposed procedure is advantageous in that it allows consideration of both the response levels and the values of covariates in calculating the DEF. A plot of the DEF against the response or a covariate describes how the DEF varies with the response levels or the covariate values. Confidence intervals for the DEF were obtained based on the normal approximation of the distribution of the estimated DEF and on a non-parametric bootstrap method. An example is given to illustrate the proposed procedure. (Author)
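    The non-parametric bootstrap half of the interval construction is generic and easy to sketch. The code below is a plain percentile bootstrap for any derived statistic; the DEF-like data are hypothetical, and the paper's actual statistic (a function of fitted logistic coefficients) is deliberately replaced by a simple mean for brevity.

```python
# Percentile-bootstrap confidence interval sketch (illustrative only).
import random

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=42):
    """Resample `data` with replacement and return the (alpha/2, 1-alpha/2)
    percentile interval of `statistic` over the bootstrap replicates."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        stats.append(statistic(sample))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-case DEF-like ratios (dose with chemo / dose without):
paired_ratios = [1.18, 1.25, 1.10, 1.32, 1.21, 1.15, 1.28, 1.19]
lo, hi = bootstrap_ci(paired_ratios, lambda s: sum(s) / len(s))
```

    In the paper's setting, each bootstrap replicate would refit the logistic model on the resampled patients and recompute the DEF from the new coefficients before taking percentiles.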

  3. Pretreatment procedures applied to samples to be analysed by neutron activation analysis at CDTN/CNEN

    International Nuclear Information System (INIS)

    Francisco, Dovenir; Menezes, Maria Angela de Barros Correia

    2009-01-01

    The neutron activation technique - using several methods - has been applied to 80% of the analytical demand of the Division for Reactor and Analytical Techniques at CDTN/CNEN, Belo Horizonte, Minas Gerais. This scenario emphasizes the Laboratory's responsibility to provide and assure the quality of the measurements. The first step in assuring the quality of results is the preparation of the samples. Therefore, this paper describes the experimental procedures adopted at CDTN/CNEN in order to standardize the conditions of analysis and to avoid contamination by elements present everywhere. Some of the procedures are based on methods described in the literature; others are based on many years of experience preparing samples from many kinds of matrices. The procedures described are related to geological materials - soil, sediment, rock, gems, clay, archaeological ceramics and ore - biological materials - hair, fish, plants, food - water, etc. Analytical results for sediment samples are shown as an example, pointing out the efficiency of the experimental procedure. (author)

  4. 40 CFR 60.4212 - What test methods and other procedures must I use if I am an owner or operator of a stationary CI...

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What test methods and other procedures... Requirements for Owners and Operators § 60.4212 What test methods and other procedures must I use if I am an... CFR 94.8, as applicable, must not exceed the NTE numerical requirements, rounded to the same number of...

  5. 48 CFR 17.504 - Ordering procedures.

    Science.gov (United States)

    2010-10-01

    ... METHODS AND CONTRACT TYPES SPECIAL CONTRACTING METHODS Interagency Acquisitions Under the Economy Act 17.504 Ordering procedures. (a) Before placing an Economy Act order for supplies or services with another...; see also 6.302 for procedures to follow where using other than full and open competition.) The...

  6. Transition to Office-based Obstetric and Gynecologic Procedures: Safety, Technical, and Financial Considerations.

    Science.gov (United States)

    Peacock, Lisa M; Thomassee, May E; Williams, Valerie L; Young, Amy E

    2015-06-01

    Office-based surgery is increasingly desired by patients and providers due to ease of access, overall efficiency, reimbursement, and satisfaction. The adoption of office-based surgery requires careful consideration of safety, efficacy, cost, and feasibility within a provider's practice. This article reviews the currently available data regarding patient and provider satisfaction, as well as practical considerations of staffing, equipment, and supplies. To aid the practitioner, issues of office-based anesthesia and safety are discussed, with references to currently available national guidelines and protocols. A brief review of billing, coding, and reimbursement is included, and technical procedural aspects are summarized with information and recommendations.

  7. New Procedure to Develop Lumped Kinetic Models for Heavy Fuel Oil Combustion

    KAUST Repository

    Han, Yunqing

    2016-09-20

    A new procedure to develop accurate lumped kinetic models for complex fuels is proposed and applied to experimental data for heavy fuel oil measured by thermogravimetry. The new procedure is based on pseudocomponents representing different reaction stages, which are determined by a systematic optimization process that ensures the separation of the different reaction stages with the highest accuracy. The procedure was implemented and the model predictions were compared against those from a conventional method, yielding significantly improved agreement with the experimental data. © 2016 American Chemical Society.
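    The forward model behind such a pseudocomponent fit can be sketched simply: each stage is a first-order reaction with an Arrhenius rate, and the predicted thermogravimetric mass is a weighted sum of the unconverted fractions. The two stages, their weights, and the Arrhenius parameters below are illustrative assumptions, not the paper's fitted values.

```python
# Pseudocomponent (lumped) TGA mass-loss sketch under a linear heating ramp.
from math import exp

R = 8.314  # gas constant, J/(mol K)

def tga_mass(stages, heating_rate=0.1667, t_end=6000.0, dt=1.0, T0=300.0):
    """Euler-integrate first-order conversion of each pseudocomponent.
    stages: list of (weight_fraction, A [1/s], Ea [J/mol]).
    Returns the residual mass fraction at t_end."""
    alphas = [0.0] * len(stages)           # conversion of each stage
    t = 0.0
    while t < t_end:
        T = T0 + heating_rate * t          # linear temperature ramp, K
        for i, (_, A, Ea) in enumerate(stages):
            k = A * exp(-Ea / (R * T))     # Arrhenius rate constant
            alphas[i] = min(1.0, alphas[i] + dt * k * (1.0 - alphas[i]))
        t += dt
    return sum(w * (1.0 - a) for (w, _, _), a in zip(stages, alphas))

# Two hypothetical stages: a volatile fraction and a heavier fraction.
residual = tga_mass([(0.6, 1e8, 1.2e5), (0.4, 1e10, 1.9e5)])
```

    The paper's optimization step would adjust the number of stages, their weights, and (A, Ea) so that this predicted mass curve tracks the measured thermogram, with stage boundaries chosen to keep the reaction stages cleanly separated.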

  8. Gadolinium Use in Spine Pain Management Procedures for Patients with Contrast Allergies: Results in 527 Procedures

    International Nuclear Information System (INIS)

    Safriel, Yair; Ang, Roberto; Ali, Muhammed

    2008-01-01

    Introduction. To review the safety and efficacy of gadolinium in spine pain management procedures in patients at high risk for a contrast reaction who are not suitable candidates for standard non-ionic contrast. Methods. We reviewed records over a 61-month period of all image-guided spinal pain management procedures in which patients had allergies making them unsuitable candidates for standard non-ionic contrast and in which gadolinium was used to confirm needle tip placement prior to injection of medication. Results. Three hundred and four outpatients underwent 527 procedures. A spinal needle was used in all but 41 procedures. Gadolinium was visualized in vivo using portable C-arm fluoroscopy, allowing confirmation of needle tip location. The gadolinium dose ranged from 0.2 to 10 ml per level; the highest dose received by one patient was 15.83 ml intradiscally during a three-level discogram. Three hundred and one patients were discharged without complications or known delayed complications. One patient had a documented intrathecal injection but without sequelae, and 2 patients who underwent cervical procedures experienced seizures requiring admission to the intensive care unit. Both of the latter patients were discharged without any further complications. Conclusion. Based on our experience, we recommend using gadolinium judiciously for needle tip confirmation. We feel more confident using gadolinium in the lumbar spine and in cervical nerve blocks. Gadolinium should probably not be used as an injectate volume expander. The indications for gadolinium use in cervical needle-guided spine procedures are less clear, and the use of a blunt-tipped needle should be considered.

  9. Evaluation of a Delphi technique based expert judgement method for LCA valuation - DELPHI II

    Energy Technology Data Exchange (ETDEWEB)

    Virtanen, Y.; Torkkeli, S. [VTT Chemical Technology, Espoo (Finland). Environmental Technology; Wilson, B. [Landbank Environmental Research and Consulting, London (United Kingdom)

    1999-07-01

    Because of the complexity of, and trade-offs between, different points of the life cycles of the analysed systems, a method that measures the environmental damage caused by each intervention is needed in order to make a choice between the products. However, there is no commonly agreed methodology for this particular purpose. In most of the methods the valuation is implicitly or explicitly based on economic criteria. For various reasons, however, economically obtained criteria do not necessarily reflect ecological arguments correctly. Thus, there is a need for new, ecologically based valuation methods. One such approach is the expert judgement method, based on the Delphi technique, which rejects the economic basis in favour of the judgements of a group of environmental experts. However, it is not self-evident that expert-judgement-based environmental ratings of interventions will be essentially more correct and certain than those of other methods. In this study the method was evaluated at different points of the procedure in order to obtain a picture of the quality of the indexes produced. The evaluation was based on an actual Delphi study made in 1995-1996 in Finland, Sweden and Norway. The main questions addressed were the significance of the results and the operational quality of the Delphi procedure. The results obtained by applying the expert-method indexes were also compared with the results obtained with other valuation methods for the background life cycle inventory of the case study. Additional material included feedback data from the panellists of the case study, collected with a questionnaire. The questionnaire data were analysed to identify major dimensions in the criteria for evaluating interventions and the correlation of the final indexes of the Delphi I study with these dimensions. The rest of the questionnaire material was used to document panellists' opinions and experiences of the Delphi process, familiarity with the environmental impacts of various

  10. Evaluation of a Delphi technique based expert judgement method for LCA valuation - DELPHI II

    International Nuclear Information System (INIS)

    Virtanen, Y.; Torkkeli, S.

    1999-01-01

    Because of the complexity of, and trade-offs between, different points of the life cycles of the analysed systems, a method that measures the environmental damage caused by each intervention is needed in order to make a choice between the products. However, there is no commonly agreed methodology for this particular purpose. In most of the methods the valuation is implicitly or explicitly based on economic criteria. For various reasons, however, economically obtained criteria do not necessarily reflect ecological arguments correctly. Thus, there is a need for new, ecologically based valuation methods. One such approach is the expert judgement method, based on the Delphi technique, which rejects the economic basis in favour of the judgements of a group of environmental experts. However, it is not self-evident that expert-judgement-based environmental ratings of interventions will be essentially more correct and certain than those of other methods. In this study the method was evaluated at different points of the procedure in order to obtain a picture of the quality of the indexes produced. The evaluation was based on an actual Delphi study made in 1995-1996 in Finland, Sweden and Norway. The main questions addressed were the significance of the results and the operational quality of the Delphi procedure. The results obtained by applying the expert-method indexes were also compared with the results obtained with other valuation methods for the background life cycle inventory of the case study. Additional material included feedback data from the panellists of the case study, collected with a questionnaire. The questionnaire data were analysed to identify major dimensions in the criteria for evaluating interventions and the correlation of the final indexes of the Delphi I study with these dimensions. The rest of the questionnaire material was used to document panellists' opinions and experiences of the Delphi process, familiarity with the environmental impacts of various interventions

  11. Cloud-Based Electronic Test Procedures, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Procedures are critical to experimental tests as they describe the specific steps necessary to efficiently and safely carry out a test in a repeatable fashion. The...

  12. Summative Mass Analysis of Algal Biomass - Integration of Analytical Procedures: Laboratory Analytical Procedure (LAP)

    Energy Technology Data Exchange (ETDEWEB)

    Laurens, Lieve M. L.

    2016-01-13

    This procedure guides the integration of laboratory analytical procedures to measure algal biomass constituents in an unambiguous manner and ultimately achieve mass balance closure for algal biomass samples. Many of these methods build on years of research in algal biomass analysis.
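    The mass-balance bookkeeping this LAP aims at can be sketched in a few lines: constituent mass fractions from separate assays are summed and compared against 100% of dry weight. The constituent names and percentages below are illustrative placeholders, not values from the procedure.

```python
# Hypothetical illustration of mass balance closure for an algal biomass
# sample: constituent mass fractions (% dry weight) from separate assays
# are summed and compared against 100%.

def mass_balance_closure(fractions):
    """Return total recovered mass (% dw) and the unaccounted remainder."""
    total = sum(fractions.values())
    return total, 100.0 - total

# Example constituent fractions (illustrative numbers, not from the LAP)
sample = {
    "lipids": 22.4,
    "carbohydrates": 31.8,
    "protein": 28.9,
    "ash": 7.6,
    "moisture": 4.1,
}

total, gap = mass_balance_closure(sample)
print(f"closure: {total:.1f}% of dry weight, {gap:.1f}% unaccounted")
```

    A remainder far from zero would flag either an unmeasured constituent or double-counting between assays, which is the kind of ambiguity the integrated procedure is meant to remove.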

  13. Procedures for the production of poly-zirconium-compound (PZC) based chromatographic 99mTc generator to be available for clinical application

    International Nuclear Information System (INIS)

    Le Van So

    2006-01-01

    Two procedures - column post-loading and column pre-loading - for the preparation of PZC-based chromatographic Tc-99m generators were described in detail. In-process documentation, process flow charts for the individual procedures, specific Tc-99m generator designs and a pictorial description of the Tc-99m generator production process were systematically reported. The column pre-loading procedure was rated highly as a competent technology for the preparation of high-performance PZC-based Tc-99m chromatographic generators using (n,γ) 99Mo of low specific radioactivity produced in low-power research reactors. (author)

  14. Developing best practices teaching procedures for skinfold assessment: observational examination using the Think Aloud method.

    Science.gov (United States)

    Holmstrup, Michael E; Verba, Steven D; Lynn, Jeffrey S

    2015-12-01

    Skinfold assessment is valid and economical; however, it has a steep learning curve, and many programs only include one exposure to the technique. Increasing the number of exposures to skinfold assessment within an undergraduate curriculum would likely increase skill proficiency. The present study combined observational and Think Aloud methodologies to quantify procedural and cognitive characteristics of skinfold assessment. It was hypothesized that 1) increased curricular exposure to skinfold assessment would improve proficiency and 2) the combination of an observational and Think Aloud analysis would provide quantifiable areas of emphasis for instructing skinfold assessment. Seventy-five undergraduates with varied curricular exposure performed a seven-site skinfold assessment on a test subject while expressing their thoughts aloud. A trained practitioner recorded procedural observations, with transcripts generated from audio recordings to capture cognitive information. Skinfold measurements were compared with a criterion value, and bias scores were generated. Participants whose total bias fell within ±3.5% of the criterion value were proficient, with the remainder nonproficient. An independent-samples t-test was used to compare procedural and cognitive observations across experience and proficiency groups. Additional curricular exposure improved performance of skinfold assessment in areas such as the measurement of specific sites (e.g., chest, abdomen, and thigh) and procedural (e.g., landmark identification) and cognitive skills (e.g., complete site explanation). Furthermore, the Think Aloud method is a valuable tool for determining curricular strengths and weaknesses with skinfold assessment and as a pedagogical tool for individual instruction and feedback in the classroom. Copyright © 2015 The American Physiological Society.

  15. A flexible method for residual stress measurement of spray coated layers by laser made hole drilling and SLM based beam steering

    Science.gov (United States)

    Osten, W.; Pedrini, G.; Weidmann, P.; Gadow, R.

    2015-08-01

    A minimally invasive but high-resolution method for residual stress analysis of ceramic coatings made by thermal spray coating, using a pulsed laser for flexible hole drilling, is described. The residual stresses are retrieved by applying the measured surface data in a model-based reconstruction procedure. While the 3D deformations and the profile of the machined area are measured with digital holography, the residual stresses are calculated by FE analysis. To improve the sensitivity of the method, an SLM is applied to control the distribution and the shape of the holes. The paper presents the complete measurement and reconstruction procedure and discusses the advantages and challenges of the new technology.

  16. Simulation-based training for cardiology procedures: Are we any further forward in evidencing real-world benefits?

    Science.gov (United States)

    Harrison, Christopher M; Gosai, Jivendra N

    2017-04-01

    Simulation-based training (SBT) as an educational tool for healthcare professionals continues to grow in sophistication, scope, and usage. A number of studies have demonstrated the utility of the technique, and it is gaining traction as part of the training curricula for the next generation of cardiologists. In this review, we focus on the recent literature on the efficacy of simulation for practical procedures specific to cardiology, focusing on transesophageal echocardiography, cardiac catheterization, coronary angioplasty, and electrophysiology. A number of studies demonstrated improved performance by those trained using SBT compared to other methods, although evidence of this leading to an improvement in patient outcomes remains scarce. We discuss this evidence and its implications for training practice in cardiology. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. New procedure to design low radar cross section near perfect isotropic and homogeneous triangular carpet cloaks.

    Science.gov (United States)

    Sharifi, Zohreh; Atlasbaf, Zahra

    2016-10-01

    A new design procedure for near-perfect triangular carpet cloaks, fabricated from only isotropic homogeneous materials, is proposed. This procedure enables us to fabricate a cloak with simple metamaterials or even without employing metamaterials. The proposed procedure, together with an invasive weed optimization algorithm, is used to design carpet cloaks based on quasi-isotropic metamaterial structures, Teflon and AN-73. According to the simulation results, the proposed cloaks have good invisibility properties against radar, especially monostatic radar. The procedure is a new method to derive isotropic and homogeneous parameters from transformation optics formulas, so complicated structures are not needed to fabricate the carpet cloaks.

  18. Assessment procedure and probability determination methods of aircraft crash events in siting for nuclear power plants

    International Nuclear Information System (INIS)

    Zheng Qiyan; Zhang Lijun; Huang Weiqi; Yin Qingliao

    2010-01-01

    The assessment procedure for aircraft crash events in siting for nuclear power plants, and the methods of probability determination in the two stages of preliminary screening and detailed evaluation, are introduced in this paper. In addition to general air traffic, airport operations and aircraft in corridors, the probability of aircraft crash from military operations in military airspaces is considered here. (authors)

  19. Methods in Logic Based Control

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg

    1999-01-01

    Design and theory of logic-based control systems: Boolean algebra, Karnaugh maps, the Quine-McCluskey algorithm. Sequential control design. Logic Based Control Method, Cascade Control Method. Implementation techniques: relay, pneumatic, TTL/CMOS, PAL, and PLC- and Soft-PLC implementation. PLC...

  20. Efficiency of performing pulmonary procedures in a shared endoscopy unit: procedure time, turnaround time, delays, and procedure waiting time.

    Science.gov (United States)

    Verma, Akash; Lee, Mui Yok; Wang, Chunhong; Hussein, Nurmalah B M; Selvi, Kalai; Tee, Augustine

    2014-04-01

    The purpose of this study was to assess the efficiency of performing pulmonary procedures in the endoscopy unit of a large teaching hospital. A prospective study from May 20 to July 19, 2013, was designed. The main outcome measures were procedure delays and their reasons, duration of procedural steps starting from the patient's arrival in the endoscopy unit, turnaround time, total case durations, and procedure wait time. A total of 65 procedures were observed. The most common procedure was BAL (61%), followed by TBLB (31%). Overall, procedures for 35 (53.8%) of 65 patients were delayed by ≥30 minutes, 21 of 35 (60%) because of "spillover" of gastrointestinal and surgical cases into the time block of the pulmonary procedure. The time elapsed between the end of a pulmonary procedure and the start of the next procedure was ≥30 minutes in 8 of 51 (16%) cases. In 18 of 51 (35%) patients there was no next case in the room after completion of the pulmonary procedure. The average idle time of the room between the end of the pulmonary procedure and the start of the next case, or the end of shift at 5:00 PM if there was no next case, was 58 ± 53 minutes. In 17 of 51 (33%) patients the room's idle time was >60 minutes. A total of 52.3% of patients had a wait time >2 days and 11% had a wait time ≥6 days, the reason in 15 of 21 (71%) being unavailability of a slot. Most pulmonary procedures were delayed due to spillover of gastrointestinal and surgical cases into the block time allocated to pulmonary procedures. The most common reason for difficulty in scheduling a pulmonary procedure was slot unavailability, which increased procedure waiting time. Strategies to reduce procedure delays and turnaround times, along with improved scheduling methods, may have a favorable impact on the volume of procedures performed in the unit, thereby optimizing existing resources.

  1. A procedure for multi-objective optimization of tire design parameters

    Directory of Open Access Journals (Sweden)

    Nikola Korunović

    2015-04-01

    The identification of optimal tire design parameters for satisfying different requirements, i.e. tire performance characteristics, plays an essential role in tire design. In order to improve tire performance characteristics, a multi-objective optimization problem must be formulated and solved. This paper presents a multi-objective optimization procedure for determining optimal tire design parameters for simultaneous minimization of strain energy density at two distinct zones inside the tire. It consists of four main stages: pre-analysis, design of experiments, mathematical modeling and multi-objective optimization. The advantage of the proposed procedure is that the multi-objective optimization is based on the Pareto concept, which enables design engineers to obtain a complete set of optimization solutions and choose a suitable tire design. Furthermore, modeling the relationships between tire design parameters and objective functions by multiple regression analysis minimizes computational and modeling effort. The adequacy of the proposed tire design multi-objective optimization procedure has been validated by performing experimental trials based on the finite element method.
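    The Pareto stage described above can be sketched as a simple non-dominated filter for two minimization objectives (e.g. the strain energy densities at the two tire zones); the design points below are hypothetical, not taken from the paper.

```python
# Minimal sketch of Pareto filtering for two minimization objectives,
# e.g. strain energy density at two tire zones (values are hypothetical).

def pareto_front(points):
    """Return the non-dominated subset when minimizing both objectives."""
    front = []
    for p in points:
        # p is dominated if some other q is <= p in both objectives
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

designs = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (3.0, 3.0)]
print(pareto_front(designs))  # the first three are mutually non-dominated
```

    A designer would then pick one point from this front according to which trade-off between the two zones is acceptable.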

  2. Development of a testing methodology for computerized procedure system based on JUnit framework and MFM

    International Nuclear Information System (INIS)

    Qin, Wei

    2004-02-01

    Paper-based procedures (PBPs) and computerized procedure systems (CPSs) are studied to demonstrate that it is necessary to develop a CPS for the Nuclear Power Plant (NPP) Instrumentation and Control (I and C) system. A computerized procedure system is essentially a software system. All the desired and undesired properties of a software system can be described and evaluated as software qualities. Generally, software qualities can be categorized into product quality and process quality. In order to achieve product quality, the process quality of a software system should also be considered and achieved. Characteristics of CPSs are described to analyse the product and process of an example CPS: ImPRO. At the same time, several main product and process issues are analysed from the Verification and Validation (V and V) point of view. It is concluded and suggested that V and V activities can also be regarded as a software development process; this point of view is then applied to the V and V activities of ImPRO as a systematic approach to the testing of ImPRO. To support and realize this approach, suitable testing technologies and testing strategies are suggested based on the JUnit framework and Multi-level Flow Modeling (MFM)

  3. Absolute cosine-based SVM-RFE feature selection method for prostate histopathological grading.

    Science.gov (United States)

    Sahran, Shahnorbanun; Albashish, Dheeb; Abdullah, Azizi; Shukor, Nordashima Abd; Hayati Md Pauzi, Suria

    2018-04-18

    Feature selection (FS) methods are widely used in grading and diagnosing prostate histopathological images. In this context, FS is based on the texture features obtained from the lumen, nuclei, cytoplasm and stroma, all of which are important tissue components. However, it is difficult to represent the high-dimensional textures of these tissue components. To solve this problem, we propose a new FS method that enables the selection of features with minimal redundancy in the tissue components. We categorise tissue images based on the texture of individual tissue components via the construction of a single classifier and also construct an ensemble learning model by merging the values obtained by each classifier. Another issue that arises is overfitting due to the high-dimensional texture of individual tissue components. We propose a new FS method, SVM-RFE(AC), that integrates a Support Vector Machine-Recursive Feature Elimination (SVM-RFE) embedded procedure with an absolute cosine (AC) filter method to prevent redundancy in the selected features of the SVM-RFE and an unoptimised classifier in the AC. We conducted experiments on H&E histopathological prostate and colon cancer images with respect to three prostate classifications, namely benign vs. grade 3, benign vs. grade 4 and grade 3 vs. grade 4. The colon benchmark dataset requires a distinction between grades 1 and 2, which are the most difficult cases to distinguish in the colon domain. The results obtained by both the single and ensemble classification models (which use the product rule as their merging method) confirm that the proposed SVM-RFE(AC) is superior to the other SVM and SVM-RFE-based methods. We developed an FS method based on SVM-RFE and AC and successfully showed that its use enabled the identification of the most crucial texture feature of each tissue component. Thus, it makes possible the distinction between multiple Gleason grades (e.g. grade 3 vs. grade 4) and its performance is far superior to
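    As a rough illustration of the absolute cosine (AC) filtering idea, the sketch below walks features in rank order (as an SVM-RFE ranking would supply them) and drops any feature that is too collinear with one already kept. The 0.9 threshold and the synthetic data are assumptions for illustration, not values from the paper.

```python
import numpy as np

def absolute_cosine_filter(X, ranked_idx, threshold=0.9):
    """Keep features in rank order, skipping any whose absolute cosine
    similarity with an already-selected feature exceeds `threshold`.

    X: (n_samples, n_features); ranked_idx: feature indices, best first.
    The 0.9 threshold is an illustrative choice, not from the paper."""
    selected = []
    for j in ranked_idx:
        f = X[:, j]
        redundant = False
        for k in selected:
            g = X[:, k]
            cos = abs(f @ g) / (np.linalg.norm(f) * np.linalg.norm(g))
            if cos > threshold:
                redundant = True
                break
        if not redundant:
            selected.append(j)
    return selected

rng = np.random.default_rng(0)
a = rng.normal(size=100)
b = rng.normal(size=100)
X = np.column_stack([a, 2.0 * a, b])  # feature 1 duplicates feature 0
print(absolute_cosine_filter(X, [0, 1, 2]))  # → [0, 2]
```

    The scaled copy of feature 0 is rejected as redundant while the independent feature survives, which is the behaviour the AC step is meant to add on top of the RFE ranking.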

  4. Challenging the in-vivo assessment of biomechanical properties of the uterine cervix: A critical analysis of ultrasound based quasi-static procedures.

    Science.gov (United States)

    Maurer, M M; Badir, S; Pensalfini, M; Bajka, M; Abitabile, P; Zimmermann, R; Mazza, E

    2015-06-25

    Measuring the stiffness of the uterine cervix might be useful in the prediction of preterm delivery, a still unsolved health issue of global dimensions. Recently, a number of clinical studies have addressed this topic, proposing quantitative methods for the assessment of the mechanical properties of the cervix. Quasi-static elastography, maximum compressibility using ultrasound and aspiration tests have been applied for this purpose. The results obtained with the different methods seem to provide contradictory information about the physiologic development of cervical stiffness during pregnancy. Simulations and experiments were performed in order to rationalize the findings obtained with ultrasound based, quasi-static procedures. The experimental and computational results clearly illustrate that standardization of quasi-static elastography leads to repeatable strain values, but for different loading forces. Since force cannot be controlled, this current approach does not allow the distinction between a globally soft and stiff cervix. It is further shown that introducing a reference elastomer into the elastography measurement might overcome the problem of force standardization, but a careful mechanical analysis is required to obtain reliable stiffness values for cervical tissue. In contrast, the maximum compressibility procedure leads to a repeatable, semi-quantitative assessment of cervical consistency, due to the nonlinear nature of the mechanical behavior of cervical tissue. The evolution of cervical stiffness in pregnancy obtained with this procedure is in line with data from aspiration tests. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Aerial photogrammetry procedure optimized for micro uav

    Directory of Open Access Journals (Sweden)

    T. Anai

    2014-06-01

    Full Text Available This paper proposes the automatic aerial photogrammetry procedure optimized for Micro UAV that has ability of autonomous flight. The most important goal of our proposed method is the reducing the processing cost for fully automatic reconstruction of DSM from a large amount of image obtained from Micro UAV. For this goal, we have developed automatic corresponding point generation procedure using feature point tracking algorithm considering position and attitude information, which obtained from onboard GPS-IMU integrated on Micro UAV. In addition, we have developed the automatic exterior orientation and registration procedure from the automatic generated corresponding points on each image and position and attitude information from Micro UAV. Moreover, in order to reconstruct precise DSM, we have developed the area base matching process which considering edge information. In this paper, we describe processing flow of our automatic aerial photogrammetry. Moreover, the accuracy assessment is also described. Furthermore, some application of automatic reconstruction of DSM will be desired.

  6. Efficient generalized Golub-Kahan based methods for dynamic inverse problems

    Science.gov (United States)

    Chung, Julianne; Saibaba, Arvind K.; Brown, Matthew; Westman, Erik

    2018-02-01

    We consider efficient methods for computing solutions to and estimating uncertainties in dynamic inverse problems, where the parameters of interest may change during the measurement procedure. Compared to static inverse problems, incorporating prior information in both space and time in a Bayesian framework can become computationally intensive, in part, due to the large number of unknown parameters. In these problems, explicit computation of the square root and/or inverse of the prior covariance matrix is not possible, so we consider efficient, iterative, matrix-free methods based on the generalized Golub-Kahan bidiagonalization that allow automatic regularization parameter and variance estimation. We demonstrate that these methods for dynamic inversion can be more flexible than standard methods and develop efficient implementations that can exploit structure in the prior, as well as possible structure in the forward model. Numerical examples from photoacoustic tomography, space-time deblurring, and passive seismic tomography demonstrate the range of applicability and effectiveness of the described approaches. Specifically, in passive seismic tomography, we demonstrate our approach on both synthetic and real data. To demonstrate the scalability of our algorithm, we solve a dynamic inverse problem with approximately 43 000 measurements and 7.8 million unknowns in under 40 s on a standard desktop.
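    The generalized Golub-Kahan approach builds on the standard Golub-Kahan bidiagonalization; below is a minimal NumPy sketch of the standard iteration, verifying the fundamental relation A V_k = U_{k+1} B_k. The paper's generalized variant additionally works in covariance-weighted inner products, which this sketch omits.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization (no reorthogonalization).
    Returns U (m x k+1), V (n x k), B (k+1 x k) with A @ V = U @ B."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for i in range(k):
        # alpha_i v_i = A^T u_i - beta_i v_{i-1}
        v = A.T @ U[:, i] - (B[i, i - 1] * V[:, i - 1] if i > 0 else 0.0)
        alpha = np.linalg.norm(v)
        V[:, i] = v / alpha
        B[i, i] = alpha
        # beta_{i+1} u_{i+1} = A v_i - alpha_i u_i
        u = A @ V[:, i] - alpha * U[:, i]
        beta = np.linalg.norm(u)
        U[:, i + 1] = u / beta
        B[i + 1, i] = beta
    return U, V, B

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 5))
b = rng.normal(size=8)
U, V, B = golub_kahan(A, b, 3)
print(np.allclose(A @ V, U @ B))  # fundamental GKB relation
```

    In the matrix-free setting of the paper, `A @ v` and `A.T @ u` would be replaced by calls to the forward model and its adjoint, so the prior covariance and forward operator never need to be formed explicitly.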

  7. Human Detection System by Fusing Depth Map-Based Method and Convolutional Neural Network-Based Method

    Directory of Open Access Journals (Sweden)

    Anh Vu Le

    2017-01-01

    In this paper, the depth images and the colour images provided by Kinect sensors are used to enhance the accuracy of human detection. The depth-based human detection method is fast but less accurate. On the other hand, the faster region convolutional neural network-based human detection method is accurate but requires a rather complex hardware configuration. To simultaneously leverage the advantages and relieve the drawbacks of each method, a system with one master and one client is proposed. The final goal is to make a novel Robot Operating System (ROS)-based Perception Sensor Network (PSN) system, which is more accurate and ready for real-time application. The experimental results demonstrate that the proposed method outperforms conventional methods in challenging scenarios.

  8. Next generation of procedural skills curriculum development: Proficiency-based progression

    Directory of Open Access Journals (Sweden)

    Richard M. Satava

    2015-01-01

    Conclusion: The FRS uses a new process (full life-cycle curriculum development with proficiency-based progression) which can be used to develop any quantitative procedural curriculum through the generic templates that have been developed. Such an approach will dramatically decrease the cost, time and effort needed to develop a new specific curriculum, while producing uniformity in approach, interoperability among different curricula and consistency in objective assessment. This process is currently online, open source and freely available, to encourage the adoption of a scholarly and rigorous approach to curriculum development that is flexible enough to be adopted and adapted to most technical skills curriculum needs.

  9. A PTV method based on ultrasound imaging and feature tracking in a low-concentration sediment-laden flow

    Science.gov (United States)

    Ma, Zhimin; Hu, Wenbin; Zhao, Xiaohong; Tao, Weiliang

    2018-02-01

    This study aims to provide a particle tracking velocimetry (PTV) method based on ultrasound imaging and feature tracking in a low-concentration sediment-laden flow. A phased array probe is used to generate 2D ultrasound images at different times. Feature points are then extracted to be tracked instead of the centroids of the particle image. In order to better identify corresponding feature points, each feature is described by an oriented angle and its location. A statistical interpolation procedure is then used to yield the displacement vector on the desired grid points. Finally, a correction procedure is applied because the ultrasound image is acquired sequentially, line by line, through the field of view. A simple test experiment was carried out to evaluate the performance. The ultrasound PTV system was applied to a sediment-laden flow with a low concentration of 1‰ and speeds up to 10 cm/s. In comparison to optical particle image velocimetry (PIV), ultrasound imaging is not limited by the need for optical access. The feature-tracking method has no binarisation and segmentation procedure, which can result in overlapping particles or a serious loss of particle data, and it improves the peak-locking effect and measurement accuracy. Thus, the ultrasound PTV algorithm is a feasible alternative and is significantly more robust against gradients than correlation-based PIV algorithms in a low-concentration sediment-laden fluid.
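    The matching idea, pairing features by location plus oriented angle rather than by raw particle centroids, can be sketched as follows; the cost weighting and the synthetic frames are illustrative assumptions, not the paper's actual criterion.

```python
import numpy as np

def match_features(f1, f2, w_angle=5.0):
    """Match each feature in frame 1 to its best candidate in frame 2.

    f1, f2: arrays of (x, y, angle). Cost combines Euclidean distance and
    orientation difference; the weight w_angle is an illustrative choice.
    Returns per-feature displacement vectors (dx, dy)."""
    disp = []
    for x, y, a in f1:
        d_loc = np.hypot(f2[:, 0] - x, f2[:, 1] - y)
        d_ang = np.abs(f2[:, 2] - a)
        j = np.argmin(d_loc + w_angle * d_ang)
        disp.append((f2[j, 0] - x, f2[j, 1] - y))
    return np.array(disp)

# Synthetic check: every particle shifted by (1.5, 0.5), angles unchanged
frame1 = np.array([[0.0, 0.0, 0.1], [4.0, 1.0, 1.2], [2.0, 5.0, 2.0]])
frame2 = frame1 + np.array([1.5, 0.5, 0.0])
print(match_features(frame1, frame2))  # each row ≈ [1.5, 0.5]
```

    In the full method these sparse displacement vectors would then be interpolated onto a regular grid, which is the role of the statistical interpolation step described above.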

  10. Development of a Web-based CANDU Core Management Procedure Automation System

    International Nuclear Information System (INIS)

    Lee, Sanghoon; Kim, Eunggon; Park, Daeyou; Yeom, Choongsub; Suh, Hyungbum; Kim, Sungmin

    2006-01-01

    The CANDU reactor core needs efficient core management to increase safety, stability and performance as well as to decrease operational cost. The most characteristic feature of CANDU is so-called 'on-power refueling', i.e., there is no shutdown during refueling, in contrast to the PWR. Although on-power refueling increases the efficiency of the plant, it imposes a heavy operational workload and difficulties in real-time operation, such as regulating the power distribution, burnup distribution, LZC statistics, the position of control devices and so on. To enhance CANDU core management, there are several approaches to help the operator and reduce these difficulties; one of them is COMOS (CANDU Core On-line Monitoring System). It was developed as an online core surveillance system based on the standard in-core instrumentation and numerical analysis codes such as RFSP (Reactor Fueling Simulation Program). As the procedure becomes more complex and the number of programs increases, an integrated and cooperative system is required. KHNP and IAE have therefore been developing a new web-based system, called COMPAS (CANDU cOre Management Procedure Automation System), which can support an effective and accurate reactor operational environment. To ensure development of a successful system, several steps of requirements identification have been performed and a Software Requirement Specification (SRS) document was developed. In this paper we emphasize how to keep consistency between the requirements and system products by applying a requirement traceability methodology.

  11. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    Science.gov (United States)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As stated in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects through diffuse-interface models [27]. Simulations of this practical model using numerical methods can be applied for evaluating it. The present paper investigates the solution of the tumor growth model with meshless techniques. The meshless methods are based on the collocation technique and employ multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantages of these choices stem from the natural behavior of meshless approaches: a meshless method can easily be applied to find the solution of partial differential equations in high dimensions using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor growth model. To handle the time variable, two procedures are used: a semi-implicit finite difference method based on the Crank-Nicolson scheme, and explicit Runge-Kutta time integration. The first case gives a linear system of algebraic equations to be solved at each time step. The second case is efficient but conditionally stable. Numerical results are reported to confirm the ability of these techniques for solving two- and three-dimensional tumor-growth equations.

  12. Handbook of radiologic procedures

    International Nuclear Information System (INIS)

    Hedgcock, M.

    1986-01-01

    This book is organized around radiologic procedures, with each discussed from the points of view of indications, contraindications, materials, method of procedure and complications. Covered in this book are: emergency radiology, chest radiology, bone radiology, gastrointestinal radiology, GU radiology, pediatric radiology, computerized tomography, neuroradiology, visceral and peripheral angiography, cardiovascular radiology, nuclear medicine, lymphangiography, and mammography

  13. Towards a generalization procedure for WRF mesoscale wind climatologies

    DEFF Research Database (Denmark)

    Hahmann, Andrea N.; Casso, P.; Campmany, E.

    We present a method for generalizing wind climatologies generated from mesoscale model output (e.g., the Weather Research and Forecasting (WRF) model). The generalization procedure is based on the wind atlas framework of WAsP and KAMM/WAsP and has been used extensively in wind resource assessment at DTU Wind... generalized wind climatologies estimated by the microscale model WAsP and the methodology presented here. For the Danish wind measurements the mean absolute error in the 'raw' wind speeds is 9.2%, while the mean absolute error in the generalized wind speeds is 4.1%. The generalization procedure has been...

  14. A new method to test rock abrasiveness based on physico-mechanical and structural properties of rocks

    Directory of Open Access Journals (Sweden)

    V.N. Oparin

    2015-06-01

    A new method to test rock abrasiveness is proposed, based upon the dependence of rock abrasiveness on structural and physico-mechanical properties. The article describes the procedure for representing the properties that govern rock abrasiveness on a canonical scale by dimensionless components, and the integrated estimation of these properties by a generalized index. The obtained results are compared with known classifications of rock abrasiveness.
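    The canonical-scale idea, mapping heterogeneous properties onto dimensionless components and combining them into one index, can be illustrated with min-max normalization and a weighted mean. The property names, ranges and weights below are hypothetical placeholders, not values from the paper.

```python
# Illustrative sketch: map rock properties onto a dimensionless [0, 1]
# "canonical" scale by min-max normalization, then combine them into a
# single generalized abrasiveness index via a weighted mean. The property
# names, ranges and weights below are hypothetical, not from the paper.

def canonical(value, lo, hi):
    """Normalize a property to a dimensionless component in [0, 1]."""
    return (value - lo) / (hi - lo)

# (measured value, plausible min, plausible max, weight)
properties = {
    "quartz_content_pct": (55.0, 0.0, 100.0, 0.4),
    "ucs_mpa":            (120.0, 0.0, 300.0, 0.35),  # compressive strength
    "grain_size_mm":      (0.8, 0.0, 2.0, 0.25),
}

index = sum(w * canonical(v, lo, hi) for v, lo, hi, w in properties.values())
print(f"generalized abrasiveness index: {index:.3f}")
```

    Because every component is dimensionless, rocks characterized by different measurement units can be ranked on one scale, which is the point of the canonical representation.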

  15. Acidic deposition: State of science and technology. Report 14. Methods for projecting future changes in surface water acid-base chemistry. Final report

    International Nuclear Information System (INIS)

    Thornton, K.W.; Marmorek, D.; Ryan, P.F.; Heltcher, K.; Robinson, D.

    1990-09-01

    The objectives of the report are to: critically evaluate methods for projecting future effects of acidic deposition on surface water acid-base chemistry; review and evaluate techniques and procedures for analyzing projection uncertainty; review procedures for estimating regional lake and stream population attributes; review the U.S. Environmental Protection Agency (EPA) Direct/Delayed Response Project (DDRP) methodology for projecting the effects of acidic deposition on future changes in surface water acid-base chemistry; and present the models, uncertainty estimators, population estimators, and proposed approach selected to project the effects of acidic deposition on future changes in surface water acid-base chemistry in the NAPAP 1990 Integrated Assessment and discuss the selection rationale

  16. Sample processing procedures and radiocarbon dating

    International Nuclear Information System (INIS)

    Svetlik, Ivo; Tomaskova, Lenka; Dreslerova, Dagmar

    2010-01-01

    The article outlines radiocarbon dating routines and highlights the potential and limitations of this method. The authors' institutions have been jointly running a conventional radiocarbon dating laboratory using the international CRL code. A procedure based on the synthesis of benzene is used. Small samples are sent abroad for dating because no AMS instrumentation is available in the Czech Republic so far. Our laboratory plans to introduce routines for the processing of milligram samples and the preparation of graphitized targets for AMS

  17. Primer on consumer marketing research : procedures, methods, and tools

    Science.gov (United States)

    1994-03-01

    The Volpe Center developed a marketing research primer which provides a guide to the approach, procedures, and research tools used by private industry in predicting consumer response. The final two chapters of the primer focus on the challenges of do...

  18. 3.1.SUIT. Draft EA procedure applicable to historical areas active conservation

    DEFF Research Database (Denmark)

    Algreen-Ussing, Gregers; Wedebrunn, Ola

    2002-01-01

    This document is a preliminary draft for an Environmental Impact Assessment procedure. Its aim is to provide draft guidelines for the assessment of likely significant effects of urban development projects on the urban environment, including material assets and cultural heritage. This procedure is intended as a way to propose and ensure an active conservation policy for urban historical areas. It is based on state-of-the-art methods and the knowledge of the experts involved in the SUIT project. It is also based on the guidelines presenting the grid of analysis to be used by stakeholders in a joint...

  19. Human factor analysis related to new symptom based procedures used by control room crews during treatment of emergency states

    International Nuclear Information System (INIS)

    Holy, J.

    1999-01-01

    New symptom-based emergency procedures have been developed for the Dukovany Nuclear Power Plant in the Czech Republic. As part of the verification and validation of the procedures, a specific effort was devoted to a detailed analysis of the procedures from the human factors and human reliability points of view. The course and results of the analysis are discussed in this article. Although the analyzed procedures were developed for one specific plant of the WWER-440/213 type, most of the presented results may be valid for many other procedures recently developed for the semi-automatic control of technological units which are operated under a measurable level of risk. (author)

  20. Three-phase short circuit calculation method based on pre-computed surface for doubly fed induction generator

    Science.gov (United States)

    Ma, J.; Liu, Q.

    2018-02-01

    This paper presents an improved short circuit calculation method, based on a pre-computed surface, to determine the short circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under grid fault. However, existing methods are difficult to apply to the calculation of DFIG short circuit current in engineering practice because of their complexity. A short circuit calculation method based on a pre-computed surface was therefore proposed, by developing the surface of short circuit current as it changes with the calculating impedance and the open circuit voltage. The short circuit currents were then derived taking into account the rotor excitation and the crowbar activation time. Finally, the pre-computed surfaces of short circuit current at different times were established, and the procedure of DFIG short circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.
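
The pre-computed-surface idea — tabulate the short circuit current offline over a grid of calculating impedance and open circuit voltage, then interpolate at run time — can be sketched generically. The grid, the axis names, and the function below are illustrative placeholders, not the paper's data or code.

```python
from bisect import bisect_right

def bilinear_lookup(x, y, xs, ys, table):
    """Interpolate a pre-computed surface table[i][j] defined on the
    sorted axes xs (e.g. calculating impedance) and ys (e.g. open
    circuit voltage) at the query point (x, y)."""
    # Locate the grid cell containing the query point (clamped to the grid).
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    # Fractional position inside the cell.
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    # Weighted average of the four surrounding tabulated values.
    return (table[i][j] * (1 - tx) * (1 - ty)
            + table[i + 1][j] * tx * (1 - ty)
            + table[i][j + 1] * (1 - tx) * ty
            + table[i + 1][j + 1] * tx * ty)
```

In this scheme all of the hard DFIG dynamics (LVRT control, crowbar timing) are absorbed into the tabulation step, so the run-time calculation reduces to a cheap table lookup.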

  1. A fast network solution by the decoupled procedure during short-term dynamic processes in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Popovic, D P; Stefanovic, M D [Nikola Tesla Inst., Belgrade (YU). Power System Dept.

    1990-01-01

    A simple, fast and reliable decoupled procedure for solving the network problems during short-term dynamic processes in power systems is presented. It is based on the Newton-Raphson method applied to the power balance equations, which include the effects of generator saliency and non-impedance loads, with further modifications resulting from the physical properties of the phenomena under study. The good convergence characteristics of the developed procedure are demonstrated, and a comparison is made with the traditional method based on the current equation and the triangularized admittance matrix, using the example of stability analysis of the Yugoslav power grid. (author).
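
The Newton-Raphson step applied to a power balance equation can be illustrated on a toy single-machine case; the scalar formulation and all numbers below are illustrative, not the paper's network model.

```python
import math

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson iteration for f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        dx = f(x) / df(x)
        x -= dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("did not converge")

# Toy power balance: find the rotor angle delta satisfying
# P = (E*V/X) * sin(delta) for E = V = 1.0 p.u., X = 0.5 p.u., P = 1.0 p.u.
E, V, X, P = 1.0, 1.0, 0.5, 1.0
f = lambda d: (E * V / X) * math.sin(d) - P
df = lambda d: (E * V / X) * math.cos(d)
delta = newton_raphson(f, df, x0=0.1)
```

The decoupling discussed in the record amounts to splitting the full Jacobian of such balance equations into weakly coupled blocks, so each Newton step becomes much cheaper than a full solve.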

  2. Population based ranking of frameless CT-MRI registration methods

    Energy Technology Data Exchange (ETDEWEB)

    Opposits, Gabor; Kis, Sandor A.; Tron, Lajos; Emri, Miklos [Debrecen Univ. (Hungary). Dept. of Nuclear Medicine; Berenyi, Ervin [Debrecen Univ. (Hungary). Dept. of Biomedical Laboratory and Imaging Science; Takacs, Endre [Rotating Gamma Ltd., Debrecen (Hungary); Dobai, Jozsef G.; Bognar, Laszlo [Debrecen Univ., Medical Center (Hungary). Dept. of Neurosurgery; Szuecs, Bernadett [ScanoMed Ltd., Debrecen (Hungary)

    2015-07-01

    Clinical practice often requires simultaneous information obtained by two different imaging modalities. Registration algorithms are commonly used for this purpose. Automated procedures are very helpful in cases when the same kind of registration has to be performed on images of a high number of subjects. Radiotherapists would prefer to use the best automated method to assist therapy planning; however, there are no accepted procedures for ranking the different registration algorithms. We were interested in developing a method to measure the population-level performance of CT-MRI registration algorithms by a parameter taking values in the [0,1] interval. Pairs of CT and MRI images were collected from 1051 subjects. The results of an automated registration were corrected manually until a radiologist and an expert neurosurgeon both accepted the result as good. In this way, 1051 registered MRI images were produced by the same pair of experts to be used as gold standards for the evaluation of the performance of other registration algorithms. The Pearson correlation coefficient, mutual information, normalized mutual information, Kullback-Leibler divergence, L{sub 1} norm and square L{sub 2} norm (dis)similarity measures were tested for sensitivity to indicate the extent of (dis)similarity of a pair of individual mismatched images. The square Hellinger distance proved suitable to grade the performance of registration algorithms at the population level, providing developers with a valuable tool to rank algorithms. The developed procedure provides an objective method to find the registration algorithm performing best at the population level out of newly constructed or available preselected ones.
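
The squared Hellinger distance used for the grading can be written down directly for a pair of discrete distributions (for instance, normalized image intensity histograms). The bounded range — 0 for identical distributions, 1 for disjoint support — is the property that makes it usable as a [0,1] score; the histogram construction here is a simplified stand-in for the authors' pipeline.

```python
import math

def squared_hellinger(p, q):
    """Squared Hellinger distance between two discrete distributions
    given as equal-length lists of non-negative weights. The inputs are
    normalized internally; the result lies in [0, 1]."""
    sp, sq = sum(p), sum(q)
    return 0.5 * sum((math.sqrt(a / sp) - math.sqrt(b / sq)) ** 2
                     for a, b in zip(p, q))
```

Averaging such a per-subject score over a large cohort (here, 1051 gold-standard registrations) yields a single population-level figure of merit per algorithm.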

  3. Verification of a primary-to-secondary leaking safety procedure in a nuclear power plant using coloured Petri nets

    International Nuclear Information System (INIS)

    Nemeth, E.; Bartha, T.; Fazekas, Cs.; Hangos, K.M.

    2009-01-01

    This paper deals with formal and simulation-based verification methods for a PRImary-to-SEcondary leaking (abbreviated as PRISE) safety procedure. The PRISE safety procedure controls the draining of the contaminated water in a faulty steam generator when a non-compensable leaking from the primary to the secondary circuit occurs. Because of the discrete nature of the verification, a Coloured Petri Net (CPN) representation is proposed for both the procedure and the plant model. We have proved by using a non-model-based strategy that the PRISE safety procedure is safe: there are no dead markings in the state space, and all transitions are live, being either impartial or fair. Further analysis results have been obtained using a model-based verification approach. We created a simple, low-dimensional, nonlinear dynamic model of the primary circuit in a VVER-type pressurized water nuclear power plant for the purpose of the model-based verification. This is in contrast to the widely used safety analysis that requires an accurate detailed model. Our model also describes the relevant safety procedures, as well as all of the major leaking-type faults. We propose a novel method to transform this model to a CPN form by discretization. The composed plant and PRISE safety procedure system has also been analysed by simulation using CPN analysis tools. We found by the model-based analysis, using both single and multiple faults, that the PRISE safety procedure initiates the draining when the PRISE event occurs, and that no false alarm will be initiated.
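
The dead-marking check mentioned above can be illustrated on an ordinary place/transition net (a coloured net additionally attaches data to tokens). The exhaustive search below is a toy state-space exploration, not the CPN analysis tools used by the authors.

```python
from collections import deque

def dead_markings(transitions, initial):
    """Explore the reachable markings of a plain place/transition net
    and return those in which no transition is enabled (dead markings).
    Each transition is a (consume, produce) pair of {place: count} dicts."""
    def enabled(m, t):
        consume, _ = t
        return all(m.get(p, 0) >= n for p, n in consume.items())

    def fire(m, t):
        consume, produce = t
        m2 = dict(m)
        for p, n in consume.items():
            m2[p] -= n
        for p, n in produce.items():
            m2[p] = m2.get(p, 0) + n
        return m2

    seen, dead = set(), []
    queue = deque([dict(initial)])
    while queue:
        m = queue.popleft()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        successors = [fire(m, t) for t in transitions if enabled(m, t)]
        if not successors:
            dead.append(m)  # no enabled transition: a dead marking
        queue.extend(successors)
    return dead
```

"No dead markings" means the composed plant-and-procedure model can never reach a state from which nothing further can happen — the safety property the authors establish for the PRISE procedure.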

  4. A monthly quality assurance procedure for 3D surface imaging.

    Science.gov (United States)

    Wooten, H Omar; Klein, Eric E; Gokhroo, Garima; Santanam, Lakshmi

    2010-12-21

    A procedure for periodic quality assurance of a video surface imaging system is introduced. AlignRT is a video camera-based patient localization system that captures and compares images of a patient's topography to a DICOM-formatted external contour, then calculates shifts required to accurately reposition the patient. This technical note describes the tools and methods implemented in our department to verify correct and accurate operation of the AlignRT hardware and software components. The procedure described is performed monthly and complements a daily calibration of the system.

  5. A nodal method based on matrix-response method

    International Nuclear Information System (INIS)

    Rocamora Junior, F.D.; Menezes, A.

    1982-01-01

    A nodal method based on the matrix-response method is presented, and its application to spatial gradient problems, such as those that exist in fast reactors near the core-blanket interface, is investigated. (E.G.) [pt

  6. In vitro biofilm formation on resin-based composites after different finishing and polishing procedures.

    Science.gov (United States)

    Cazzaniga, Gloria; Ottobelli, Marco; Ionescu, Andrei C; Paolone, Gaetano; Gherlone, Enrico; Ferracane, Jack L; Brambilla, Eugenio

    2017-12-01

    To evaluate the influence of surface treatments of different resin-based composites (RBCs) on S. mutans biofilm formation. 4 RBCs (microhybrid, nanohybrid, nanofilled, bulk-filled) and 6 finishing-polishing (F/P) procedures (open-air light-curing, light-curing against Mylar strip, aluminum oxide discs, one-step rubber point, diamond bur, multi-blade carbide bur) were evaluated. Surface roughness (SR) (n=5/group), gloss (n=5/group), scanning electron microscopy morphological analysis (SEM), energy-dispersive X-ray spectrometry (EDS) (n=3/group), and S. mutans biofilm formation (n=16/group) were assessed. EDS analysis was repeated after the biofilm assay. A morphological evaluation of S. mutans biofilm was also performed using confocal laser-scanning microscopy (CLSM) (n=2/group). The data were analyzed using Wilcoxon (SR, gloss) and two-way ANOVA with Tukey as post-hoc tests (EDS, biofilm formation). F/P procedures as well as RBCs significantly influenced SR and gloss. While F/P procedures did not significantly influence S. mutans biofilm formation, a significant influence of RBCs on the same parameter was found. Different RBCs showed different surface elemental composition. Both F/P procedures and S. mutans biofilm formation significantly modified this parameter. The tested F/P procedures significantly influenced RBCs surface properties but did not significantly affect S. mutans biofilm formation. The significant influence of the different RBCs tested on S. mutans biofilm formation suggests that material characteristics and composition play a greater role than SR. F/P procedures of RBCs may unexpectedly play a minor role compared to that of the restoration material itself in bacterial colonization. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Piezoelectric Accelerometers Modification Based on the Finite Element Method

    DEFF Research Database (Denmark)

    Liu, Bin; Kriegbaum, B.

    2000-01-01

    The paper describes the modification of piezoelectric accelerometers using a Finite Element (FE) method. Brüel & Kjær Accelerometer Type 8325 is chosen as an example to illustrate the advanced accelerometer development procedure. The deviation between the measurement and FE simulation results...

  8. A simulation based engineering method to support HAZOP studies

    DEFF Research Database (Denmark)

    Enemark-Rasmussen, Rasmus; Cameron, David; Angelo, Per Bagge

    2012-01-01

    the conventional HAZOP procedure. The method systematically generates failure scenarios by considering process equipment deviations with pre-defined failure modes. The effect of failure scenarios is then evaluated using dynamic simulations; in this study, the K-Spice® software was used. The consequences of each failure...

  9. A Procedure for Characterizing the Range of Input Uncertainty Parameters by the Use of FFTBM

    International Nuclear Information System (INIS)

    Petruzzi, A.; Kovtonyuk, A.; Raucci, M.; De Luca, D.; Veronese, F.; D'Auria, F.

    2013-01-01

    In recent years various methodologies have been proposed to evaluate the uncertainty of Best Estimate (BE) code predictions. The method most used at the industrial level is based upon the selection of uncertain input parameters, on assigning their ranges of variation and Probability Distribution Functions (PDFs), and on performing a suitable number of code runs to obtain the combined effect of the variations on the results. A procedure to characterize the variation ranges of the uncertain input parameters is proposed in the paper in place of the usual approach based (mostly) on engineering judgment. The procedure is based on the use of the Fast Fourier Transform Based Method (FFTBM), already part of the Uncertainty Method based on Accuracy Extrapolation (UMAE) and extensively used in several international frameworks. The FFTBM was originally developed to answer questions like 'For how long should improvements be added to the system thermal-hydraulic code model? How many simplifications can be introduced, and how can an objective comparison be conducted?'. The method, easy to understand, convenient to use and user independent, clearly indicates when a simulation needs to be improved. The procedure developed for characterizing the range of uncertain input parameters involves the following main aspects: a) one single input parameter shall not be 'responsible' for the entire error |exp-calc|, except in exceptional situations to be evaluated case by case; b) initial guesses for the maximum and minimum of the variation ranges are based on the usual (adopted) expertise; c) more than one experiment can be used per NPP and per scenario, where highly influential parameters are expected to be the same and the bounding ranges should be considered for the NPP uncertainty analysis; d) a database of suitable uncertain input parameters can be created per NPP and per transient scenario. (authors)
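
The FFTBM accuracy figure is commonly reported as an average amplitude: the ratio of the spectral magnitude of the code-versus-experiment error to that of the experimental signal, so that 0 means a perfect match. The plain-DFT sketch below illustrates that quantity only; it is not the qualified FFTBM implementation, and the function names are illustrative.

```python
import cmath

def dft_magnitudes(signal):
    """Magnitudes of the discrete Fourier transform of a real signal
    (naive O(n^2) DFT, adequate for a small illustration)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def average_amplitude(experimental, calculated):
    """FFTBM-style average amplitude: 0 for a perfect match, growing
    with the spectral magnitude of the calculated-minus-experimental error."""
    error = [c - e for c, e in zip(calculated, experimental)]
    return sum(dft_magnitudes(error)) / sum(dft_magnitudes(experimental))
```

In the proposed procedure, a candidate range for one input parameter is acceptable only if the resulting spread of this measure does not, by itself, account for the whole |exp-calc| discrepancy (aspect a above).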

  10. Development of Facial Rejuvenation Procedures: Thirty Years of Clinical Experience with Face Lifts

    Directory of Open Access Journals (Sweden)

    Byung Jun Kim

    2015-09-01

    Facial rejuvenation procedures can be roughly divided into face lift surgery and nonoperative, less invasive procedures, such as fat grafts, fillers, botulinum toxin injections, thread lifts, or laserbrasion. Face lift surgery or rhytidectomy is the procedure most directly associated with rejuvenation, due to its fundamental ability to restore the anatomical changes caused by aging. Various methods of face lift surgery have been developed over the last hundred years, thanks to advances in the understanding of facial anatomy and the mechanisms of aging, as well as the dedication of innovative surgeons. However, no generally applicable standard method exists, because the condition of each patient is different, and each operative method has advantages and disadvantages. Specific characteristics of the skin of Asians and their skeletal anatomy should be considered when determining the operative method to be used on Asian patients. Plastic surgeons should improve their ability to analyze the original aesthetic properties and problem areas of each patient, drawing on scientific knowledge about the aging process, and they should develop the skills necessary to perform various rejuvenative techniques. In the present article, we reviewed various face lift procedures and the current methods of modified double plane face lift, based on our clinical experience of over 30 years.

  11. Procedure Of Teaching Grammar Using Memory Enhancement

    Directory of Open Access Journals (Sweden)

    Herri Susanto

    2011-11-01

    Teaching grammar has been regarded as a process of understanding from context: a teacher teaches the pupils contextually, not just the rules. However, my own experience is that teaching methods must depend on the purposes of learning grammar. Some people learn grammar to fulfill school syllabus requirements, while others learn grammar for special purposes outside the school syllabus, such as for an entrance test. For these reasons, the methods of teaching grammar should differ. Students who learn grammar based on the school syllabus probably need a longer procedure of learning, one that usually uses contextual teaching through listening, speaking, writing, and reading. Students who learn grammar for a test, however, need a shorter procedure of learning, such as memorizing. I therefore propose giving a workshop on teaching grammar using memory enhancement as an alternative teaching method. This workshop would show the class that grammar can be learnt through a memory enhancement process, i.e. mind maps, music, memory techniques and drills, to boost students' understanding for test preparation.

  12. Modified prepubic TVT-obturator tape procedure versus the conventional method: a preliminary study.

    Science.gov (United States)

    Long, Cheng-Yu; Wu, Ming-Ping; Wang, Chiu-Lin; Lin, Kun-Ling; Liu, Cheng-Min; Wu, Shu-Hui; Juan, Yung-Shun

    2013-12-01

    To compare the efficacy and safety of the modified prepubic tension-free vaginal tape-obturator (PTVT-O) system procedure with the original TVT-O methods. One hundred and ninety women with urodynamic stress incontinence (USI) were included in this study (93 cases in the TVT-O group and 97 in the PTVT-O group). Clinical assessments before and one year after surgery included urinalyses, 1-h pad tests, urodynamic studies, and a personal interview with the overactive bladder symptom score (OABSS) questionnaire. There were no differences between the two groups in mean age, parity, menopausal status, mean operative time and subjective cure rates (P>0.05), but the efficacy of surgery (cure and improvement) in the PTVT-O group was significantly higher than that in the TVT-O group (P=0.038). Complication rates and visual analog scale (VAS) scores were found to be similar (P>0.05). OABSS decreased significantly after surgery in both groups (P<0.05). Our modified procedure is a safe and effective treatment for female USI. It has an advantage over the original TVT-O with better surgical efficacy and comparable postoperative pain, although the follow-up times in this study are different. Copyright © 2013. Published by Elsevier Ireland Ltd.

  13. Multiple position borehole extensometer procedure: Final draft

    International Nuclear Information System (INIS)

    1986-08-01

    The purpose of the Multiple Position Borehole Extensometer Procedure is to provide detailed information for MPBXs installed at the Deaf Smith County salt ESF. This procedure includes design of equipment, installation instructions, instrument locations, measurement requirements, support requirements, quality assurance procedures, and data acquisition requirements. Data reduction procedures are also discussed; however, the relevance of the data is discussed elsewhere in the appropriate test plans. Sufficient detail is provided in this procedure to allow for integrating the requirements of this procedure into both the facility construction and overall underground testing programs; identifying necessary equipment for procurement; determining data acquisition requirements as input to Automatic Data Acquisition System (ADAS) design; providing step-by-step procedures for training personnel as well as for directing field operations; establishing quality assurance (QA) checkpoints and implementation methods; and defining data reduction methods and providing the anticipated accuracy of the system. 11 refs., 14 figs.

  14. Benign Paroxysmal Positional Vertigo after Dental Procedures: A Population-Based Case-Control Study.

    Directory of Open Access Journals (Sweden)

    Tzu-Pu Chang

    Benign paroxysmal positional vertigo (BPPV), the most common type of vertigo in the general population, is thought to be caused by dislodgement of otoliths from otolithic organs into the semicircular canals. In most cases, however, the cause behind the otolith dislodgement is unknown. Dental procedures, one of the most common medical treatments, are considered to be a possible cause of BPPV, although this has yet to be proven. This study is the first nationwide population-based case-control study conducted to investigate the correlation between BPPV and dental manipulation. Patients diagnosed with BPPV between January 1, 2007 and December 31, 2012 were recruited from the National Health Insurance Research Database in Taiwan. We further identified those who had undergone dental procedures within 1 month and within 3 months before the first diagnosis date of BPPV. We also identified the comorbidities of the patients with BPPV, including head trauma, osteoporosis, migraine, hypertension, diabetes, hyperlipidemia and stroke. These variables were then compared to those in age- and gender-matched controls. In total, 768 patients with BPPV and 1536 age- and gender-matched controls were recruited. In the BPPV group, 9.2% of the patients had undergone dental procedures within 1 month before the diagnosis of BPPV. In contrast, only 5.5% of the controls had undergone dental treatment within 1 month before the date at which they were identified (P = 0.001). After adjustments for demographic factors and comorbidities, recent exposure to dental procedures was positively associated with BPPV (adjusted odds ratio 1.77; 95% confidence interval 1.27-2.47). This association was still significant if we expanded the time period from 1 month to 3 months (adjusted odds ratio 1.77; 95% confidence interval 1.39-2.26). Our results demonstrated a correlation between dental procedures and BPPV. The specialists who treat patients with BPPV should consider dental procedures to be a

  15. Benign Paroxysmal Positional Vertigo after Dental Procedures: A Population-Based Case-Control Study.

    Science.gov (United States)

    Chang, Tzu-Pu; Lin, Yueh-Wen; Sung, Pi-Yu; Chuang, Hsun-Yang; Chung, Hsien-Yang; Liao, Wen-Ling

    2016-01-01

    Benign paroxysmal positional vertigo (BPPV), the most common type of vertigo in the general population, is thought to be caused by dislodgement of otoliths from otolithic organs into the semicircular canals. In most cases, however, the cause behind the otolith dislodgement is unknown. Dental procedures, one of the most common medical treatments, are considered to be a possible cause of BPPV, although this has yet to be proven. This study is the first nationwide population-based case-control study conducted to investigate the correlation between BPPV and dental manipulation. Patients diagnosed with BPPV between January 1, 2007 and December 31, 2012 were recruited from the National Health Insurance Research Database in Taiwan. We further identified those who had undergone dental procedures within 1 month and within 3 months before the first diagnosis date of BPPV. We also identified the comorbidities of the patients with BPPV, including head trauma, osteoporosis, migraine, hypertension, diabetes, hyperlipidemia and stroke. These variables were then compared to those in age- and gender-matched controls. In total, 768 patients with BPPV and 1536 age- and gender-matched controls were recruited. In the BPPV group, 9.2% of the patients had undergone dental procedures within 1 month before the diagnosis of BPPV. In contrast, only 5.5% of the controls had undergone dental treatment within 1 month before the date at which they were identified (P = 0.001). After adjustments for demographic factors and comorbidities, recent exposure to dental procedures was positively associated with BPPV (adjusted odds ratio 1.77; 95% confidence interval 1.27-2.47). This association was still significant if we expanded the time period from 1 month to 3 months (adjusted odds ratio 1.77; 95% confidence interval 1.39-2.26). Our results demonstrated a correlation between dental procedures and BPPV. The specialists who treat patients with BPPV should consider dental procedures to be a risk factor.
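
The crude odds ratio behind figures like these comes straight from a 2×2 exposure table. In the sketch below the counts are only reconstructed approximately from the reported percentages (9.2% of 768 cases, 5.5% of 1536 controls), so the result is close to, but not identical with, the adjusted odds ratio of 1.77 in the paper, which additionally controls for comorbidities.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Woolf 95% confidence interval for a
    2x2 table: a/b = exposed/unexposed cases, c/d = exposed/unexposed
    controls."""
    oratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(oratio) - z * se)
    upper = math.exp(math.log(oratio) + z * se)
    return oratio, lower, upper

# Approximate reconstruction: ~71 of 768 cases and ~84 of 1536 controls
# had a dental procedure within 1 month.
o, lo, hi = odds_ratio_ci(71, 768 - 71, 84, 1536 - 84)
```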

  16. New design procedure development of future reactor critical power estimation. (1) Practical design-by-analysis method for BWR critical power design correlation

    International Nuclear Information System (INIS)

    Yamamoto, Yasushi; Mitsutake, Toru

    2007-01-01

    For present BWR fuels, full mock-up thermal-hydraulic tests, such as the critical power measurement test, the pressure drop measurement test and so on, have been needed. However, the full mock-up test requires high costs and a large-scale test facility. At present, there are only a few test facilities in the world able to perform the full mock-up thermal-hydraulic test. Moreover, for future BWRs, the bundle size tends to be larger in order to reduce plant construction costs and minimize the routine check period. For instance, AB1600, an improved ABWR, was proposed by Toshiba, whose bundle size is 1.2 times larger than the conventional BWR fuel size. It is too expensive, and far from realistic, to perform the full mock-up thermal-hydraulic test for such a large fuel bundle. A new design procedure is required to realize large-scale bundle design development, especially for future reactors. Therefore, a new design procedure, the Practical Design-by-Analysis (PDBA) method, has been developed. This new procedure consists of a partial mock-up test and numerical analysis. At present, the subchannel analysis method based on a three-fluid two-phase flow model is the only realistic choice. First, the partial mock-up test is performed, for instance with a 1/4 partial mock-up bundle. Then, the first-step critical power correlation coefficients are evaluated with the measured data. The input data for the subchannel analysis, such as the spacer effect model coefficient, are also estimated from the data. Next, the radial power effect on the critical power of the full bundle size is estimated with the subchannel analysis. Finally, the critical power correlation is modified according to the subchannel analysis results. In the present study, the critical power correlation of the conventional 8x8 BWR fuel was developed with the PDBA method from 4x4 partial mock-up tests and the subchannel analysis code. The accuracy of the estimated critical power was 3.8%. The several themes remain to

  17. A multiscale MD-FE model of diffusion in composite media with internal surface interaction based on numerical homogenization procedure.

    Science.gov (United States)

    Kojic, M; Milosevic, M; Kojic, N; Kim, K; Ferrari, M; Ziemys, A

    2014-02-01

    Mass transport by diffusion within composite materials may depend not only on internal microstructural geometry, but also on the chemical interactions between the transported substance and the material of the microstructure. Retrospectively, there is a gap in methods and theory to connect material microstructure properties with macroscale continuum diffusion characteristics. Here we present a new hierarchical multiscale model for diffusion within composite materials that couples material microstructural geometry and interactions between diffusing particles and the material matrix. This model, which bridges molecular dynamics (MD) and the finite element (FE) method, is employed to construct a continuum diffusion model based on a novel numerical homogenization procedure. The procedure is general and robust for evaluating constitutive material parameters of the continuum model. These parameters include the traditional bulk diffusion coefficients and, additionally, the distances from the solid surface accounting for surface interaction effects. We applied our models to glucose diffusion through the following two geometrical/material configurations: tightly packed silica nanospheres, and a complex fibrous structure surrounding nanospheres. Then, rhodamine 6G diffusion analysis through an agarose gel network was performed, followed by a model validation using our experimental results. The microstructural model, numerical homogenization and continuum model offer a new platform for modeling and predicting mass diffusion through complex biological environments and within composite materials that are used in a wide range of applications, like drug delivery and nanoporous catalysts.
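
Homogenization in its simplest, purely geometric form can be illustrated by the effective diffusivity of layers in series; this classical textbook sketch is only the flavor of the idea and omits the surface-interaction distances that the paper's MD-FE procedure additionally calibrates.

```python
def effective_diffusivity_series(layers):
    """Effective diffusion coefficient of layers stacked in series,
    each given as (thickness, D): D_eff = L_total / sum(L_i / D_i),
    the harmonic (resistance-like) average of the layer diffusivities."""
    total_thickness = sum(thickness for thickness, _ in layers)
    resistance = sum(thickness / d for thickness, d in layers)
    return total_thickness / resistance
```

A full numerical homogenization replaces this closed form with a simulation of the real microstructure, but the output plays the same role: one continuum-scale coefficient summarizing the fine-scale geometry.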

  18. The Reform of the Procedural Religious Court Law Based on Islamic Law in Indonesian Legal System

    Directory of Open Access Journals (Sweden)

    Abdullah Gofar

    2017-07-01

    The history of the development of the religious courts and the inner struggle of Muslims in Indonesia, facing the state's political force in the New Order era, brought forth the religious procedural law. Article 54 of Law No. 7 of 1989 stated that "the applicable law in the Religious Courts is the procedural law applicable in the General Court, except where specifically regulated in this law." Philosophically, Western law, both substantive civil law (Burgerlijke Wetboek) and formal law/civil procedure (HIR and Rbg), was prepared using an individualistic, secular approach, in which the object of a legal dispute was seen as a thing (zaak) of a sheerly material nature. The substantive law in the religious courts, by contrast, is derived from Islamic law, which stems from the philosophical values of Islam. The presence of the Religious Courts within the judiciary in Indonesia therefore still raises problems, including: why is the Western law of civil procedure, which promotes the values of materialism and formal correctness, adopted into religious procedural law, when its philosophical orientation is not aligned with substantive law based on Islamic law; and what efforts are being made toward a reformulation of the procedural law of the religious courts.

  19. On a problematic procedure to manipulate response biases in recognition experiments: the case of "implied" base rates.

    Science.gov (United States)

    Bröder, Arndt; Malejka, Simone

    2017-07-01

    The experimental manipulation of response biases in recognition-memory tests is an important means for testing recognition models and for estimating their parameters. The textbook manipulations for binary-response formats either vary the payoff scheme or the base rate of targets in the recognition test, with the latter being the more frequently applied procedure. However, some published studies reverted to implying different base rates by instruction rather than actually changing them. Aside from unnecessarily deceiving participants, this procedure may lead to cognitive conflicts that prompt response strategies unknown to the experimenter. To test our objection, implied base rates were compared to actual base rates in a recognition experiment followed by a post-experimental interview to assess participants' response strategies. The behavioural data show that recognition-memory performance was estimated to be lower in the implied base-rate condition. The interview data demonstrate that participants used various second-order response strategies that jeopardise the interpretability of the recognition data. We thus advise researchers against substituting actual base rates with implied base rates.
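
The bias effects at issue are conventionally separated from memory sensitivity using signal detection theory. The computation below uses the standard formulas for d' and the criterion c from hit and false-alarm rates; it illustrates the analysis style, not the authors' own code.

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity d' = z(H) - z(F) and response
    criterion c = -(z(H) + z(F)) / 2, where z is the inverse standard
    normal CDF. A liberal bias (e.g. induced by a high implied target
    base rate) gives c < 0; a conservative bias gives c > 0."""
    z = NormalDist().inv_cdf
    zh, zf = z(hit_rate), z(fa_rate)
    return zh - zf, -(zh + zf) / 2
```

A base-rate manipulation is supposed to shift c while leaving d' unchanged; the worry raised in the record is that merely implied base rates can distort the estimated d' as well.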

  20. Procedures monitoring and MAAP analysis

    International Nuclear Information System (INIS)

    May, R.S.

    1991-01-01

    Numerous studies of severe accidents in light water reactors have shown that operator response can play a crucial role in the predicted outcomes of dominant accident scenarios. MAAP provides the capability to specify certain operator actions as input data. However, making reasonable assumptions about the nature and timing of operator response requires substantial knowledge about plant practices and procedures and what they imply for the event being analyzed. The appearance of knowledge-based software technology in the mid-1980s provided a natural format for representing and maintaining procedures as IF-THEN rules. The boiling water reactor (BWR) Emergency Operating Procedures Tracking System (EOPTS) was composed of a rule base of procedures and a dedicated inference engine (problem-solver). Based on the general approach and experience of EOPTS, the authors have developed a prototype procedures monitoring system that reads MAAP transient output files and evaluates the EOP messages and instructions that would be implied during each transient time interval. The prototype system was built using the NEXPERT OBJECT expert system development system, running on a 386-class personal computer with 4 MB of memory. The limited-scope prototype includes a reduced set of BWR6 EOPs evaluated on a coarse time interval, a simple text-based user interface, and a summary-report generator. The prototype, which is limited to batch-mode analysis of MAAP output, is intended to demonstrate the concept and aid in the design of a production system, which will involve a direct link to MAAP and interactive capabilities.

  1. Triptycene-based dianhydrides, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ghanem, Bader; Pinnau, Ingo; Swaidan, Raja

    2015-01-01

    A triptycene-based monomer, a method of making a triptycene-based monomer, a triptycene-based aromatic polyimide, a method of making a triptycene-based aromatic polyimide, methods of using triptycene-based aromatic polyimides, structures incorporating triptycene-based aromatic polyimides, and methods of gas separation are provided. Embodiments of the triptycene-based monomers and triptycene-based aromatic polyimides have high permeabilities and excellent selectivities. Embodiments of the triptycene-based aromatic polyimides have one or more of the following characteristics: intrinsic microporosity, good thermal stability, and enhanced solubility. In an exemplary embodiment, the triptycene-based aromatic polyimides are microporous and have a high BET surface area. In an exemplary embodiment, the triptycene-based aromatic polyimides can be used to form a gas separation membrane.

  2. Triptycene-based dianhydrides, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ghanem, Bader

    2015-12-30

    A triptycene-based monomer, a method of making a triptycene-based monomer, a triptycene-based aromatic polyimide, a method of making a triptycene-based aromatic polyimide, methods of using triptycene-based aromatic polyimides, structures incorporating triptycene-based aromatic polyimides, and methods of gas separation are provided. Embodiments of the triptycene-based monomers and triptycene-based aromatic polyimides have high permeabilities and excellent selectivities. Embodiments of the triptycene-based aromatic polyimides have one or more of the following characteristics: intrinsic microporosity, good thermal stability, and enhanced solubility. In an exemplary embodiment, the triptycene-based aromatic polyimides are microporous and have a high BET surface area. In an exemplary embodiment, the triptycene-based aromatic polyimides can be used to form a gas separation membrane.

  3. An automated Monte-Carlo based method for the calculation of cascade summing factors

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, M.J., E-mail: mark.j.jackson@awe.co.uk; Britton, R.; Davies, A.V.; McLarty, J.L.; Goodwin, M.

    2016-10-21

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ–γ, γ–X, γ–511 and γ–e⁻ coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted. - Highlights: • Versatile method to calculate coincidence summing factors for gamma-spectrometry analysis. • Based solely on ENSDF format nuclear data and detector efficiency characterisations. • Enables generation of a CSF library for any detector, geometry and radionuclide. • Improves measurement accuracy and reduces acquisition times required to meet MDA.

  4. A CFD-based aerodynamic design procedure for hypersonic wind-tunnel nozzles

    Science.gov (United States)

    Korte, John J.

    1993-01-01

    A new procedure which unifies the best of current classical design practices, computational fluid dynamics (CFD), and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind-tunnel nozzles. The new procedure can be used to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been shown to break down. An efficient CFD code, which solves the parabolized Navier-Stokes (PNS) equations using an explicit upwind algorithm, is coupled to a least-squares (LS) optimization procedure. A LS problem is formulated to minimize the difference between the computed flow field and the objective function, consisting of the centerline Mach number distribution and the exit Mach number and flow angle profiles. The aerodynamic lines of the nozzle are defined using a cubic spline, the slopes of which are optimized with the design procedure. The advantages of the new procedure are that it allows full use of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, can be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure. The new procedure is demonstrated by designing two Mach 15 nozzles, a Mach 12 nozzle, and a Mach 18 helium nozzle. The flexibility of the procedure is demonstrated by designing the two Mach 15 nozzles using different constraints, the first nozzle for a fixed length and exit diameter and the second nozzle for a fixed length and throat diameter. The computed flow field for the Mach 15 least squares parabolized Navier-Stokes (LS/PNS) designed nozzle is compared with the classically designed nozzle and demonstrates a significant improvement in the flow expansion process and uniform core region.

  5. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  6. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    Science.gov (United States)

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

    The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was warranted by intraclass correlation coefficients ≥0.73; no significant differences between corresponding mean parameter estimates and predictions of HID rate; and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the worst-estimated parameter by SAAM II and in maintaining all model-parameter CV%. The MATLAB-based procedure was suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
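
    The abstract's alternation of Gauss-Newton and Levenberg-Marquardt steps is specific to the authors' MATLAB code, but the damped Gauss-Newton idea at its core is generic. A minimal Levenberg-Marquardt loop in NumPy, fitting an illustrative two-parameter exponential model (data, model, and starting values are invented, not from the paper):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop: Gauss-Newton steps damped by lam."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        p_new = p - step
        if np.sum(residual(p_new) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5   # improvement: accept, damp less
        else:
            lam *= 10.0                 # no improvement: damp more, retry
    return p

# Illustrative fit of y = a * exp(-b * t) to noise-free synthetic data
t = np.linspace(0.0, 4.0, 40)
y = 2.0 * np.exp(-1.3 * t)
res = lambda p: p[0] * np.exp(-p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(-p[1] * t),
                                 -p[0] * t * np.exp(-p[1] * t)])
p_hat = levenberg_marquardt(res, jac, [1.0, 1.0])
```

    The damping parameter interpolates between a pure Gauss-Newton step (lam → 0) and a small gradient-descent step (large lam), which is what makes the combined scheme robust far from the optimum yet fast near it.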

  7. GLIMM'S METHOD FOR GAS DYNAMICS

    Energy Technology Data Exchange (ETDEWEB)

    Colella, Phillip

    1980-07-01

    We investigate Glimm's method, a method for constructing approximate solutions to systems of hyperbolic conservation laws in one space variable by sampling explicit wave solutions. It is extended to several space variables by operator splitting. We consider two functional problems. 1) We propose a highly accurate form of the sampling procedure, in one space variable, based on the van der Corput sampling sequence. We test the improved sampling procedure numerically in the case of inviscid compressible flow in one space dimension and find that it gives high resolution results both in the smooth parts of the solution and at the discontinuities. 2) We investigate the operator splitting procedure by means of which the multidimensional method is constructed. An O(1) error stemming from the use of this procedure near shocks oblique to the spatial grid is analyzed numerically in the case of the equations for inviscid compressible flow in two space dimensions. We present a hybrid method which eliminates this error, consisting of Glimm's method, used in continuous parts of the flow, and the nonlinear Godunov's method, used in regions where large pressure jumps are generated. The resulting method is seen to be a substantial improvement over either of the component methods for multidimensional calculations.
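
    The van der Corput sequence used for the improved sampling is easy to generate: the n-th term is obtained by reflecting the base-b digits of n about the radix point. A minimal Python version:

```python
def van_der_corput(n, base=2):
    """n-th term of the van der Corput low-discrepancy sequence: reflect
    the base-b digits of n about the radix point (e.g. binary 110 -> 0.011)."""
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

seq = [van_der_corput(n) for n in range(1, 9)]
# base 2: [1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, 1/16]
```

    Successive terms fill the unit interval far more evenly than pseudo-random draws, which is why the sequence sharpens Glimm-type sampling of wave solutions.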

  8. New test procedure to determine fuel's knock resistance; Neues Kraftstoffpruefverfahren zur Bestimmung der Klopffestigkeit

    Energy Technology Data Exchange (ETDEWEB)

    Huber, Karl [Hochschule Ingolstadt (Germany). Thermodynamik und Verbrennungsmotoren; Hauber, Johann; Raba, Andreas [Hochschule Ingolstadt (Germany). Inst. fuer Angewandte Forschung; Nell, Robert [Rofa Laboratory and Process Analyzers, Kritzendorf (Austria). Bereich Produktentwicklung

    2013-07-15

    Knock resistance is one of the most important quality features of gasoline that is determined by standardised motor test procedures. These procedures were developed more than 80 years ago and have been used nearly unchanged since then. During a four-year research project at Ingolstadt University of Applied Sciences, the procedures for determination of octane numbers were analysed in order to develop a new engine-based test method. (orig.)

  9. A fast pulse phase estimation method for X-ray pulsar signals based on epoch folding

    Directory of Open Access Journals (Sweden)

    Xue Mengfan

    2016-06-01

    Full Text Available X-ray pulsar-based navigation (XPNAV is an attractive method for autonomous deep-space navigation in the future. The pulse phase estimation is a key task in XPNAV and its accuracy directly determines the navigation accuracy. State-of-the-art pulse phase estimation techniques either suffer from poor estimation accuracy, or involve the maximization of a generally non-convex objective function, thus resulting in a large computational cost. In this paper, a fast pulse phase estimation method based on epoch folding is presented. The statistical properties of the observed profile obtained through epoch folding are developed. Based on this, we recognize the joint probability distribution of the observed profile as the likelihood function and utilize a fast Fourier transform-based procedure to estimate the pulse phase. Computational complexity of the proposed estimator is analyzed as well. Experimental results show that the proposed estimator significantly outperforms the currently used cross-correlation (CC and nonlinear least squares (NLS estimators, while significantly reducing the computational complexity compared with the NLS and maximum likelihood (ML estimators.
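
    A toy version of an FFT-based phase search illustrates the idea: estimate the cyclic shift of an observed (epoch-folded) profile against a template via circular cross-correlation computed in the frequency domain. The pulse shape, bin count, and shift below are illustrative, not taken from the paper:

```python
import numpy as np

def estimate_phase(template, observed):
    """Estimate the cyclic shift of `observed` relative to `template` via
    circular cross-correlation computed with FFTs (O(N log N))."""
    corr = np.fft.ifft(np.fft.fft(observed) * np.conj(np.fft.fft(template)))
    shift_bins = int(np.argmax(corr.real))
    return shift_bins / len(template)        # phase in fractions of a cycle

n = 256
bins = np.arange(n)
template = np.exp(-0.5 * ((bins - 60) / 6.0) ** 2)   # toy pulse profile
observed = np.roll(template, 40)                     # folded profile, 40 bins late
phase = estimate_phase(template, observed)
```

    The frequency-domain product replaces an O(N²) sliding correlation, which is the source of the speed-up the abstract reports over NLS and ML search.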

  10. Development of efficient time-evolution method based on three-term recurrence relation

    International Nuclear Information System (INIS)

    Akama, Tomoko; Kobayashi, Osamu; Nanbu, Shinkoh

    2015-01-01

    The advantage of the real-time (RT) propagation method is a direct solution of the time-dependent Schrödinger equation which describes frequency properties as well as all dynamics of a molecular system composed of electrons and nuclei in quantum physics and chemistry. Its applications have been limited by computational feasibility, as the evaluation of the time-evolution operator is computationally demanding. In this article, a new efficient time-evolution method based on the three-term recurrence relation (3TRR) was proposed to reduce the time-consuming numerical procedure. The basic formula of this approach was derived by introducing a transformation of the operator using the arcsine function. Since this operator transformation causes a transformation of time, we derived the relation between original and transformed time. The formula was adapted to assess the performance of the RT time-dependent Hartree-Fock (RT-TDHF) method and the time-dependent density functional theory. Compared to the commonly used fourth-order Runge-Kutta method, our new approach decreased the computational time of the RT-TDHF calculation by about a factor of four, showing the 3TRR formula to be an efficient time-evolution method for reducing computational cost
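
    The paper's arcsine-transformed recurrence is specific to the RT propagator, but the computational appeal of any three-term recurrence is generic: each new term costs one multiply-add against the previous two, with no need to rebuild earlier terms. As a stand-in illustration, evaluating a Chebyshev series via the recurrence T_{k+1} = 2x T_k - T_{k-1}:

```python
import numpy as np

def chebyshev_series(x, coeffs):
    """Evaluate sum_k c_k T_k(x) with the three-term recurrence
    T_{k+1} = 2x T_k - T_{k-1}: one multiply-add pair per added term."""
    t_prev, t_curr = np.ones_like(x), x
    total = coeffs[0] * t_prev + coeffs[1] * t_curr
    for c in coeffs[2:]:
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
        total = total + c * t_curr
    return total

x = np.linspace(-1.0, 1.0, 7)
approx = chebyshev_series(x, [1.0, 2.0, 3.0, 4.0])
```

    Recurrence-based propagators apply the same pattern with matrix-vector products in place of the scalar multiplies, which is where the cost advantage over per-step Runge-Kutta evaluations comes from.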

  11. Avoiding thermal striping damage: Experimentally-based design procedures for high-cycle thermal fatigue

    International Nuclear Information System (INIS)

    Betts, C.; Judd, A.M.; Lewis, M.W.J.

    1994-01-01

    In the coolant circuits of a liquid metal cooled reactor (LMR), where there is turbulent mixing of coolant streams at different temperatures, there are temperature fluctuations in the fluid. If an item of the reactor structure is immersed in this fluid it will, because of the good heat transfer from the flowing liquid metal, experience surface temperature fluctuations which will induce dynamic surface strains. It is necessary to design the reactor so that these temperature fluctuations do not, over the life of the plant, cause damage. The purpose of this paper is to describe design procedures to prevent damage of this type. Two such procedures are given, one to prevent the initiation of defects in a nominally defect-free structure or to allow initiation only at the end of the component life, and the other to prevent significant growth of undetectable pre-existing defects of the order of 0.2 to 0.4 mm in depth. Experimental validation of these procedures is described, and the way they can be applied in practice is indicated. To set the scene the paper starts with a brief summary of cases in which damage of this type, or the need to avoid such damage, have had important effects on reactor operation. Structural damage caused by high-cycle thermal fatigue has had a significant adverse influence on the operation of LMRs on several occasions. It is necessary to eliminate the risk of such damage at the design stage. In the absence of detailed knowledge of the temperature history to which it will be subject, an LMR structure can be designed so that, if it is initially free of defects more than 0.1 mm deep, no such defects will be initiated by high-cycle fatigue. This can be done by ensuring that the maximum source temperature difference in the liquid metal is less than a limiting value, which depends on temperature. The limit is very low, however, and likely to be restrictive. This method, by virtue of its safety margin, takes into account pre-existing surface crack

  12. Note: Photopyroelectric measurement of thermal effusivity of transparent liquids by a method free of fitting procedures

    Science.gov (United States)

    Ivanov, R.; Marín, E.; Villa, J.; Hernández Aguilar, C.; Domínguez Pacheco, A.; Hernández Garrido, S.

    2016-02-01

    In a recent paper published in this journal [R. Ivanov et al., Rev. Sci. Instrum. 86, 064902 (2015)], a methodology free of fitting procedures for determining the thermal effusivity of liquids using the electropyroelectric technique was reported. Here the same measurement principle is extended to the well-known photopyroelectric technique. The theoretical and experimental bases of the method are presented and its usefulness is demonstrated with measurements on test samples.

  13. NEW SPECTROPHOTOMETRIC METHOD WITH KMnO 4 FOR ...

    African Journals Online (AJOL)

    The new method, an instrument-based and simple experimental procedure, involves the reaction of the hypochlorite with arsenious oxide (As2O3) (pH 6.5) followed by the coupled reaction of residual As2O3 with permanganate at acidic pH. A titration procedure is described and a spectrophotometric method is designed using ...

  14. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    Science.gov (United States)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series (daily Poaceae pollen concentrations over the period 2006-2014) was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
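
    STL proper iterates LOESS smoothers; as a rough sketch of the trend/seasonal/residual split it performs, here is a moving-average analogue in NumPy (the 365-day period and synthetic series are illustrative, not the Poaceae data):

```python
import numpy as np

def decompose(series, period):
    """Split a series into trend, seasonal, and residual components.
    STL proper iterates LOESS smoothers; this moving-average analogue
    only sketches the idea of the decomposition."""
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")   # centred moving average
    detrended = series - trend
    # Seasonal: average the detrended values at each position in the cycle
    cycle = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(cycle, len(series) // period + 1)[: len(series)]
    residual = series - trend - seasonal
    return trend, seasonal, residual

days = np.arange(365 * 3)
series = 10 + 0.01 * days + 5 * np.sin(2 * np.pi * days / 365)
trend, seasonal, residual = decompose(series, 365)
```

    Once the seasonal and trend components are removed, the residual is the stochastic part that the study fits to temperature and rainfall with PLSR.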

  15. Voxel based statistical analysis method for microPET studies to assess the cerebral glucose metabolism in cat deafness model: comparison to ROI based method

    International Nuclear Information System (INIS)

    Kim, Jin Su; Lee, Jae Sung; Park, Min Hyun; Lee, Jong Jin; Kang, Hye Jin; Lee, Hyo Jeong; Oh, Seung Ha; Kim, Chong Sun; Jung, June Key; Lee, Myung Chul; Lee, Dong Soo; Lim, Sang Moo

    2005-01-01

    Imaging research on the brain of sensory-deprived cats using a small animal PET scanner has gained interest, since abundant information about the sensory system of this animal is available and close examination of the brain is possible due to the larger size of its brain compared with that of the mouse or rat. In this study, we established the procedures for 3D voxel-based statistical analysis (SPM) of FDG PET images of the cat brain, and confirmed them using an ROI-based method. FDG PET scans of 4 normal and 4 deaf cats were acquired for 30 minutes using a microPET R4 scanner. Only the brain cortices were extracted using a masking and threshold method to facilitate spatial normalization. After spatial normalization and smoothing, 3D voxel-wise and ROI-based t-tests were performed to identify the regions with significantly different FDG uptake between the normal and deaf cats. In the ROI analysis, 26 ROIs were drawn on both hemispheres, and the regional mean pixel value in each ROI was normalized to the global mean of the brain. Cat brains were spatially normalized well onto the target brain due to the removal of background activity. When the cerebral glucose metabolism of the deaf cats was compared to that of the normal controls after removing the effects of the global count, the glucose metabolism in the auditory cortex, head of the caudate nucleus, and thalamus in both hemispheres of the deaf cats was significantly lower than that of the controls (P<0.01). No area showed significantly increased metabolism in the deaf cats, even at a higher significance level (P<0.05). The ROI analysis also showed a significant reduction of glucose metabolism in the same regions. This study established and confirmed a method for voxel-based analysis of animal PET data of the cat brain, which showed high localization accuracy and specificity and was useful for examining cerebral glucose metabolism in a cat cortical deafness model
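
    The voxel-wise group comparison at the heart of such an analysis reduces to a two-sample t statistic at every voxel. A minimal NumPy sketch with synthetic 4-subject groups (the array dimensions, effect size, and "hypometabolic" region are invented for illustration, not the study's data):

```python
import numpy as np

def voxelwise_ttest(group_a, group_b):
    """Pooled-variance two-sample t statistic at every voxel.
    Inputs are (subjects, x, y, z) arrays of normalised uptake values."""
    na, nb = len(group_a), len(group_b)
    ma, mb = group_a.mean(axis=0), group_b.mean(axis=0)
    va, vb = group_a.var(axis=0, ddof=1), group_b.var(axis=0, ddof=1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / np.sqrt(pooled * (1.0 / na + 1.0 / nb))

rng = np.random.default_rng(0)
normal = rng.normal(1.0, 0.1, size=(4, 8, 8, 8))
deaf = rng.normal(1.0, 0.1, size=(4, 8, 8, 8))
deaf[:, :2] -= 0.5          # invented hypometabolic region: first two x-slices
t_map = voxelwise_ttest(normal, deaf)
```

    SPM adds spatial normalisation, smoothing, and multiple-comparison control on top of this per-voxel statistic; the thresholded t map is what localises the hypometabolic regions.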

  16. Voxel based statistical analysis method for microPET studies to assess the cerebral glucose metabolism in cat deafness model: comparison to ROI based method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Su; Lee, Jae Sung; Park, Min Hyun; Lee, Jong Jin; Kang, Hye Jin; Lee, Hyo Jeong; Oh, Seung Ha; Kim, Chong Sun; Jung, June Key; Lee, Myung Chul; Lee, Dong Soo [Seoul National University College of Medicine, Seoul (Korea, Republic of); Lim, Sang Moo [KIRAMS, Seoul (Korea, Republic of)

    2005-07-01

    Imaging research on the brain of sensory-deprived cats using a small animal PET scanner has gained interest, since abundant information about the sensory system of this animal is available and close examination of the brain is possible due to the larger size of its brain compared with that of the mouse or rat. In this study, we established the procedures for 3D voxel-based statistical analysis (SPM) of FDG PET images of the cat brain, and confirmed them using an ROI-based method. FDG PET scans of 4 normal and 4 deaf cats were acquired for 30 minutes using a microPET R4 scanner. Only the brain cortices were extracted using a masking and threshold method to facilitate spatial normalization. After spatial normalization and smoothing, 3D voxel-wise and ROI-based t-tests were performed to identify the regions with significantly different FDG uptake between the normal and deaf cats. In the ROI analysis, 26 ROIs were drawn on both hemispheres, and the regional mean pixel value in each ROI was normalized to the global mean of the brain. Cat brains were spatially normalized well onto the target brain due to the removal of background activity. When the cerebral glucose metabolism of the deaf cats was compared to that of the normal controls after removing the effects of the global count, the glucose metabolism in the auditory cortex, head of the caudate nucleus, and thalamus in both hemispheres of the deaf cats was significantly lower than that of the controls (P<0.01). No area showed significantly increased metabolism in the deaf cats, even at a higher significance level (P<0.05). The ROI analysis also showed a significant reduction of glucose metabolism in the same regions. This study established and confirmed a method for voxel-based analysis of animal PET data of the cat brain, which showed high localization accuracy and specificity and was useful for examining cerebral glucose metabolism in a cat cortical deafness model.

  17. The "SAFARI" Method of Collection Study and Cooperative Acquisition for a Multi-Library Cooperative. A Manual of Procedures.

    Science.gov (United States)

    Sinclair, Dorothy

    This document examines the importance and difficulties in resource sharing and acquisition by libraries and introduces the procedures of the Site Appraisal for Area Resources Inventory (SAFARI) system as a method of comparative evaluation of subject collections among a group of libraries. Resource, or collection, sharing offers specific…

  18. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
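
    Of the four nonparametric techniques listed, LOESS is the easiest to sketch: at each evaluation point, fit a weighted linear model to the nearest fraction of the data using tricube weights. A minimal NumPy version (the span and test function are illustrative):

```python
import numpy as np

def loess(x, y, x0, span=0.5):
    """Locally weighted linear regression (LOESS) evaluated at x0:
    fit a line to the nearest `span` fraction of the data, weighted by
    the tricube kernel, and return its prediction at x0."""
    k = max(2, int(np.ceil(span * len(x))))
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]                      # k nearest neighbours
    w = (1.0 - (d[idx] / d[idx].max()) ** 3) ** 3
    X = np.column_stack([np.ones(k), x[idx]])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y[idx]))
    return beta[0] + beta[1] * x0

x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x)                        # noise-free test function
smooth = np.array([loess(x, y, xi, span=0.2) for xi in x])
```

    Because each fit is local, the smoother tracks nonlinear input-output relationships that defeat the linear, rank, and quadratic regressions the abstract contrasts it with.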

  19. Finite difference time domain calculation of three-dimensional phononic band structures using a postprocessing method based on the filter diagonalization

    International Nuclear Information System (INIS)

    Su Xiaoxing; Ma Tianxue; Wang Yuesheng

    2011-01-01

    If the band structure of a three-dimensional (3D) phononic crystal (PNC) is calculated by using the finite difference time domain (FDTD) method combined with the fast Fourier transform (FFT)-based postprocessing method, good results can only be ensured by a sufficiently large number of FDTD iterations. On a common computer platform, the total computation time will be very long. To overcome this difficulty, an excellent harmonic inversion algorithm called the filter diagonalization method (FDM) can be used in the postprocessing to reduce the number of FDTD iterations. However, the low efficiency of the FDM, which occurs when a relatively long time series is given, does not necessarily ensure an effective reduction of the total computation time. In this paper, a postprocessing method based on the FDM is proposed. The main procedure of the method is designed considering the aim to make the time spent on the method itself far less than the corresponding time spent on the FDTD iterations. To this end, the FDTD time series is preprocessed to be shortened significantly before the FDM frequency extraction. The preprocessing procedure is performed with the filter and decimation operations, which are widely used in narrow-band signal processing. Numerical results for a typical 3D solid PNC system show that the proposed postprocessing method can be used to effectively reduce the total computation time of the FDTD calculation of 3D phononic band structures.
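
    The filter-and-decimate preprocessing can be sketched as a windowed-sinc low-pass followed by downsampling. The filter length, decimation factor, and test signal below are illustrative; a production code would use a dedicated DSP routine:

```python
import numpy as np

def lowpass_decimate(x, factor, taps=101):
    """Filter-and-decimate preprocessing: windowed-sinc FIR low-pass with
    cutoff at the new Nyquist rate, then keep every `factor`-th sample."""
    fc = 0.5 / factor                            # cutoff, cycles per sample
    n = np.arange(taps) - (taps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(taps)
    h /= h.sum()                                 # unit gain at DC
    return np.convolve(x, h, mode="same")[::factor]

t = np.arange(4096)
x = np.sin(2 * np.pi * 0.01 * t) + np.sin(2 * np.pi * 0.4 * t)
y = lowpass_decimate(x, 8)   # the 0.4 cycles/sample tone is filtered out
```

    Shortening the FDTD time series this way is what keeps the subsequent FDM frequency extraction cheap relative to the FDTD iterations themselves.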

  20. Finite difference time domain calculation of three-dimensional phononic band structures using a postprocessing method based on the filter diagonalization

    Energy Technology Data Exchange (ETDEWEB)

    Su Xiaoxing [School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044 (China); Ma Tianxue; Wang Yuesheng, E-mail: xxsu@bjtu.edu.cn [Institute of Engineering Mechanics, Beijing Jiaotong University, Beijing 100044 (China)

    2011-10-15

    If the band structure of a three-dimensional (3D) phononic crystal (PNC) is calculated by using the finite difference time domain (FDTD) method combined with the fast Fourier transform (FFT)-based postprocessing method, good results can only be ensured by a sufficiently large number of FDTD iterations. On a common computer platform, the total computation time will be very long. To overcome this difficulty, an excellent harmonic inversion algorithm called the filter diagonalization method (FDM) can be used in the postprocessing to reduce the number of FDTD iterations. However, the low efficiency of the FDM, which occurs when a relatively long time series is given, does not necessarily ensure an effective reduction of the total computation time. In this paper, a postprocessing method based on the FDM is proposed. The main procedure of the method is designed considering the aim to make the time spent on the method itself far less than the corresponding time spent on the FDTD iterations. To this end, the FDTD time series is preprocessed to be shortened significantly before the FDM frequency extraction. The preprocessing procedure is performed with the filter and decimation operations, which are widely used in narrow-band signal processing. Numerical results for a typical 3D solid PNC system show that the proposed postprocessing method can be used to effectively reduce the total computation time of the FDTD calculation of 3D phononic band structures.

  1. An electrical resistivity-based method for investigation of subsurface structure

    Science.gov (United States)

    Alves Meira Neto, A.; Litwin, D.; Troch, P. A. A.; Ferre, T. P. A.

    2017-12-01

    Resolving the spatial distribution of soil porosity within the subsurface is of great importance for understanding flow and transport within heterogeneous media. Additionally, porosity patterns can be associated with the availability of water and carbon dioxide that will drive geochemical reactions and constrain microbiological growth. The use of controlled experimentation has the potential to circumvent problems related to the external and internal variability of natural systems, while also allowing a higher degree of observability. In this study, we suggest an ERT-based method of retrieving porosity fields based on the application of Archie's law associated with an experimental procedure that can be used in laboratory-scale studies. We used a 2 cubic meter soil lysimeter, equipped with 238 electrodes distributed along its walls for testing the method. The lysimeter serves as a scaled-down version of the highly monitored artificial hillslopes at the Landscape Evolution Observatory (LEO) located at Biosphere 2 - University of Arizona. The capability of the ERT system in deriving spatially distributed patterns of porosity with respect to its several sources of uncertainty was numerically evaluated. The results will be used to produce an optimal experimental design and for assessing the reliability of experimental results. This novel approach has the potential to further resolve subsurface heterogeneity within the LEO project, and highlight the use of ERT-derived results for hydro-bio-geochemical studies.
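
    For a fully saturated medium, Archie's law inverts directly for porosity. A one-line sketch (the tortuosity factor a, cementation exponent m, and resistivity values below are generic illustrations, not values calibrated for the lysimeter soil):

```python
def archie_porosity(bulk_resistivity, water_resistivity, a=1.0, m=2.0):
    """Invert Archie's law for a fully saturated medium:
    rho_bulk = a * rho_w * phi**(-m)  =>  phi = (a * rho_w / rho_bulk)**(1/m).
    The tortuosity factor `a` and cementation exponent `m` are generic
    defaults, not values fitted to any particular soil."""
    return (a * water_resistivity / bulk_resistivity) ** (1.0 / m)

phi = archie_porosity(bulk_resistivity=250.0, water_resistivity=10.0)  # ~0.2
```

    Applying this inversion voxel by voxel to an ERT-derived resistivity image is what turns the geophysical measurement into a spatially distributed porosity field.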

  2. Quality assurance methods and procedures for accepting radioactive waste for final storage

    International Nuclear Information System (INIS)

    Wenger, R.

    1992-01-01

    The concept of quality assurance for the final storage of radioactive materials is presented, together with the procedures, characterisation, procedural development and documentation involved. Other topics include the assessment of the material to determine its suitability for final storage and the tests required for transport. 4 figs., 9 refs

  3. Guidelines for the verification and validation of expert system software and conventional software: Evaluation of knowledge base certification methods. Volume 4

    International Nuclear Information System (INIS)

    Miller, L.A.; Hayes, J.E.; Mirsky, S.M.

    1995-03-01

    This report presents the results of the Knowledge Base Certification activity of the expert systems verification and validation (V&V) guideline development project, which is jointly funded by the US Nuclear Regulatory Commission and the Electric Power Research Institute. The ultimate objective is the formulation of guidelines for the V&V of expert systems for use in nuclear power applications. This activity is concerned with the development and testing of various methods for assuring the quality of knowledge bases. The testing procedure used was a behavioral experiment, the first known such evaluation of any type of V&V activity. The value of such experimentation is its capability to provide empirical evidence for -- or against -- the effectiveness of plausible methods in helping people find problems in knowledge bases. The three-day experiment included 20 participants from three nuclear utilities, the Nuclear Regulatory Commission's Technical Training Center, the University of Maryland, EG&G Idaho, and SAIC. The study used two real nuclear expert systems: a boiling water reactor emergency operating procedures tracking system and a pressurized water reactor safety assessment system. Ten participants were assigned to each of the expert systems. All participants were trained in and then used a sequence of four different V&V methods selected as the best and most appropriate for study on the basis of prior evaluation activities. These methods either involved the analysis and tracing of requirements to elements in the knowledge base (requirements grouping and requirements tracing) or involved direct inspection of the knowledge base for various kinds of errors. Half of the subjects within each system group used the best manual variant of the V&V methods (the control group), while the other half were supported by the results of applying real or simulated automated tools to the knowledge bases.
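
    The requirements-tracing method mentioned above can be sketched as a simple consistency check between a requirements list and the knowledge base; every requirement and rule name below is hypothetical:

    ```python
    # Requirements tracing (illustrative sketch): each requirement should
    # trace to at least one knowledge-base rule, and each rule should
    # support at least one requirement.
    requirements = {"R1": "track EOP entry conditions",
                    "R2": "flag containment isolation"}
    trace = {"R1": ["rule_eop_entry"], "R2": []}      # requirement -> rules
    kb_rules = {"rule_eop_entry", "rule_unused_legacy"}

    # requirements with no supporting rule (coverage gaps)
    untraced_reqs = [r for r, rules in trace.items() if not rules]
    # rules not traced to any requirement (candidates for inspection)
    orphan_rules = kb_rules - {rule for rules in trace.values() for rule in rules}
    ```

    Automated tools of the kind given to the experimental group would report both lists to the reviewer rather than requiring a manual inspection of every rule.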

  4. Assessment of patients' skin dose during interventional cardiology procedures

    International Nuclear Information System (INIS)

    Tsapaki, V.; Vardalaki, E.; Kottou, S.; Molfetas, M.; Neofotistou, V.

    2002-01-01

    During the last 30 years the use of Interventional Cardiology (IC) procedures has increased significantly, mainly due to the benefits of a method that offers more accurate diagnosis and treatment along with fewer complications and shorter hospitalization. However, IC procedures are based on the use of x-ray radiation, mostly localized at certain areas of the patient's body and for extended periods of time. Consequently, the patient may receive a high radiation dose, and deterministic effects such as erythema, epilation or even dermal necrosis may be observed. Therefore, the need to reduce radiation dose is highly important. To achieve this, good knowledge of the dose levels delivered to the patient during IC procedures is essential, since radiation effects are known to increase with dose. It is of great interest to know the point where the maximum skin dose (MSD) occurs, since individual sensitivity may vary; MSDs greater than 1 Gy should be recorded. Patient dosimetry during IC procedures is a complex task, since this type of procedure depends on various factors, such as the complexity and severity of the case, different specifications of the x-ray equipment and the patient's physical characteristics. Moreover, the cardiologist's experience plays an important role. For these reasons, the Food and Drug Administration (FDA), the International Commission on Radiological Protection (ICRP) and the World Health Organization (WHO) have published documents on radiation safety and ways to reduce skin injuries during IC procedures. Various methods have been proposed for measuring MSD, such as slow radiotherapy films, thermoluminescent detectors (TLD), scintillation detectors, a Dose-Area Product (DAP) meter, and a combination of DAP and air kerma. A literature review of MSDs measured during IC procedures showed that doses ranged from 300 to 43000 mGy.
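
    The DAP-based approach mentioned above can be illustrated with a deliberately crude sketch: dividing the DAP reading by the field area at the skin plane gives an entrance air-kerma estimate per projection, and the maximum over projections approximates the MSD. The run data are hypothetical, and the formula ignores backscatter, table attenuation and overlapping fields:

    ```python
    def entrance_dose_mGy(dap_gy_cm2, field_area_cm2):
        """Crude entrance-surface air-kerma estimate from a DAP reading:
        kerma (Gy) ~ DAP / field area at the skin plane. Illustrative only;
        real MSD estimation must account for backscatter, attenuation and
        the geometry of overlapping fields."""
        return dap_gy_cm2 / field_area_cm2 * 1000.0   # Gy -> mGy

    # (DAP in Gy*cm^2, field area at the skin in cm^2) per projection
    runs = [(40.0, 100.0), (75.0, 150.0)]
    doses = [entrance_dose_mGy(dap, area) for dap, area in runs]
    msd = max(doses)
    needs_recording = msd > 1000.0   # record MSDs greater than 1 Gy
    ```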

  5. New Human-Machine Interfaces for a Computer-based Procedure System to Reduce Team Errors in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kim, Sa Kil; Sim, Joo Hyun; Lee, Hyun Chul

    2016-01-01

    In this study, we identify emerging types of team errors in the digitalized control rooms of nuclear power plants, such as the APR-1400 main control room in Korea. Most work in the nuclear industry is performed by teams of two or more persons. Even though individual errors can be detected and recovered by qualified colleagues and/or a well-trained team, errors made by the team are seldom easily detected and properly recovered by the team itself. Note that a team is defined as two or more people who interact appropriately with each other; the team is an interdependent aggregate that accomplishes a valuable goal. Organizational errors sometimes increase the likelihood of operator errors through the active failure pathway and, at the same time, enhance the possibility of adverse outcomes through defensive weaknesses. We incorporate crew resource management as a representative approach to dealing with the team factors of human errors, and we suggest a set of crew resource management training procedures for unsafe environments where human errors can have devastating effects. We are developing alternative interfaces against team errors for use with a computer-based procedure system in a digitalized main control room; the computer-based procedure system is a representative feature of the digitalized control room. In this study, we propose new interfaces for the computer-based procedure system to reduce feasible team errors. An effectiveness test is under way to validate whether the new interfaces can reduce team errors during operation with a computer-based procedure system in a digitalized control room.

  6. New Human-Machine Interfaces for a Computer-based Procedure System to Reduce Team Errors in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Sim, Joo Hyun; Lee, Hyun Chul [Korea Atomic Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    In this study, we identify emerging types of team errors in the digitalized control rooms of nuclear power plants, such as the APR-1400 main control room in Korea. Most work in the nuclear industry is performed by teams of two or more persons. Even though individual errors can be detected and recovered by qualified colleagues and/or a well-trained team, errors made by the team are seldom easily detected and properly recovered by the team itself. Note that a team is defined as two or more people who interact appropriately with each other; the team is an interdependent aggregate that accomplishes a valuable goal. Organizational errors sometimes increase the likelihood of operator errors through the active failure pathway and, at the same time, enhance the possibility of adverse outcomes through defensive weaknesses. We incorporate crew resource management as a representative approach to dealing with the team factors of human errors, and we suggest a set of crew resource management training procedures for unsafe environments where human errors can have devastating effects. We are developing alternative interfaces against team errors for use with a computer-based procedure system in a digitalized main control room; the computer-based procedure system is a representative feature of the digitalized control room. In this study, we propose new interfaces for the computer-based procedure system to reduce feasible team errors. An effectiveness test is under way to validate whether the new interfaces can reduce team errors during operation with a computer-based procedure system in a digitalized control room.

  7. Statistical design of mass spectrometry calibration procedures

    International Nuclear Information System (INIS)

    Bayne, C.K.

    1996-11-01

    The main objective of this task was to agree on calibration procedures to estimate the system parameters (i.e., dead-time correction, ion-counting conversion efficiency, and detector efficiency factors) for SAL's new Finnigan MAT-262 mass spectrometer. SAL will use this mass spectrometer in a clean laboratory, opened in December 1995, to measure uranium and plutonium isotopes in environmental samples. The Finnigan MAT-262 has a multi-detector system with seven Faraday cup detectors and one ion counter for the measurement of very small signals (e.g., in the 10^-17 Ampere range). ORNL made preliminary estimates of the system parameters based on SAL's experimental data measured in late 1994, when the Finnigan instrument was relatively new. SAL generated additional data in 1995 to verify the calibration procedures for estimating the dead-time correction factor, the ion-counting conversion factor and the Faraday cup detector efficiency factors. The system parameters estimated from the present data will have to be re-established when the Finnigan MAT-262 is moved to the new clean laboratory. Different methods will be used to analyze the environmental samples than the measurement methods currently in use; for example, the environmental samples will be electroplated on a single filament rather than using the current two-filament system. An outline of the calibration standard operating procedure (SOP) is included.
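
    As one example of the system parameters listed above, a dead-time correction can be sketched with the standard non-paralyzable counting formula (a textbook model, not necessarily the exact form in SAL's SOP; the count rate and dead time below are hypothetical):

    ```python
    def dead_time_correct(observed_cps, tau_s):
        """Non-paralyzable dead-time correction for an ion counter:
        true rate = observed / (1 - observed * tau),
        where tau is the counter dead time in seconds."""
        return observed_cps / (1.0 - observed_cps * tau_s)

    # e.g. 1e5 counts/s observed with a 20 ns dead time: the counter
    # misses ~0.2% of events, so the true rate is slightly higher
    true_rate = dead_time_correct(1.0e5, 20e-9)
    ```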

  8. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    Science.gov (United States)

    Kholeif, S A

    2001-06-01

    A new method belonging to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method applies generally to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. End points calculated from selected experimental titration curves are also compared between the new method and equivalence-point methods such as Gran or Fortuin.
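
    The inverse parabolic interpolation step mentioned above has a closed form: fitting a parabola through three (volume, derivative) points and taking its vertex gives the end-point abscissa analytically. A sketch with hypothetical derivative data:

    ```python
    def parabolic_vertex(x, y):
        """Analytic abscissa of the extremum of the parabola through three
        points (x[i], y[i]) -- the inverse parabolic interpolation step used
        to locate a titration end point from first-derivative values."""
        x1, x2, x3 = x
        y1, y2, y3 = y
        num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
        den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
        return x2 - 0.5 * num / den

    # hypothetical dE/dV values bracketing the inflection of a titration curve:
    # the derivative peaks near V = 10 mL, so the vertex lands just past it
    v_end = parabolic_vertex([9.8, 10.0, 10.2], [40.0, 55.0, 42.0])
    ```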

  9. Dynamic RCS Simulation of a Missile Target Group Based on the High-frequency Asymptotic Method

    Directory of Open Access Journals (Sweden)

    Zhao Tao

    2014-04-01

    Full Text Available To simulate the dynamic Radar Cross Section (RCS) of a missile target group, an efficient RCS prediction approach is proposed based on high-frequency asymptotic theory. The minimal-energy trajectory and coordinate transformations are used to obtain the trajectories of the missile, decoys and roll booster and to establish the dynamic scene for the separation procedure of the target group; the dynamic RCS, including specular reflection, edge diffraction and multiple reflections from the target group, is obtained by the Physical Optics (PO), Equivalent Edge Currents (EEC) and Shooting-and-Bouncing Ray (SBR) methods. Compared with the common interpolation method, the proposed method gives consistent results when the targets in the scene are far from each other and no target is sheltered by another in the incident direction. When the target group is densely distributed and the shelter effect cannot be neglected, the interpolation method is extremely difficult to apply, whereas the proposed method remains successful.
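
    The specular-reflection term that PO computes can be illustrated with the textbook flat-plate result at normal incidence (an illustration of the PO specular contribution only, not the paper's full multi-target, multi-bounce code; the plate size and frequency are hypothetical):

    ```python
    import math

    def flat_plate_rcs_dbsm(area_m2, freq_hz):
        """Physical-optics monostatic RCS of a flat plate at normal incidence:
        sigma = 4 * pi * A**2 / lambda**2, returned in dBsm."""
        lam = 3.0e8 / freq_hz               # wavelength in meters
        sigma = 4.0 * math.pi * area_m2 ** 2 / lam ** 2
        return 10.0 * math.log10(sigma)

    # a 0.5 m^2 plate at 10 GHz (lambda = 3 cm)
    rcs = flat_plate_rcs_dbsm(0.5, 10e9)
    ```

    The strong dependence on wavelength is what makes these high-frequency asymptotic methods appropriate here: the targets are electrically large compared to the radar wavelength.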

  10. Modelling the Determinants of Winning in Public Tendering Procedures Based on the Activity of a Selected Company

    Directory of Open Access Journals (Sweden)

    Maciej Malara

    2012-01-01

    Full Text Available The purpose of this article is to identify the factors influencing the probability of winning in public procurement procedures and to assess the strength of their impact from the perspective of both the bidder and the procurer. The research was conducted using a series of quantitative methods: binary logistic regression, discriminant analysis and cluster analysis. It was based on a sample of public tenders in which the examined company performed the role of a bidder. Thus, the research process was aimed both at identifying the factors of success and at estimating the probability of achieving it, where such probabilities could be obtained. The main idea of this research is to answer questions about the utility of various methods of quantitative analysis for analyzing the determinants of success. Results of the research are presented in the following sequence of sections: characteristics of the examined material, the process of modelling the probability of winning, and evaluation of the quality of the results obtained. (original abstract)
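
    A binary logistic regression of the kind listed above maps tender features to a win probability through the logistic function. A minimal sketch follows; the coefficients and features (relative bid price, number of past wins) are hypothetical, not the article's fitted model:

    ```python
    import math

    def win_probability(price_ratio, past_wins, coef=(-1.0, -3.0, 0.8)):
        """Binary logistic model: P(win) = 1 / (1 + exp(-z)),
        z = b0 + b1 * (price_ratio - 1) + b2 * past_wins.
        price_ratio is the bid divided by the mean competing bid;
        all coefficients here are illustrative placeholders."""
        b0, b1, b2 = coef
        z = b0 + b1 * (price_ratio - 1.0) + b2 * past_wins
        return 1.0 / (1.0 + math.exp(-z))

    # a bid 5% below the average competing price, from a bidder with 2 prior wins
    p = win_probability(price_ratio=0.95, past_wins=2)
    ```

    With the signs chosen above, lowering the relative price or accumulating prior wins raises the estimated win probability, mirroring the kind of determinants the study examines.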

  11. Procedure for the production of PZC based