WorldWideScience

Sample records for research verification trials

  1. Efficient design of clinical trials and epidemiological research: is it possible?

    Science.gov (United States)

    Lauer, Michael S; Gordon, David; Wei, Gina; Pearson, Gail

    2017-08-01

    Randomized clinical trials and large-scale, cohort studies continue to have a critical role in generating evidence in cardiovascular medicine; however, the increasing concern is that ballooning costs threaten the clinical trial enterprise. In this Perspectives article, we discuss the changing landscape of clinical research, and clinical trials in particular, focusing on reasons for the increasing costs and inefficiencies. These reasons include excessively complex design, overly restrictive inclusion and exclusion criteria, burdensome regulations, excessive source-data verification, and concerns about the effect of clinical research conduct on workflow. Thought leaders have called on the clinical research community to consider alternative, transformative business models, including those models that focus on simplicity and leveraging of digital resources. We present some examples of innovative approaches by which some investigators have successfully conducted large-scale, clinical trials at relatively low cost. These examples include randomized registry trials, cluster-randomized trials, adaptive trials, and trials that are fully embedded within digital clinical care or administrative platforms.

  2. An unattended verification station for UF6 cylinders: Field trial findings

    Science.gov (United States)

    Smith, L. E.; Miller, K. A.; McDonald, B. S.; Webster, J. B.; Zalavadia, M. A.; Garner, J. R.; Stewart, S. L.; Branney, S. J.; Todd, L. C.; Deshmukh, N. S.; Nordquist, H. A.; Kulisek, J. A.; Swinhoe, M. T.

    2017-12-01

    In recent years, the International Atomic Energy Agency (IAEA) has pursued innovative techniques and an integrated suite of safeguards measures to address the verification challenges posed by the front end of the nuclear fuel cycle. Among the unattended instruments currently being explored by the IAEA is an Unattended Cylinder Verification Station (UCVS), which could provide automated, independent verification of the declared relative enrichment, 235U mass, total uranium mass, and identification for all declared uranium hexafluoride cylinders in a facility (e.g., uranium enrichment plants and fuel fabrication plants). Under the auspices of the United States and European Commission Support Programs to the IAEA, a project was undertaken to assess the technical and practical viability of the UCVS concept. The first phase of the UCVS viability study was centered on a long-term field trial of a prototype UCVS system at a fuel fabrication facility. A key outcome of the study was a quantitative performance evaluation of two nondestructive assay (NDA) methods being considered for inclusion in a UCVS: Hybrid Enrichment Verification Array (HEVA), and Passive Neutron Enrichment Meter (PNEM). This paper provides a description of the UCVS prototype design and an overview of the long-term field trial. Analysis results and interpretation are presented with a focus on the performance of PNEM and HEVA for the assay of over 200 "typical" Type 30B cylinders, and the viability of an "NDA Fingerprint" concept as a high-fidelity means to periodically verify that material diversion has not occurred.

  3. Measuring Data Quality Through a Source Data Verification Audit in a Clinical Research Setting.

    Science.gov (United States)

    Houston, Lauren; Probst, Yasmine; Humphries, Allison

    2015-01-01

Health data have long been scrutinised in relation to data quality and integrity problems. Currently, no internationally accepted or "gold standard" method exists for measuring data quality and error rates within datasets. We conducted a source data verification (SDV) audit on a prospective clinical trial dataset. An audit plan was applied to conduct 100% manual verification checks on a 10% random sample of participant files. A quality assurance rule was developed whereby, if >5% of data variables were incorrect, a second 10% random sample would be extracted from the trial dataset. Errors were coded as: correct, incorrect (valid or invalid), not recorded or not entered. Audit-1 had a total error of 33% and audit-2 36%. The physiological section was the only audit section to have <5% error. Data not recorded on case report forms had the greatest impact on error calculations. A significant association (p=0.00) was found between audit-1 and audit-2 and whether or not data were deemed correct or incorrect. Our study developed a straightforward method for performing an SDV audit. An audit rule was identified and error coding was implemented. The findings demonstrate that monitoring data quality by an SDV audit can identify data quality and integrity issues within clinical research settings, allowing quality improvements to be made. The authors suggest this approach be implemented in future research.
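The audit design described in this abstract (100% manual checks on a 10% random sample of files, with a second 10% sample triggered when more than 5% of variables are incorrect) can be sketched roughly as follows. This is an illustrative reconstruction only; the function names, the error-code labels and the exact triggering logic are assumptions, and the paper's coding scheme may differ in detail.

```python
import random

# Hypothetical error codes, loosely following the abstract's scheme:
# correct, incorrect (valid/invalid), not recorded, not entered.
ERROR_CODES = {"correct", "incorrect_valid", "incorrect_invalid",
               "not_recorded", "not_entered"}

def audit_sample(files, fraction=0.10, seed=0):
    """Draw a random sample of participant files for 100% manual SDV."""
    rng = random.Random(seed)
    k = max(1, round(len(files) * fraction))
    return rng.sample(files, k)

def error_rate(coded_variables):
    """Proportion of audited variables coded as anything other than 'correct'."""
    errors = sum(1 for code in coded_variables if code != "correct")
    return errors / len(coded_variables)

def needs_second_audit(coded_variables, threshold=0.05):
    """Quality-assurance rule from the abstract: draw a second 10% random
    sample if more than 5% of audited variables are incorrect."""
    return error_rate(coded_variables) > threshold
```

With this rule, an audit finding 6 erroneous variables out of 100 (6%) would trigger a second sample, while 4 out of 100 (4%) would not.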

  4. Current Research on Containment Technologies for Verification Activities: Advanced Tools for Maintaining Continuity of Knowledge

    International Nuclear Information System (INIS)

    Smartt, H.; Kuhn, M.; Krementz, D.

    2015-01-01

The U.S. National Nuclear Security Administration (NNSA) Office of Non-proliferation and Verification Research and Development currently funds research on advanced containment technologies to support Continuity of Knowledge (CoK) objectives for verification regimes. One effort in this area is the Advanced Tools for Maintaining Continuity of Knowledge (ATCK) project. Recognizing that CoK assurances must withstand potential threats from sophisticated adversaries, and that containment options must therefore keep pace with technology advances, the NNSA research and development on advanced containment tools is an important investment. The two ATCK efforts underway at present address the technical containment requirements for securing access points (loop seals) and protecting defined volumes. Multiple U.S. national laboratories are supporting this project: Sandia National Laboratories (SNL), Savannah River National Laboratory (SRNL), and Oak Ridge National Laboratory (ORNL). SNL and SRNL are developing the "Ceramic Seal", an active loop seal that integrates multiple advanced security capabilities and improved efficiency housed within a small-volume ceramic body. The development includes an associated handheld reader and interface software. Currently at the prototype stage, the Ceramic Seal will undergo a series of tests to determine operational readiness. It will be field tested in a representative verification trial in 2016. ORNL is developing the Whole Volume Containment Seal (WCS), a flexible conductive fabric capable of enclosing various sizes and shapes of monitored items. The WCS includes a distributed impedance measurement system for imaging the fabric surface area and passive tamper-indicating features such as permanent-staining conductive ink. With the expected technology advances from the Ceramic Seal and WCS, the ATCK project takes significant steps in advancing containment technologies to help maintain CoK for various verification regimes.

  5. Neutron spectrometric methods for core inventory verification in research reactors

    International Nuclear Information System (INIS)

    Ellinger, A.; Filges, U.; Hansen, W.; Knorr, J.; Schneider, R.

    2002-01-01

As a consequence of Non-Proliferation Treaty safeguards, inspections are periodically made in nuclear facilities by the IAEA and the EURATOM Safeguards Directorate. The inspection methods are continually being improved. Therefore, the Core Inventory Verification method is being developed as an indirect method for verifying the core inventory and checking the declared operation of research reactors.

  6. Hybrid approaches to clinical trial monitoring: Practical alternatives to 100% source data verification

    Directory of Open Access Journals (Sweden)

    Sourabh De

    2011-01-01

For years, the vast majority of the clinical trial industry has followed the tenet of 100% source data verification (SDV). This has been driven partly by an overcautious approach that links data quality to the extent of monitoring and SDV, and partly by a desire to stay on the safe side of the regulations. The regulations, however, do not state any upper or lower limits for SDV. What they expect from researchers and sponsors are methodologies that ensure data quality. How the industry achieves this is open to innovation: the application of statistical methods, targeted and remote monitoring, real-time reporting, adaptive monitoring schedules, etc. In short, hybrid approaches to monitoring. Coupled with the concepts of optimum monitoring and SDV at site and off-site monitoring techniques, it should be possible to reduce the time required to conduct SDV, leaving more time for other productive activities. Organizations stand to gain directly or indirectly from such savings, whether by diverting the funds back to the R&D pipeline, investing more in technology infrastructure to support large trials, or simply increasing the sample size of trials. Whether this also improves the work-life balance of monitors, who may then need to travel on a less hectic schedule for the same level of quality and productivity, can be predicted only when there is more evidence from the field.

  7. Hybrid approaches to clinical trial monitoring: Practical alternatives to 100% source data verification.

    Science.gov (United States)

    De, Sourabh

    2011-07-01

For years, the vast majority of the clinical trial industry has followed the tenet of 100% source data verification (SDV). This has been driven partly by an overcautious approach that links data quality to the extent of monitoring and SDV, and partly by a desire to stay on the safe side of the regulations. The regulations, however, do not state any upper or lower limits for SDV. What they expect from researchers and sponsors are methodologies that ensure data quality. How the industry achieves this is open to innovation: the application of statistical methods, targeted and remote monitoring, real-time reporting, adaptive monitoring schedules, etc. In short, hybrid approaches to monitoring. Coupled with the concepts of optimum monitoring and SDV at site and off-site monitoring techniques, it should be possible to reduce the time required to conduct SDV, leaving more time for other productive activities. Organizations stand to gain directly or indirectly from such savings, whether by diverting the funds back to the R&D pipeline, investing more in technology infrastructure to support large trials, or simply increasing the sample size of trials. Whether this also improves the work-life balance of monitors, who may then need to travel on a less hectic schedule for the same level of quality and productivity, can be predicted only when there is more evidence from the field.

  8. VAMOS: The verification and monitoring options study: Current research options for in-situ monitoring and verification of contaminant remediation and containment within the vadose zone

    International Nuclear Information System (INIS)

    Betsill, J.D.; Gruebel, R.D.

    1995-09-01

The Verification and Monitoring Options Study Project (VAMOS) was established to identify high-priority options for future vadose-zone environmental research in the areas of in-situ remediation monitoring, post-closure monitoring, and containment emplacement and verification monitoring. VAMOS examined projected needs not currently being met with applied technology in order to develop viable monitoring and verification research options. The study emphasized a compatible-systems approach, reinforcing the need to use compatible components in providing user-friendly site monitoring systems. To identify the needs and research options related to vadose-zone environmental monitoring and verification, a literature search and expert panel forums were conducted. The search covered present drivers for environmental monitoring technology, technology applications, and research efforts. The forums included scientific, academic, industry, and regulatory environmental professionals as well as end users of environmental technology. The experts evaluated current and future monitoring and verification needs, methods for meeting these needs, and viable research options and directions. A variety of high-priority technology development, user facility, and technology guidance research options were developed and presented as an outcome of the literature search and expert panel forums.

  9. Participant verification: Prevention of co-enrolment in clinical trials in ...

    African Journals Online (AJOL)

    Methods. The Medical Research Council (MRC) HIV Prevention Research ... which uses fingerprint-based biometric technology to identify participants. ... and clinical trial sites, with new participant information loaded at first visit to a trial site.

  10. Assessing data quality and the variability of source data verification auditing methods in clinical research settings.

    Science.gov (United States)

    Houston, Lauren; Probst, Yasmine; Martin, Allison

    2018-05-18

Data audits within clinical settings are extensively used as a major strategy to identify errors, monitor study operations and ensure high-quality data. However, clinical trial guidelines are non-specific with regard to the recommended frequency, timing and nature of data audits. The absence of a well-defined data quality definition and of a method to measure error undermines the reliability of data quality assessment. This review aimed to assess the variability of source data verification (SDV) auditing methods used to monitor data quality in a clinical research setting. The scientific databases MEDLINE, Scopus and Science Direct were searched for English-language publications, with no date limits applied. Studies were considered if they included data from a clinical trial or clinical research setting and measured and/or reported data quality using an SDV auditing method. In total, 15 publications were included. The nature and extent of the SDV audit methods in the articles varied widely, depending upon the complexity of the source document, type of study, variables measured (primary or secondary), data audit proportion (3-100%) and collection frequency (6-24 months). Methods for coding, classifying and calculating error were also inconsistent. Transcription errors and inexperienced personnel were the main sources of reported error. Repeated SDV audits using the same dataset demonstrated an approximately 40% improvement in data accuracy and completeness over time. No description was given of what determines poor data quality in clinical trials. A wide range of SDV auditing methods is reported in the published literature, though no uniform SDV auditing method could be identified as "best practice" for clinical trials. Articles on audit methodology are warranted for the development of a standardised SDV auditing method to monitor data quality in clinical research settings. Copyright © 2018. Published by Elsevier Inc.

  11. Maximising the value of combining qualitative research and randomised controlled trials in health research: the QUAlitative Research in Trials (QUART) study--a mixed methods study.

    Science.gov (United States)

    O'Cathain, Alicia; Thomas, Kate J; Drabble, Sarah J; Rudolph, Anne; Goode, Jackie; Hewison, Jenny

    2014-06-01

    Researchers sometimes undertake qualitative research with randomised controlled trials (RCTs) of health interventions. To systematically explore how qualitative research is being used with trials and identify ways of maximising its value to the trial aim of providing evidence of effectiveness of health interventions. A sequential mixed methods study with four components. (1) Database search of peer-reviewed journals between January 2008 and September 2010 for articles reporting the qualitative research undertaken with specific trials, (2) systematic search of database of registered trials to identify studies combining qualitative research and trials, (3) survey of 200 lead investigators of trials with no apparent qualitative research and (4) semistructured telephone interviews with 18 researchers purposively sampled from the first three methods. Qualitative research was undertaken with at least 12% of trials. A large number of articles reporting qualitative research undertaken with trials (n=296) were published between 2008 and 2010. A total of 28% (82/296) of articles reported qualitative research undertaken at the pre-trial stage and around one-quarter concerned drugs or devices. The articles focused on 22 aspects of the trial within five broad categories. Some focused on more than one aspect of the trial, totalling 356 examples. The qualitative research focused on the intervention being trialled (71%, 254/356), the design and conduct of the trial (15%, 54/356), the outcomes of the trial (1%, 5/356), the measures used in the trial (3%, 10/356), and the health condition in the trial (9%, 33/356). The potential value of the qualitative research to the trial endeavour included improving the external validity of trials and facilitating interpretation of trial findings. This value could be maximised by using qualitative research more at the pre-trial stage and reporting findings with explicit attention to the implications for the trial endeavour. During interviews

  12. Participant verification: prevention of co-enrolment in clinical trials in South Africa.

    Science.gov (United States)

    Harichund, C; Haripersad, K; Ramjee, R

    2013-05-15

As KwaZulu-Natal Province is the epicentre of the HIV epidemic both in South Africa (SA) and globally, it is an ideal location to conduct HIV prevention and therapeutic trials. Numerous prevention trials are currently being conducted here; the potential for participant co-enrolment may compromise the validity of these studies and is therefore of great concern. To report the development and feasibility of a digital, fingerprint-based participant identification method to prevent co-enrolment at multiple clinical trial sites. The Medical Research Council (MRC) HIV Prevention Research Unit (HPRU) developed the Biometric Co-enrolment Prevention System (BCEPS), which uses fingerprint-based biometric technology to identify participants. A trial website was used to determine the robustness and usability of the system. After successful testing, the BCEPS was piloted in July 2010 across 7 HPRU clinical research sites. The BCEPS was pre-loaded with study names and clinical trial sites, with new participant information loaded at the first visit to a trial site. We successfully implemented the BCEPS at the 7 HPRU sites. Using the BCEPS, we performed real-time 'flagging' of women who were already enrolled in another study as they entered a trial at an HPRU site and, where necessary, excluded them from participation on site. This system has promise in reducing co-enrolment in clinical trials and represents a valuable tool for future implementation by all groups conducting trials. The MRC is currently co-ordinating this effort with clinical trial sites nationally.

  13. Research Areas - Clinical Trials

    Science.gov (United States)

    Information about NCI programs and initiatives that sponsor, conduct, develop, or support clinical trials, including NCI’s Clinical Trial Network (NCTN) and NCI Community Oncology Research Program (NCORP) initiatives.

  14. Sustaining a verification regime in a nuclear weapon-free world. VERTIC research report no. 4

    International Nuclear Information System (INIS)

    Moyland, S. van

    1999-01-01

Sustaining high levels of commitment to and enthusiasm for the verification regime in a nuclear weapon-free world (NWFW) would be a considerable challenge, but the price of failure would be high. No verification system for a complete ban on a whole class of weapons of mass destruction (WMD) has been in existence long enough to provide a precedent or the requisite experience. Nevertheless, lessons from the International Atomic Energy Agency's (IAEA) nuclear safeguards system are instructive. A potential problem over the long haul is the gradual erosion of the deterrent effect of verification that may result from the continual overlooking of minor instances of non-compliance. Flaws in the verification system must be identified and dealt with early lest they also corrode the system. To achieve this, the verification organisation's inspectors and analytical staff will need sustained support, encouragement, resources and training. In drawing attention to weaknesses, they must be supported by management and at the political level. The leaking of sensitive information, either industrial or military, by staff of the verification regime is a potential problem. 'Managed access' techniques should be constantly examined and improved. The verification organisation and states parties will need to sustain close co-operation with the nuclear and related industries. Frequent review mechanisms must be established. States must invest time and effort to make them effective. Another potential problem is the withering of resources for sustained verification. Verification organisations tend to be pressured by states to cut, or at least cap, costs, even if the verification workload increases. The verification system must be as effective as knowledge and experience allow. The organisation will need to continuously update its scientific methods and technology. This requires in-house resources plus external research and development (R&D).
Universities, laboratories and industry need incentives to

  15. Clinical verification of genetic results returned to research participants: findings from a Colon Cancer Family Registry.

    Science.gov (United States)

    Laurino, Mercy Y; Truitt, Anjali R; Tenney, Lederle; Fisher, Douglass; Lindor, Noralane M; Veenstra, David; Jarvik, Gail P; Newcomb, Polly A; Fullerton, Stephanie M

    2017-11-01

    The extent to which participants act to clinically verify research results is largely unknown. This study examined whether participants who received Lynch syndrome (LS)-related findings pursued researchers' recommendation to clinically verify results with testing performed by a CLIA-certified laboratory. The Fred Hutchinson Cancer Research Center site of the multinational Colon Cancer Family Registry offered non-CLIA individual genetic research results to select registry participants (cases and their enrolled relatives) from 2011 to 2013. Participants who elected to receive results were counseled on the importance of verifying results at a CLIA-certified laboratory. Twenty-six (76.5%) of the 34 participants who received genetic results completed 2- and 12-month postdisclosure surveys; 42.3% of these (11/26) participated in a semistructured follow-up interview. Within 12 months of result disclosure, only 4 (15.4%) of 26 participants reported having verified their results in a CLIA-certified laboratory; of these four cases, all research and clinical results were concordant. Reasons for pursuing clinical verification included acting on the recommendation of the research team and informing future clinical care. Those who did not verify results cited lack of insurance coverage and limited perceived personal benefit of clinical verification as reasons for inaction. These findings suggest researchers will need to address barriers to seeking clinical verification in order to ensure that the intended benefits of returning genetic research results are realized. © 2017 The Authors. Molecular Genetics & Genomic Medicine published by Wiley Periodicals, Inc.

  16. Multilateral disarmament verification

    International Nuclear Information System (INIS)

    Persbo, A.

    2013-01-01

Non-governmental organisations, such as VERTIC (Verification Research, Training and Information Centre), can play an important role in the promotion of multilateral verification. Parties involved in negotiating nuclear arms accords are for the most part keen that such agreements include suitable and robust provisions for monitoring and verification. Progress in multilateral arms control verification is generally painstakingly slow, but from time to time 'windows of opportunity' - that is, moments where ideas, technical feasibility and political interests are aligned at both domestic and international levels - may occur, and we have to be ready, so the preparatory work is very important. In the context of nuclear disarmament, verification (whether bilateral or multilateral) entails an array of challenges, hurdles and potential pitfalls relating to national security, health, safety and even non-proliferation, so preparatory work is complex and time-consuming. A UK-Norway Initiative was established in order to investigate the role that a non-nuclear-weapon state such as Norway could potentially play in the field of nuclear arms control verification. (A.C.)

  17. Improved verification methods for safeguards verifications at enrichment plants

    International Nuclear Information System (INIS)

    Lebrun, A.; Kane, S. C.; Bourva, L.; Poirier, S.; Loghin, N. E.; Langlands, D.

    2009-01-01

The International Atomic Energy Agency (IAEA) has initiated a coordinated research and development programme to improve its verification methods and equipment applicable to enrichment plants. The programme entails several individual projects to meet the objectives of the IAEA Safeguards Model Approach for Gas Centrifuge Enrichment Plants, updated in 2006. Upgrades of verification methods to confirm the absence of HEU (highly enriched uranium) production have been initiated and, in particular, the Cascade Header Enrichment Monitor (CHEM) has been redesigned to reduce its weight and incorporate an electrically cooled germanium detector. Such detectors are also being introduced to improve the attended verification of UF6 cylinders for the verification of the material balance. Data sharing from authenticated operator weighing systems, such as accountancy scales and process load cells, is also being investigated as a cost-efficient and effective safeguards measure combined with unannounced inspections, surveillance and non-destructive assay (NDA) measurement. (authors)

  18. Improved verification methods for safeguards verifications at enrichment plants

    Energy Technology Data Exchange (ETDEWEB)

    Lebrun, A.; Kane, S. C.; Bourva, L.; Poirier, S.; Loghin, N. E.; Langlands, D. [Department of Safeguards, International Atomic Energy Agency, Wagramer Strasse 5, A1400 Vienna (Austria)

    2009-07-01

The International Atomic Energy Agency (IAEA) has initiated a coordinated research and development programme to improve its verification methods and equipment applicable to enrichment plants. The programme entails several individual projects to meet the objectives of the IAEA Safeguards Model Approach for Gas Centrifuge Enrichment Plants, updated in 2006. Upgrades of verification methods to confirm the absence of HEU (highly enriched uranium) production have been initiated and, in particular, the Cascade Header Enrichment Monitor (CHEM) has been redesigned to reduce its weight and incorporate an electrically cooled germanium detector. Such detectors are also being introduced to improve the attended verification of UF6 cylinders for the verification of the material balance. Data sharing from authenticated operator weighing systems, such as accountancy scales and process load cells, is also being investigated as a cost-efficient and effective safeguards measure combined with unannounced inspections, surveillance and non-destructive assay (NDA) measurement. (authors)

  19. Preliminary Research on the Verification Task of North Korea's Plutonium Declaration

    International Nuclear Information System (INIS)

    Kim, Hyun Chul; Park, Il Jin

    2009-01-01

The denuclearization of North Korea seems challenging. North Korea has declared itself a nuclear weapon state by carrying out two nuclear tests, while many other nations, including South Korea, have opposed North Korea's nuclear proliferation. As a result of longstanding negotiations, North Korea provided nearly 19,000 pages of operation history for three Yongbyon nuclear facilities on May 8, 2008, and a 60-page declaration of its nuclear activities and programs on June 26, 2008. However, one should note that declaration documents are by themselves meaningless without verification. To completely dismantle North Korea's nuclear programs, the verification task based on its declaration documents should be performed very thoroughly, considering the possibility of the presence of undeclared nuclear materials and facilities. The verification task for North Korea's nuclear declaration consists of many broad themes, such as the review of declaration documents, interviews with facility operators, sampling in the field, laboratory analysis of the samples, data interpretation, and so on. One of the important themes is to verify North Korea's declared plutonium stockpile by comparing the declaration documents with measurement data obtained from sampling in the field and laboratory analysis. To prepare for the possible future verification of the declared plutonium stockpile, it is worth considering what data can be compared and what samples need to be taken and analyzed. In this study, as preliminary research, we focus on the data to be compared and the samples to be taken and analyzed for plutonium accounting. To give a quantitative example, the nuclear material of North Korea's most recently discharged spent fuel rods from the 5 MWe reactor is analyzed. On June 13, 2009, North Korea declared that more than one-third of the spent fuel rods had been reprocessed.

  20. Nuclear test ban verification

    International Nuclear Information System (INIS)

    Chun, Kin-Yip

    1991-07-01

This report describes verification and its rationale, the basic tasks of seismic verification, the physical basis for earthquake/explosion source discrimination and explosion yield determination, the technical problems pertaining to seismic monitoring of underground nuclear tests, the basic problem-solving strategy deployed by the forensic seismology research team at the University of Toronto, and the scientific significance of the team's research. The research carried out at the University of Toronto has two components: teleseismic verification using P-wave recordings from the Yellowknife Seismic Array (YKA), and regional (close-in) verification using high-frequency Lg and Pn recordings from the Eastern Canada Telemetered Network. Major differences have been found in P-wave attenuation among the propagation paths connecting the YKA listening post with seven active nuclear explosion testing areas in the world. Significant revisions have been made to previously published P-wave attenuation results, leading to more interpretable nuclear explosion source functions. (11 refs., 12 figs.)

  1. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

This book presents state-of-the-art approaches to formal verification that seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking for a broad overview of the spectrum of formal verification techniques, as well as of approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such combination is fruitful in developing a scalable verification methodology for industrial designs. The book outlines both theoretical and practical issues.

  2. Verification of chemistry reference ranges using a simple method in sub-Saharan Africa

    Directory of Open Access Journals (Sweden)

    Irith De Baetselier

    2016-10-01

    Background: Chemistry safety assessments are interpreted by using chemistry reference ranges (CRRs). Verification of CRRs is time consuming and often requires a statistical background. Objectives: We report on an easy and cost-saving method to verify CRRs. Methods: Using a former method introduced by Sigma Diagnostics, three study sites in sub-Saharan Africa (Bondo, Kenya; and Pretoria and Bloemfontein, South Africa) verified the CRRs for hepatic and renal biochemistry assays performed during a clinical trial of HIV antiretroviral pre-exposure prophylaxis. The aspartate aminotransferase/alanine aminotransferase, creatinine and phosphorus results from 10 clinically-healthy participants at the screening visit were used. In the event the CRRs did not pass the verification, new CRRs had to be calculated based on 40 clinically-healthy participants. Results: Within a few weeks, the study sites accomplished verification of the CRRs without additional costs. The aspartate aminotransferase reference ranges for the Bondo, Kenya site and the alanine aminotransferase reference ranges for the Pretoria, South Africa site required adjustment. The phosphorus CRR passed verification and the creatinine CRR required adjustment at every site. The newly-established CRR intervals were narrower than the CRRs used previously at these study sites due to decreases in the upper limits of the reference ranges. As a result, more toxicities were detected. Conclusion: To ensure the safety of clinical trial participants, verification of CRRs should be standard practice in clinical trials conducted in settings where the CRR has not been validated for the local population. This verification method is simple, inexpensive, and can be performed by any medical laboratory.

  3. Verification of chemistry reference ranges using a simple method in sub-Saharan Africa.

    Science.gov (United States)

    De Baetselier, Irith; Taylor, Douglas; Mandala, Justin; Nanda, Kavita; Van Campenhout, Christel; Agingu, Walter; Madurai, Lorna; Barsch, Eva-Maria; Deese, Jennifer; Van Damme, Lut; Crucitti, Tania

    2016-01-01

    Chemistry safety assessments are interpreted by using chemistry reference ranges (CRRs). Verification of CRRs is time consuming and often requires a statistical background. We report on an easy and cost-saving method to verify CRRs. Using a former method introduced by Sigma Diagnostics, three study sites in sub-Saharan Africa, Bondo, Kenya, and Pretoria and Bloemfontein, South Africa, verified the CRRs for hepatic and renal biochemistry assays performed during a clinical trial of HIV antiretroviral pre-exposure prophylaxis. The aspartate aminotransferase/alanine aminotransferase, creatinine and phosphorus results from 10 clinically-healthy participants at the screening visit were used. In the event the CRRs did not pass the verification, new CRRs had to be calculated based on 40 clinically-healthy participants. Within a few weeks, the study sites accomplished verification of the CRRs without additional costs. The aspartate aminotransferase reference ranges for the Bondo, Kenya site and the alanine aminotransferase reference ranges for the Pretoria, South Africa site required adjustment. The phosphorus CRR passed verification and the creatinine CRR required adjustment at every site. The newly-established CRR intervals were narrower than the CRRs used previously at these study sites due to decreases in the upper limits of the reference ranges. As a result, more toxicities were detected. To ensure the safety of clinical trial participants, verification of CRRs should be standard practice in clinical trials conducted in settings where the CRR has not been validated for the local population. This verification method is simple, inexpensive, and can be performed by any medical laboratory.
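    The verification procedure described in these two records lends itself to a short sketch. The abstract does not state the exact Sigma Diagnostics pass/fail rule, so the acceptance threshold below (at most one of the ten screening results outside the range) is an assumption for illustration; the function name and the ALT values are likewise hypothetical.

    ```python
    def verify_crr(results, low, high, max_outside=1):
        """Verify a chemistry reference range (CRR) against screening results.

        The range passes if no more than `max_outside` results fall outside
        [low, high]; otherwise a new CRR must be established from a larger
        cohort (40 participants in the study above). The threshold of 1 is
        an illustrative assumption, not the published rule.
        """
        outside = sum(1 for r in results if r < low or r > high)
        return outside <= max_outside

    # Hypothetical ALT results (U/L) from 10 clinically-healthy participants
    alt = [22, 30, 18, 25, 41, 19, 33, 27, 24, 29]
    print(verify_crr(alt, low=7, high=56))  # → True: the CRR passes verification
    ```

    With one result replaced by a grossly abnormal value the range would still pass under this rule; two or more outliers would trigger recalculation from the larger cohort.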

  4. Achievement report on developing superconductor power applied technologies in fiscal 1999 (2). Research and development of superconductor wire materials, research and development of superconductor power generators, research of total systems, research and development of freezing systems, and verification tests; 1999 nendo chodendo denryoku oyo gijutsu kaihatsu seika hokokusho. 2. Chodendo senzai no kenkyu kaihatsu / chodendo hatsudenki no kenkyu kaihatsu / total system no kenkyu / reito system no kenkyu kaihatsu / jissho shiken

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    With the objective of achieving higher efficiency, higher density, and higher stability in power systems, research and development has been performed on superconductor power generators. This paper summarizes the achievements in fiscal 1999. A verification test was performed on the rotor of an ultra-high-speed responding generator. In a sudden short circuit test using the different-phase charging method, no anomalies such as quench generation or vibration changes were found, verifying the soundness of the generator. In the VVVF actuation test, knowledge was acquired on the actuation method to be used when the ultra-high-speed responding generator is applied to a combined cycle plant. After the verification test was completed, disassembly inspections such as visual checks and non-destructive tests were performed. With regard to the vacuum leakage found in the rotor at very low temperatures, the causes were inferred and countermeasures were discussed based on observation of the weld structures. In the design research, the conceptual design of the 200-MW pilot generator was reviewed to reflect the results of the verification tests on the model generator. At the same time, a trial design was made for a 600-MW target generator. In summarizing the overall research achievements, the achievements and evaluations of the technological issues allotted to each research member were compiled. (NEDO)

  5. HDL to verification logic translator

    Science.gov (United States)

    Gambles, J. W.; Windley, P. J.

    1992-01-01

    The increasingly higher number of transistors possible in VLSI circuits compounds the difficulty of ensuring correct designs. As the number of test cases required to exhaustively simulate a circuit design explodes, a better method is required to confirm the absence of design faults. Formal verification methods provide a way to prove, using logic, that a circuit structure correctly implements its specification. Before verification is accepted by VLSI design engineers, the stand-alone verification tools in use in the research community must be integrated with the CAD tools used by designers. One problem facing the acceptance of formal verification into circuit design methodology is that the structural circuit descriptions used by designers are not appropriate for verification work, and those required for verification lack some of the features needed for design. We offer a solution to this dilemma: an automatic translation from the designers' HDL models into definitions for the higher-order logic (HOL) verification system. The translated definitions become the low-level basis of circuit verification, which in turn increases the designer's confidence in the correctness of higher-level behavioral models.

  6. Employing 3D Virtual Reality and the Unity Game Engine to Support Nuclear Verification Research

    International Nuclear Information System (INIS)

    Patton, T.

    2015-01-01

    This project centres on the development of a virtual nuclear facility environment to assist non-proliferation and nuclear arms control practitioners - including researchers, negotiators, or inspectors - in developing and refining a verification system and secure chain of custody of material or equipment. The platform for creating the virtual facility environment is the Unity 3D game engine. This advanced platform offers both the robust capability and flexibility necessary to support the design goals of the facility. The project also employs Trimble SketchUp and Blender 3D for constructing the model components. The development goal of this phase of the project was to generate a virtual environment that includes basic physics, in which avatars can interact with their environment through actions such as picking up objects, operating vehicles, dismantling a warhead through a spherical representation system, opening/closing doors through a custom security access system, and conducting CCTV surveillance. Initial testing of virtual radiation simulation techniques was also explored in preparation for the next phase of development. Some of the eventual utilities and applications for this platform include: 1. conducting live multi-person exercises of verification activities within a single, shared virtual environment; 2. refining procedures, individual roles, and equipment placement in the contexts of non-proliferation or arms control negotiations; 3. hands-on training for inspectors; and 4. a portable tool/reference for inspectors to use while carrying out inspections. This project was developed under the Multilateral Verification Project, led by the Verification Research, Training and Information Centre (VERTIC) in the United Kingdom, and financed by the Norwegian Ministry of Foreign Affairs. The environment was constructed at the Vienna Center for Disarmament and Non-Proliferation (VCDNP). (author)

  7. Qualitative research within trials: developing a standard operating procedure for a clinical trials unit

    Science.gov (United States)

    2013-01-01

    Background Qualitative research methods are increasingly used within clinical trials to address broader research questions than can be addressed by quantitative methods alone. These methods enable health professionals, service users, and other stakeholders to contribute their views and experiences to the evaluation of healthcare treatments, interventions, or policies, and to influence the design of trials. Qualitative data often contribute information that is better able to inform policy or influence design. Methods Health services researchers, including trialists, clinicians, and qualitative researchers, worked collaboratively to develop a comprehensive portfolio of standard operating procedures (SOPs) for the West Wales Organisation for Rigorous Trials in Health (WWORTH), a clinical trials unit (CTU) at Swansea University, which has recently achieved registration with the UK Clinical Research Collaboration (UKCRC). Although the UKCRC requires a total of 25 SOPs from registered CTUs, WWORTH chose to add an additional qualitative-methods SOP (QM-SOP). Results The QM-SOP defines good practice in designing and implementing qualitative components of trials, while allowing flexibility of approach and method. Its basic principles are that: qualitative researchers should be contributors from the start of trials with qualitative potential; the qualitative component should have clear aims; and the main study publication should report on the qualitative component. Conclusions We recommend that CTUs consider developing a QM-SOP to enhance the conduct of quantitative trials by adding qualitative data and analysis. We judge that this improves the value of quantitative trials, and contributes to the future development of multi-method trials. PMID:23433341

  8. Self-verification and contextualized self-views.

    Science.gov (United States)

    Chen, Serena; English, Tammy; Peng, Kaiping

    2006-07-01

    Whereas most self-verification research has focused on people's desire to verify their global self-conceptions, the present studies examined self-verification with regard to contextualized self-views: views of the self in particular situations and relationships. It was hypothesized that individuals whose core self-conceptions include contextualized self-views should seek to verify these self-views. In Study 1, the more individuals defined the self in dialectical terms, the more their judgments were biased in favor of verifying over nonverifying feedback about a negative, situation-specific self-view. In Study 2, consistent with research on gender differences in the importance of relationships to the self-concept, women but not men showed a similar bias toward feedback about a negative, relationship-specific self-view, a pattern not seen for global self-views. Together, the results support the notion that self-verification occurs for core self-conceptions, whatever form(s) they may take. Individual differences in self-verification and the nature of selfhood and authenticity are discussed.

  9. Clinical verification in homeopathy and allergic conditions.

    Science.gov (United States)

    Van Wassenhoven, Michel

    2013-01-01

    The literature on clinical research in allergic conditions treated with homeopathy includes a meta-analysis of randomised controlled trials (RCT) for hay fever with positive conclusions and two positive RCTs in asthma. Cohort surveys using validated Quality of Life questionnaires have shown improvement in asthma in children, general allergic conditions and skin diseases. Economic surveys have shown positive results in eczema, allergy, seasonal allergic rhinitis, asthma, food allergy and chronic allergic rhinitis. This paper reports clinical verification of homeopathic symptoms in all patients and especially in various allergic conditions in my own primary care practice. For preventive treatments in hay fever patients, Arsenicum album was the most effective homeopathic medicine followed by Nux vomica, Pulsatilla pratensis, Gelsemium, Sarsaparilla, Silicea and Natrum muriaticum. For asthma patients, Arsenicum iodatum appeared most effective, followed by Lachesis, Calcarea arsenicosa, Carbo vegetabilis and Silicea. For eczema and urticaria, Mezereum was most effective, followed by Lycopodium, Sepia, Arsenicum iodatum, Calcarea carbonica and Psorinum. The choice of homeopathic medicine depends on the presence of other associated symptoms and 'constitutional' features. Repertories should be updated by including results of such clinical verifications of homeopathic prescribing symptoms. Copyright © 2012 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  10. Comparing formal verification approaches of interlocking systems

    DEFF Research Database (Denmark)

    Haxthausen, Anne Elisabeth; Nguyen, Hoang Nga; Roggenbach, Markus

    2016-01-01

    The verification of railway interlocking systems is a challenging task, and therefore several research groups have suggested improving this task by using formal methods, but they use different modelling and verification approaches. To advance this research, there is a need to compare these approaches. As a first step towards this, in this paper we suggest a way to compare different formal approaches for verifying designs of route-based interlocking systems, and we demonstrate it on modelling and verification approaches developed within the research groups at DTU/Bremen and at Surrey/Swansea. The focus is on designs that are specified by so-called control tables. The paper can serve as a starting point for further comparative studies. The DTU/Bremen research has been funded by the RobustRailS project granted by Innovation Fund Denmark. The Surrey/Swansea research has been funded by the Safe...

  11. Can self-verification strivings fully transcend the self-other barrier? Seeking verification of ingroup identities.

    Science.gov (United States)

    Gómez, Angel; Seyle, D Conor; Huici, Carmen; Swann, William B

    2009-12-01

    Recent research has demonstrated self-verification strivings in groups, such that people strive to verify collective identities, which are personal self-views (e.g., "sensitive") associated with group membership (e.g., "women"). Such demonstrations stop short of showing that the desire for self-verification can fully transcend the self-other barrier, as in people working to verify ingroup identities (e.g., "Americans are loud") even when such identities are not self-descriptive ("I am quiet and unassuming"). Five studies focus on such ingroup verification strivings. Results indicate that people prefer to interact with individuals who verify their ingroup identities over those who enhance these identities (Experiments 1-5). Strivings for ingroup identity verification were independent of the extent to which the identities were self-descriptive but were stronger among participants who were highly invested in their ingroup identities, as reflected in high certainty of these identities (Experiments 1-4) and high identification with the group (Experiments 1-5). In addition, whereas past demonstrations of self-verification strivings have been limited to efforts to verify the content of identities (Experiments 1-3), the findings also show that people strive to verify the valence of their identities (i.e., the extent to which the identities are valued; Experiments 4 and 5). Self-verification strivings, rather than self-enhancement strivings, appeared to motivate participants' strivings for ingroup identity verification. Links to collective self-verification strivings and social identity theory are discussed.

  12. Guidance for Researchers Developing and Conducting Clinical Trials in Practice-based Research Networks (PBRNs)

    Science.gov (United States)

    Dolor, Rowena J.; Schmit, Kristine M.; Graham, Deborah G.; Fox, Chester H.; Baldwin, Laura Mae

    2015-01-01

    Background There is increased interest nationally in multicenter clinical trials to answer questions about clinical effectiveness, comparative effectiveness, and safety in real-world community settings. Primary care practice-based research networks (PBRNs), comprising community- and/or academically affiliated practices committed to improving medical care for a range of health problems, offer ideal settings for these trials, especially pragmatic clinical trials. However, many researchers are not familiar with working with PBRNs. Methods Experts in practice-based research identified solutions to challenges that researchers and PBRN personnel experience when collaborating on clinical trials in PBRNs. These were organized as frequently asked questions in a draft document presented at a 2013 Agency for Healthcare Research and Quality PBRN conference workshop, revised based on participant feedback, then shared with additional experts from the DARTNet Institute, Clinical and Translational Science Award PBRN, and North American Primary Care Research Group PBRN workgroups for further input and modification. Results The "Toolkit for Developing and Conducting Multi-site Clinical Trials in Practice-Based Research Networks" offers guidance in the areas of recruiting and engaging practices, budgeting, project management, and communication, as well as templates and examples of tools important in developing and conducting clinical trials. Conclusion Ensuring the successful development and conduct of clinical trials in PBRNs requires a highly collaborative approach between academic research and PBRN teams. PMID:25381071

  13. Spent fuel verification options for final repository safeguards in Finland. A study on verification methods, their feasibility and safety aspects

    International Nuclear Information System (INIS)

    Hautamaeki, J.; Tiitta, A.

    2000-12-01

    The verification possibilities of the spent fuel assemblies from the Olkiluoto and Loviisa NPPs and the fuel rods from the research reactor of VTT are considered in this report. The spent fuel assemblies have to be verified at the partial defect level before final disposal into the geologic repository. The rods from the research reactor may be verified at the gross defect level. Developing a measurement system for partial defect verification is a complicated and time-consuming task. Passive High Energy Gamma Emission Tomography and the Fork Detector combined with Gamma Spectrometry are the most promising measurement principles to be developed for this purpose. The whole verification process has to be planned to be as streamlined as possible. An early start in planning the verification and developing the measurement devices is important in order to enable a smooth integration of the verification measurements into the conditioning and disposal process. The IAEA and Euratom have not yet concluded the safeguards criteria for final disposal; for example, criteria connected to the selection of the best place to perform the verification measurements have not yet been concluded. Options for the verification places have been considered in this report. One option for a verification measurement place is the intermediate storage; the other option is the encapsulation plant. Crucial viewpoints are which one offers the best practical possibilities to perform the measurements effectively and which would be the better place from the safeguards point of view. Verification measurements may be needed both in the intermediate storages and in the encapsulation plant. In this report the integrity of the fuel assemblies after the wet intermediate storage period is also assessed, because the assemblies have to withstand the handling operations of the verification measurements. (orig.)

  14. NIH Clinical Research Trials and You

    Science.gov (United States)


  15. Initial verification and validation of RAZORBACK - A research reactor transient analysis code

    Energy Technology Data Exchange (ETDEWEB)

    Talley, Darren G. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report describes the work and results of the initial verification and validation (V&V) of the beta release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This initial V&V effort was intended to confirm that the work to date shows good agreement between simulation and actual ACRR operations, indicating that the subsequent V&V effort for the official release of the code will be successful.
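    The point reactor kinetics equations at the heart of such a code can be illustrated in miniature. The sketch below integrates a single delayed-neutron group with explicit Euler; the parameter values, reactivity step, and time step are illustrative assumptions, not values from the Razorback code, and the coupled fuel heat transfer and coolant equations are omitted.

    ```python
    # Point kinetics, one delayed-neutron group (illustrative parameters):
    #   dn/dt = ((rho - beta) / Lambda) * n + lam * C
    #   dC/dt = (beta / Lambda) * n - lam * C
    beta = 0.0065     # delayed-neutron fraction
    lam = 0.08        # precursor decay constant (1/s)
    Lambda = 1e-4     # neutron generation time (s)
    rho = 0.001       # small step reactivity insertion

    n = 1.0                        # relative power
    C = beta * n / (lam * Lambda)  # precursor level in equilibrium with n
    dt = 1e-5
    for _ in range(int(0.1 / dt)):  # simulate 0.1 s after the step
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dt * dn
        C += dt * dC
    print(f"relative power after 0.1 s: {n:.3f}")  # near the prompt-jump value beta/(beta-rho) ≈ 1.18
    ```

    The fast prompt transient dies out within tens of milliseconds, after which the power rises slowly on the much longer delayed-neutron period; resolving both time scales is why dt must be small here.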

  16. Public information about clinical trials and research.

    Science.gov (United States)

    Plétan, Yannick; Zannad, Faïez; Jaillon, Patrice

    2003-01-01

    Be it to restore the confused image of clinical research in relation to the lay public, or to develop new ways of accruing healthy volunteers or patients for clinical trials, there is a need to draft some guidance on how best to provide information on research. Although the French legal and regulatory armamentarium in this area is essentially liberal, there is currently little-justified reluctance among study sponsors to advertise publicly. A group of academic and pharmaceutical industry researchers, assembled for a workshop, together with regulators, journalists, representatives from ethics committees, social security, patient and health consumer groups and other French institutional bodies, has suggested the following series of recommendations: there is no need for additional legal or regulatory constraints; sponsors should be aware of and make use of direct public information on trials; a 'good practice charter' on public communication about clinical trials should be developed; all professionals should be involved in this communication platform; communication in the patient's immediate vicinity should be preferred (primary-care physician, local press); clinical databases and websites accessible to professionals, but also to patients and non-professionals, should be developed; genuine instruction on clinical trials for physicians and health professionals unfamiliar with such trials should be developed and disseminated; media groups should receive at least some training in the fundamentals of clinical research.

  17. Research on key technology of the verification system of steel rule based on vision measurement

    Science.gov (United States)

    Jia, Siyuan; Wang, Zhong; Liu, Changjie; Fu, Luhua; Li, Yiming; Lu, Ruijun

    2018-01-01

    The steel rule plays an important role in quantity transmission. However, the traditional verification method for steel rules, based on manual operation and reading, results in low precision and low efficiency. A machine-vision-based verification system for steel rules is designed with reference to JJG 1-1999, Verification Regulation of Steel Rule [1]. What differentiates this system is that it uses a new calibration method for the pixel equivalent and decontaminates the surface of the steel rule. Experiments show that these two methods fully meet the requirements of the verification system. Measurement results strongly demonstrate that these methods not only meet the precision requirements of the verification regulation, but also improve the reliability and efficiency of the verification system.
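    The pixel equivalent mentioned above, i.e., the physical length represented by one image pixel, can be sketched as follows. The record does not detail the authors' new calibration method, so this is only the basic idea: image a reference feature of known length and divide by its pixel span. The gauge length, point coordinates, and function name are hypothetical.

    ```python
    def pixel_equivalent(known_length_mm, p1, p2):
        """mm per pixel, from two image points spanning a feature of known length."""
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        pixels = (dx * dx + dy * dy) ** 0.5  # pixel distance between the points
        return known_length_mm / pixels

    # A 10 mm reference gauge spans 800 px in the image -> 0.0125 mm/px
    k = pixel_equivalent(10.0, (100, 50), (900, 50))
    # A graduation interval measured as 160 px then corresponds to 2.0 mm
    print(round(k * 160, 3))  # → 2.0
    ```

    Once the pixel equivalent is calibrated, any pixel distance measured on the rule image converts directly to millimetres for comparison against the regulation's tolerances.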

  18. Clinical trials: bringing research to the bedside.

    Science.gov (United States)

    Arvay, C A

    1991-02-01

    Over the years, clinical trials with their structured treatment plans and multicenter involvement have been instrumental in developing new treatments and establishing standard of care therapy. While clinical trials strive to advance medical knowledge, they provide scientifically sound, state of the art care and their use should be increased. The Brain Tumor Cooperative Group, one such NCI-sponsored cooperative group, has been the primary group for the treatment of malignant gliomas. As the field of neuro-oncology expands, the neuroscience nurse needs to develop an understanding of clinical trials and their operation. The nurse is in an optimal position to support medical research and the research participant.

  19. Compressive sensing using optimized sensing matrix for face verification

    Science.gov (United States)

    Oey, Endra; Jeffry; Wongso, Kelvin; Tommy

    2017-12-01

    Biometrics appears as one of the solutions capable of solving problems that occur with password-based data access; for example, passwords may be forgotten, and it is hard to recall many different passwords. With biometrics, the physical characteristics of a person can be captured and used in the identification process. In this research, facial biometrics is used in the verification process to determine whether a user has the authority to access the data or not. Facial biometrics is chosen for its low-cost implementation and because it generates quite accurate results for user identification. The face verification system adopted in this research uses the Compressive Sensing (CS) technique, which aims to reduce the dimension size as well as encrypt the data of the facial test image, where the image is represented as sparse signals. The encrypted data can be reconstructed using a sparse coding algorithm. Two types of sparse coding, namely Orthogonal Matching Pursuit (OMP) and Iteratively Reweighted Least Squares-ℓp (IRLS-ℓp), are compared in this face verification research. The reconstructed sparse signals are then used to compute the Euclidean norm against the sparse signal of the user previously saved in the system, to determine the validity of the facial test image. The accuracies obtained in this research are 99% for IRLS with a face verification response time of 4.917 seconds and 96.33% for OMP with a response time of 0.4046 seconds using a non-optimized sensing matrix, and 99% for IRLS with a response time of 13.4791 seconds and 98.33% for OMP with a response time of 3.1571 seconds using an optimized sensing matrix.
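    Of the two sparse-coding algorithms compared in this record, OMP is the simpler to sketch. The implementation below is a generic textbook OMP, not the authors' code; the sensing matrix sizes and the sparse "feature" vector are made up for illustration.

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ≈ A @ x."""
        r, idx = y.astype(float).copy(), []
        for _ in range(k):
            idx.append(int(np.argmax(np.abs(A.T @ r))))       # atom most correlated with the residual
            x_s, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
            r = y - A[:, idx] @ x_s                           # re-fit on chosen atoms, update residual
        x = np.zeros(A.shape[1])
        x[idx] = x_s
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))      # sensing matrix: 20 measurements of a 50-dim signal
    x_true = np.zeros(50)
    x_true[[3, 17]] = [1.5, -2.0]          # 2-sparse signal standing in for a face feature vector
    x_hat = omp(A, A @ x_true, k=2)
    print(np.linalg.norm(x_hat - x_true))  # small when the sparse support is recovered
    ```

    In the verification setting described above, the Euclidean norm between such a reconstructed vector and the enrolled user's stored sparse signal would then be thresholded to accept or reject the test image.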

  20. Conducting qualitative research within Clinical Trials Units: avoiding potential pitfalls.

    Science.gov (United States)

    Cooper, Cindy; O'Cathain, Alicia; Hind, Danny; Adamson, Joy; Lawton, Julia; Baird, Wendy

    2014-07-01

    The value of using qualitative research within or alongside randomised controlled trials (RCTs) is becoming more widely accepted. Qualitative research may be conducted concurrently with pilot or full RCTs to understand the feasibility and acceptability of the interventions being tested, or to improve trial conduct. Clinical Trials Units (CTUs) in the United Kingdom (UK) manage large numbers of RCTs and, increasingly, manage the qualitative research or collaborate with qualitative researchers external to the CTU. CTUs are beginning to explicitly manage the process, for example, through the use of standard operating procedures for designing and implementing qualitative research with trials. We reviewed the experiences of two UK Clinical Research Collaboration (UKCRC) registered CTUs of conducting qualitative research concurrently with RCTs. Drawing on experiences gained from 15 studies, we identify the potential for the qualitative research to undermine the successful completion or scientific integrity of RCTs. We show that potential problems can arise from feedback of interim or final qualitative findings to members of the trial team or beyond, in particular from reporting qualitative findings whilst the trial is on-going. We make recommendations for improving the management of qualitative research within CTUs. Copyright © 2014. Published by Elsevier Inc.

  1. Construction of ethics in clinical research: clinical trials registration

    Directory of Open Access Journals (Sweden)

    C. A. Caramori

    2007-01-01

    Scientific development achieved over decades finds in clinical research a great possibility of translating findings into applications for human health. Evidence given by clinical trials allows everyone to have access to the best health services. However, the millionaire world of pharmaceutical industries has stained clinical research with doubt and improbability. Study results (the fruits of controlled clinical trials) and scientific publications (selective, manipulated and with wrong conclusions) led to an inappropriate clinical practice, favoring the economic interests involved. In 2005, the International Committee of Medical Journal Editors (ICMJE), supported by the World Association of Medical Editors, started demanding as a requisite for publication that all clinical trials be registered at the database ClinicalTrials.gov. In 2006, the World Health Organization (WHO) created the International Clinical Trial Registry Platform (ICTRP), which gathers several registry centers from all over the world, and required that all researchers and pharmaceutical industries register clinical trials. Such obligatory registration has progressed and will extend to all scientific journals indexed in worldwide databases. Registration of clinical trials means another step of clinical research towards transparency, ethics and impartiality, resulting in real evidence for the forthcoming changes in clinical practice as well as in the health situation.

  2. Qualitative research within trials: developing a standard operating procedure for a clinical trials unit

    OpenAIRE

    Rapport, Frances; Clement, Clare

    2013-01-01

    Background Qualitative research methods are increasingly used within trials to address broader research questions than quantitative methods can address alone. Qualitative methods enable health professionals, service users and other stakeholders to contribute their views and experiences when evaluating health care treatments, interventions or policies. They can influence trial design, allowing for a fuller engagement with research questions, aims and objectives and clarify the complexities of p...

  3. Projected Impact of Compositional Verification on Current and Future Aviation Safety Risk

    Science.gov (United States)

    Reveley, Mary S.; Withrow, Colleen A.; Leone, Karen M.; Jones, Sharon M.

    2014-01-01

    The projected impact of compositional verification research conducted by the National Aeronautics and Space Administration System-Wide Safety and Assurance Technologies project on aviation safety risk was assessed. Software and compositional verification were described. Traditional verification techniques have two major problems: testing occurs at the prototype stage, where error discovery can be quite costly, and not all potential interactions can be tested, leaving some errors undetected until the system is used by the end user. Increasingly complex and nondeterministic aviation systems are becoming too large for these tools to check and verify. Compositional verification is a "divide and conquer" solution for addressing increasingly large and complex systems. A review of compositional verification research being conducted by academia, industry, and Government agencies is provided. Forty-four aviation safety risks in the Biennial NextGen Safety Issues Survey were identified that could be impacted by compositional verification and grouped into five categories: automation design; system complexity; software, flight control, or equipment failure or malfunction; new technology or operations; and verification and validation. One capability, one research action, five operational improvements, and 13 enablers within the Federal Aviation Administration Joint Planning and Development Office Integrated Work Plan that could be addressed by compositional verification were identified.
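The "divide and conquer" idea behind compositional (assume-guarantee) verification can be illustrated with a toy sketch in which each component is checked against its own contract, so the composed system never has to be explored as a whole. All names and predicates below are hypothetical illustrations, not part of the NASA work; real tools operate on temporal-logic specifications rather than Python predicates.

```python
# Toy assume-guarantee check: verify each component against its own
# contract instead of exploring the composed state space.

def check_component(states, assume, guarantee):
    """Every state satisfying the assumption must satisfy the guarantee."""
    return all(guarantee(s) for s in states if assume(s))

# Component A: produces values in 0..9 (guarantee), assumes nothing.
a_ok = check_component(range(10), lambda s: True, lambda s: 0 <= s <= 9)

# Component B: assumes inputs in 0..9, guarantees outputs below 100.
b_states = [(x, x * x) for x in range(10)]  # (input, output) pairs
b_ok = check_component(b_states,
                       lambda s: 0 <= s[0] <= 9,
                       lambda s: s[1] < 100)

# Assume-guarantee rule: A's guarantee discharges B's assumption, so the
# composition satisfies B's guarantee without a joint check.
system_ok = a_ok and b_ok
print(system_ok)  # True
```

Each component's check ranges only over its own states, which is what makes the approach scale where monolithic verification of the product state space does not.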

  4. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance...... optimisation and customisable source-code generation tool (TUNE). The concept is depicted in automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis...

  5. Verification of safety critical software

    International Nuclear Information System (INIS)

    Son, Ki Chang; Chun, Chong Son; Lee, Byeong Joo; Lee, Soon Sung; Lee, Byung Chai

    1996-01-01

    To assure the quality of safety critical software, software should be developed in accordance with software development procedures, and rigorous software verification and validation should be performed. Software verification is the formal act of reviewing, testing, or checking, and documenting whether software components comply with the specified requirements for a particular stage of the development phase[1]. A new software verification methodology was developed and applied to the Shutdown System No. 1 and 2 (SDS1,2) for the Wolsung 2,3 and 4 nuclear power plants by the Korea Atomic Energy Research Institute (KAERI) and Atomic Energy of Canada Limited (AECL) in order to satisfy new regulatory requirements of the Atomic Energy Control Board (AECB). The software verification methodology applied to SDS1 for the Wolsung 2,3 and 4 project is described in this paper. Some errors were found by this methodology during the software development for SDS1 and were corrected by the software designer. Outputs from the Wolsung 2,3 and 4 project have demonstrated that the use of this methodology results in a high quality, cost-effective product. 15 refs., 6 figs. (author)

  6. ENVIRONMENTAL TECHNOLOGY VERIFICATION: TEST/QA PLAN FOR THE VERIFICATION TESTING OF SELECTIVE CATALYTIC REDUCTION CONTROL TECHNOLOGIES FOR HIGHWAY, NONROAD, AND STATIONARY USE DIESEL ENGINES

    Science.gov (United States)

    The U.S. Environmental Protection Agency established the Environmental Technology Verification Program to accelerate the development and commercialization of improved environmental technology through third party verification and reporting of product performance. Research Triangl...

  7. 76 FR 81991 - National Spectrum Sharing Research Experimentation, Validation, Verification, Demonstration and...

    Science.gov (United States)

    2011-12-29

    ... NATIONAL SCIENCE FOUNDATION National Spectrum Sharing Research Experimentation, Validation... requirements of national level spectrum research, development, demonstration, and field trial facilities... to determine the optimal way to manage and use the radio spectrum. During Workshop I held at Boulder...

  8. Bibliography for Verification and Validation in Computational Simulation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1998-01-01

    A bibliography has been compiled dealing with the verification and validation of computational simulations. The references listed in this bibliography are concentrated in the field of computational fluid dynamics (CFD). However, references from the following fields are also included: operations research, heat transfer, solid dynamics, software quality assurance, software accreditation, military systems, and nuclear reactor safety. This bibliography, containing 221 references, is not meant to be comprehensive. It was compiled during the last ten years in response to the author's interest and research in the methodology for verification and validation. The emphasis in the bibliography is in the following areas: philosophy of science underpinnings, development of terminology and methodology, high accuracy solutions for CFD verification, experimental datasets for CFD validation, and the statistical quantification of model validation. This bibliography should provide a starting point for individual researchers in many fields of computational simulation in science and engineering

  9. Bibliography for Verification and Validation in Computational Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, W.L.

    1998-10-01

    A bibliography has been compiled dealing with the verification and validation of computational simulations. The references listed in this bibliography are concentrated in the field of computational fluid dynamics (CFD). However, references from the following fields are also included: operations research, heat transfer, solid dynamics, software quality assurance, software accreditation, military systems, and nuclear reactor safety. This bibliography, containing 221 references, is not meant to be comprehensive. It was compiled during the last ten years in response to the author's interest and research in the methodology for verification and validation. The emphasis in the bibliography is in the following areas: philosophy of science underpinnings, development of terminology and methodology, high accuracy solutions for CFD verification, experimental datasets for CFD validation, and the statistical quantification of model validation. This bibliography should provide a starting point for individual researchers in many fields of computational simulation in science and engineering.

  10. The verification of neutron activation analysis support system (cooperative research)

    Energy Technology Data Exchange (ETDEWEB)

    Sasajima, Fumio; Ichimura, Shigeju; Ohtomo, Akitoshi; Takayanagi, Masaji [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Sawahata, Hiroyuki; Ito, Yasuo [Tokyo Univ. (Japan). Research Center for Nuclear Science and Technology; Onizawa, Kouji [Radiation Application Development Association, Tokai, Ibaraki (Japan)

    2000-12-01

    The neutron activation analysis support system is a system with which even a user who does not have much experience in neutron activation analysis can conveniently and accurately carry out multi-element analysis of a sample. In this verification test, the functions, usability, and precision and accuracy of analysis of the neutron activation analysis support system were confirmed. The verification test was carried out using the irradiation device, measuring device, automatic sample changer and analyzer equipped in the JRR-3M PN-3 facility, and the analysis software KAYZERO/SOLCOI based on the k₀ method. With this equipment, calibration of the germanium detector, measurement of the parameters of the irradiation field and analysis of three kinds of environmental standard samples were carried out. The k₀ method adopted in this system has recently been utilized primarily in Europe; it is an analysis method that can conveniently and accurately carry out multi-element analysis of a sample without requiring individual comparison standard samples. With this system, a total of 28 elements were determined quantitatively, and 16 elements with values certified as analytical data of the NIST (National Institute of Standards and Technology) environmental standard samples were analyzed with an accuracy within 15%. This report describes the content and verification results of the neutron activation analysis support system. (author)
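The "accuracy within 15%" criterion reported above amounts to a simple relative-deviation check of measured concentrations against certified reference values. A minimal sketch, with invented concentrations rather than the actual NIST data:

```python
# Hypothetical accuracy check against certified reference values,
# mirroring the record's "within 15%" criterion. Concentrations are
# invented for illustration, not NIST data.
certified = {"Fe": 11.2, "Zn": 350.0, "Co": 10.2}   # mg/kg, hypothetical
measured  = {"Fe": 10.5, "Zn": 330.0, "Co": 11.1}   # mg/kg, hypothetical

def within_tolerance(meas, cert, tol=0.15):
    """Relative deviation |measured - certified| / certified <= tol."""
    return abs(meas - cert) / cert <= tol

results = {el: within_tolerance(measured[el], certified[el])
           for el in certified}
print(results)  # {'Fe': True, 'Zn': True, 'Co': True}
```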

  11. Verification of the thermal design of electronic equipment

    Energy Technology Data Exchange (ETDEWEB)

    Hienonen, R.; Karjalainen, M.; Lankinen, R. [VTT Automation, Espoo (Finland). ProTechno

    1997-12-31

    The project 'Verification of the thermal design of electronic equipment' studied the methodology to be followed in the verification of thermal design of electronic equipment. This project forms part of the 'Cool Electronics' research programme funded by TEKES, the Finnish Technology Development Centre. This project was carried out jointly by VTT Automation, Lappeenranta University of Technology, Nokia Research Center and ABB Industry Oy VSD-Technology. The thermal design of electronic equipment has a significant impact on the cost, reliability, tolerance to different environments, selection of components and materials, and ergonomics of the product. This report describes the method for verification of thermal design. It assesses the goals set for thermal design, environmental requirements, technical implementation of the design, thermal simulation and modelling, and design qualification testing and the measurements needed. The verification method covers all packaging levels of electronic equipment from the system level to the electronic component level. The method described in this report can be used as part of the quality system of a corporation. The report includes information about the measurement and test methods needed in the verification process. Some measurement methods for the temperature, flow and pressure of air are described. (orig.) Published in Finnish VTT Julkaisuja 824. 22 refs.

  12. Advanced verification topics

    CERN Document Server

    Bhattacharya, Bishnupriya; Hall, Gary; Heaton, Nick; Kashai, Yaron; Khan, Neyaz; Kirshenbaum, Zeev; Shneydor, Efrat

    2011-01-01

    The Accellera Universal Verification Methodology (UVM) standard is architected to scale, but verification is growing, and in more than just the digital design dimension. It is growing in the SoC dimension to include low-power and mixed-signal, and in the system-integration dimension to include multi-language support and acceleration. These items and others all contribute to the quality of the SoC, so the Metric-Driven Verification (MDV) methodology is needed to unify it all into a coherent verification plan. This book is for verification engineers and managers familiar with the UVM and the benefits it brings to digital verification, but who also need to tackle specialized tasks. It is also written for the SoC project manager who is tasked with building an efficient worldwide team. While the task continues to become more complex, Advanced Verification Topics describes methodologies outside of the Accellera UVM standard, but that build on it, to provide a way for SoC teams to stay productive and profitable.

  13. What can qualitative research do for randomised controlled trials? A systematic mapping review

    Science.gov (United States)

    O'Cathain, A; Thomas, K J; Drabble, S J; Rudolph, A; Hewison, J

    2013-01-01

    Objective: To develop an empirically based framework of the aspects of randomised controlled trials addressed by qualitative research. Design: Systematic mapping review of qualitative research undertaken with randomised controlled trials and published in peer-reviewed journals. Data sources: MEDLINE, PreMEDLINE, EMBASE, the Cochrane Library, Health Technology Assessment, PsycINFO, CINAHL, British Nursing Index, Social Sciences Citation Index and ASSIA. Eligibility criteria: Articles reporting qualitative research undertaken with trials published between 2008 and September 2010; health research, reported in English. Results: 296 articles met the inclusion criteria. Articles focused on 22 aspects of the trial within five broad categories. Some articles focused on more than one aspect of the trial, totalling 356 examples. The qualitative research focused on the intervention being trialled (71%, 254/356); the design, process and conduct of the trial (15%, 54/356); the outcomes of the trial (1%, 5/356); the measures used in the trial (3%, 10/356); and the target condition for the trial (9%, 33/356). A minority of the qualitative research was undertaken at the pretrial stage (28%, 82/296). The value of the qualitative research to the trial itself was not always made explicit within the articles. The potential value included optimising the intervention and trial conduct, facilitating interpretation of the trial findings, helping trialists to be sensitive to the human beings involved in trials, and saving money by steering researchers towards interventions more likely to be effective in future trials. Conclusions: A large amount of qualitative research undertaken with specific trials has been published, addressing a wide range of aspects of trials, with the potential to improve the endeavour of generating evidence of effectiveness of health interventions. Researchers can increase the impact of this work on trials by undertaking more of it at the pretrial stage and being explicit

  14. Evaluating the design and reporting of pragmatic trials in osteoarthritis research.

    Science.gov (United States)

    Ali, Shabana Amanda; Kloseck, Marita; Lee, Karen; Walsh, Kathleen Ellen; MacDermid, Joy C; Fitzsimmons, Deborah

    2018-01-01

    Among the challenges in health research is translating interventions from controlled experimental settings to clinical and community settings where chronic disease is managed daily. Pragmatic trials offer a method for testing interventions in real-world settings but are seldom used in OA research. The aim of this study was to evaluate the literature on pragmatic trials in OA research up to August 2016 in order to identify strengths and weaknesses in the design and reporting of these trials. We used established guidelines to assess the degree to which 61 OA studies complied with pragmatic trial design and reporting. We assessed design according to the pragmatic-explanatory continuum indicator summary and reporting according to the pragmatic trials extension of the CONsolidated Standards of Reporting Trials guidelines. None of the pragmatic trials met all 11 criteria evaluated and most of the trials met between 5 and 8 of the criteria. Criteria most often unmet pertained to practitioner expertise (by requiring specialists) and criteria most often met pertained to primary outcome analysis (by using intention-to-treat analysis). Our results suggest a lack of highly pragmatic trials in OA research. We identify this as a point of opportunity to improve research translation, since optimizing the design and reporting of pragmatic trials can facilitate implementation of evidence-based interventions for OA care. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  15. Randomised controlled trials in Scandinavian educational research

    DEFF Research Database (Denmark)

    Pontoppidan, Maiken; Keilow, Maria; Dietrichson, Jens

    2018-01-01

    of this paper is to examine the history of randomised controlled trials in Scandinavian compulsory schools (grades 0–10; pupil ages 6-15). Specifically, we investigate drivers and barriers for randomised controlled trials in educational research and the differences between the three Scandinavian countries...... crucial for the implementation of RCTs and are likely more important in smaller countries such as the Scandinavian ones. Supporting institutions have now been established in all three countries, and we believe that the use of RCTs in Scandinavian educational research is likely to continue....... or more interventions were randomly assigned to groups of students and carried out in a school setting with the primary aim of improving the academic performance of children aged 6-15 in grades 0–10 in Denmark, Norway, or Sweden. We included both conducted and ongoing trials. Publications that seemed...

  16. Viability Study for an Unattended UF_6 Cylinder Verification Station: Phase I Final Report

    International Nuclear Information System (INIS)

    Smith, Leon E.; Miller, Karen A.; Garner, James R.; Branney, Sean; McDonald, Benjamin S.; Webster, Jennifer B.; Zalavadia, Mital A.; Todd, Lindsay C.; Kulisek, Jonathan A.; Nordquist, Heather; Deshmukh, Nikhil S.; Stewart, Scott

    2016-01-01

    In recent years, the International Atomic Energy Agency (IAEA) has pursued innovative techniques and an integrated suite of safeguards measures to address the verification challenges posed by the front end of the nuclear fuel cycle. Among the unattended instruments currently being explored by the IAEA is an Unattended Cylinder Verification Station (UCVS) that could provide automated, independent verification of the declared relative enrichment, 235U mass, total uranium mass and identification for all declared UF6 cylinders in a facility (e.g., uranium enrichment plants and fuel fabrication plants). Under the auspices of the United States and European Commission Support Programs to the IAEA, a project was undertaken to assess the technical and practical viability of the UCVS concept. The US Support Program team consisted of Pacific Northwest National Laboratory (PNNL, lead), Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL) and Savannah River National Laboratory (SRNL). At the core of the viability study is a long-term field trial of a prototype UCVS system at a Westinghouse fuel fabrication facility. A key outcome of the study is a quantitative performance evaluation of two nondestructive assay (NDA) methods being considered for inclusion in a UCVS: Hybrid Enrichment Verification Array (HEVA), and Passive Neutron Enrichment Meter (PNEM). This report provides context for the UCVS concept and the field trial: potential UCVS implementation concepts at an enrichment facility; an overview of the UCVS prototype design; and field trial objectives and activities. Field trial results and interpretation are presented, with a focus on the performance of PNEM and HEVA for the assay of over 200 "typical" Type 30B cylinders, and the viability of an "NDA Fingerprint" concept as a high-fidelity means to periodically verify that the contents of a given cylinder are consistent with previous scans. A modeling study, combined with field-measured instrument

  17. Assessing Clinical Trial-Associated Workload in Community-Based Research Programs Using the ASCO Clinical Trial Workload Assessment Tool.

    Science.gov (United States)

    Good, Marjorie J; Hurley, Patricia; Woo, Kaitlin M; Szczepanek, Connie; Stewart, Teresa; Robert, Nicholas; Lyss, Alan; Gönen, Mithat; Lilenbaum, Rogerio

    2016-05-01

    Clinical research program managers are regularly faced with the quandary of determining how much of a workload research staff members can manage while they balance clinical practice and still achieve clinical trial accrual goals, maintain data quality and protocol compliance, and stay within budget. A tool was developed to measure clinical trial-associated workload, to apply objective metrics toward documentation of work, and to provide clearer insight to better meet clinical research program challenges and aid in balancing staff workloads. A project was conducted to assess the feasibility and utility of using this tool in diverse research settings. Community-based research programs were recruited to collect and enter clinical trial-associated monthly workload data into a web-based tool for 6 consecutive months. Descriptive statistics were computed for self-reported program characteristics and workload data, including staff acuity scores and number of patient encounters. Fifty-one research programs that represented 30 states participated. Median staff acuity scores were highest for staff with patients enrolled in studies and receiving treatment, relative to staff with patients in follow-up status. Treatment trials typically resulted in higher median staff acuity, relative to cancer control, observational/registry, and prevention trials. Industry trials exhibited higher median staff acuity scores than trials sponsored by the National Institutes of Health/National Cancer Institute, academic institutions, or others. The results from this project demonstrate that trial-specific acuity measurement is a better measure of workload than simply counting the number of patients. The tool was shown to be feasible and useable in diverse community-based research settings. Copyright © 2016 by American Society of Clinical Oncology.

  18. Modifying the Clinical Research Infrastructure at a Dedicated Clinical Trials Unit: Assessment of Trial Development, Activation, and Participant Accrual.

    Science.gov (United States)

    Tang, Chad; Hess, Kenneth R; Sanders, Dwana; Davis, Suzanne E; Buzdar, Aman U; Kurzrock, Razelle; Lee, J Jack; Meric-Bernstam, Funda; Hong, David S

    2017-03-15

    Purpose: Information on processes for trials assessing investigational therapeutics is sparse. We assessed the trial development processes within the Department of Investigational Cancer Therapeutics (ICT) at MD Anderson Cancer Center (Houston, TX) and analyzed their effects on the trial activation timeline and enrolment. Experimental Design: Data were from a prospectively maintained registry that tracks all clinical studies at MD Anderson. From this database, we identified 2,261 activated phase I-III trials; 221 were done at the ICT. ICT trials were matched to trials from other MD Anderson departments by phase, sponsorship, and submission year. Trial performance metrics were compared with paired Wilcoxon signed rank tests. Results: We identified three facets of the ICT research infrastructure: parallel processing of trial approval steps; a physician-led research team; and regular weekly meetings to foster research accountability. Separate analyses were conducted stratified by sponsorship [industry (133 ICT and 133 non-ICT trials) or institutional (68 ICT and 68 non-ICT trials)]. ICT trial development was faster from IRB approval to activation (median difference of 1.1 months for industry-sponsored trials vs. 2.3 months for institutional) and from activation to first enrolment (median difference of 0.3 months for industry vs. 1.2 months for institutional; all matched P < 0.05). Conclusions: This research infrastructure within a large academic cancer center was associated with efficient trial development and participant accrual. Clin Cancer Res; 23(6); 1407-13. ©2016 AACR.
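The paired Wilcoxon signed rank test used in this study compares matched pairs via ranks of their differences. A minimal pure-Python sketch of the test statistic follows; the activation-time data are invented for illustration, and in practice `scipy.stats.wilcoxon` would be used, which also returns a p-value.

```python
# Pure-Python Wilcoxon signed-rank statistic for paired samples.
def wilcoxon_statistic(x, y):
    """Return W = min(W+, W-) for paired samples x and y."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    # Rank absolute differences, averaging ranks within tie groups.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical months from IRB approval to activation for 6 matched pairs.
ict     = [2.0, 3.1, 1.8, 2.5, 2.2, 3.0]
non_ict = [3.4, 4.0, 2.9, 3.1, 2.0, 4.2]
print(wilcoxon_statistic(ict, non_ict))  # 1.0
```

A small statistic relative to the number of pairs indicates that differences consistently favour one group, which is the pattern the study reports for ICT trials.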

  19. Advancing Disarmament Verification Tools: A Task for Europe?

    International Nuclear Information System (INIS)

    Göttsche, Malte; Kütt, Moritz; Neuneck, Götz; Niemeyer, Irmgard

    2015-01-01

    A number of scientific-technical activities have been carried out to establish more robust and irreversible disarmament verification schemes. Regardless of the actual path towards deeper reductions in nuclear arsenals or their total elimination in the future, disarmament verification will require new verification procedures and techniques. This paper discusses the information that would be required as a basis for building confidence in disarmament, how it could be verified in principle, and the role Europe could play. Various ongoing activities are presented that could be brought together to produce a more intensified research and development environment in Europe. The paper argues that if ‘effective multilateralism’ is the main goal of the European Union’s (EU) disarmament policy, EU efforts should be combined and strengthened to create a coordinated multilateral disarmament verification capacity in the EU and other European countries. The paper concludes with several recommendations that would have a significant impact on future developments. Among other things, the paper proposes a one-year review process that should include all relevant European actors. In the long run, an EU Centre for Disarmament Verification could be envisaged to optimize verification needs, technologies and procedures.

  20. A Quantitative Approach to the Formal Verification of Real-Time Systems.

    Science.gov (United States)

    1996-09-01

    Computer Science. A Quantitative Approach to the Formal Verification of Real-Time Systems. Sergio Vale Aguiar Campos. September 1996. CMU-CS-96-199. ... implied, of NSF, the Semiconductor Research Corporation, ARPA or the U.S. government. Keywords: real-time systems, formal verification, symbolic

  1. What Value Can Qualitative Research Add to Quantitative Research Design? An Example From an Adolescent Idiopathic Scoliosis Trial Feasibility Study.

    Science.gov (United States)

    Toye, Francine; Williamson, Esther; Williams, Mark A; Fairbank, Jeremy; Lamb, Sarah E

    2016-08-09

    Using an example of qualitative research embedded in a non-surgical feasibility trial, we explore the benefits of including qualitative research in trial design and reflect on epistemological challenges. We interviewed 18 trial participants and used methods of Interpretive Phenomenological Analysis. Our findings demonstrate that qualitative research can make a valuable contribution by allowing trial stakeholders to see things from alternative perspectives. Specifically, it can help to make specific recommendations for improved trial design, generate questions which contextualize findings, and also explore disease experience beyond the trial. To make the most out of qualitative research embedded in quantitative design it would be useful to (a) agree specific qualitative study aims that underpin research design, (b) understand the impact of differences in epistemological truth claims, (c) provide clear thematic interpretations for trial researchers to utilize, and (d) include qualitative findings that explore experience beyond the trial setting within the impact plan. © The Author(s) 2016.

  2. Research on Linux Trusted Boot Method Based on Reverse Integrity Verification

    Directory of Open Access Journals (Sweden)

    Chenlin Huang

    2016-01-01

    Full Text Available Trusted computing aims to build a trusted computing environment for information systems with the help of the secure hardware TPM, which has been proved to be an effective way to counter network security threats. However, TPM chips are not yet widely deployed in most computing devices, thus limiting the applied scope of trusted computing technology. To solve the problem of lacking trusted hardware in existing computing platforms, an alternative security hardware, USBKey, is introduced in this paper to simulate the basic functions of TPM, and a new reverse USBKey-based integrity verification model is proposed to implement reverse integrity verification of the operating system boot process, which can achieve the effect of trusted boot of the operating system in end systems without TPMs. A Linux operating system booting method based on reverse integrity verification is designed and implemented in this paper, with which the integrity of data and executable files in the operating system is verified and protected during the trusted boot process phase by phase. It implements trusted boot of the operating system without TPM and supports remote attestation of the platform. Enhanced by our method, the flexibility of trusted computing technology is greatly improved and it becomes possible for trusted computing to be applied in large-scale computing environments.
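The phase-by-phase verification described above can be pictured as hashing each boot stage and comparing the result against reference measurements held on the security hardware. A much-simplified sketch follows: the stage names and images are hypothetical, a plain dict stands in for the USBKey store, and the paper's actual reverse-verification protocol is considerably more involved.

```python
# Simplified phase-by-phase boot integrity check: hash each stage image
# and compare against provisioned reference digests.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical stage images; reference digests provisioned in advance
# (here a dict simulates the USBKey's protected storage).
stages = {
    "bootloader": b"bootloader-image-v1",
    "kernel": b"kernel-image-v1",
    "initrd": b"initrd-image-v1",
}
reference = {name: digest(image) for name, image in stages.items()}

def verify_boot(images, reference):
    """Verify each stage in order; abort at the first mismatch."""
    for name, image in images.items():
        if digest(image) != reference[name]:
            print(f"integrity failure at stage: {name}")
            return False
    return True

print(verify_boot(stages, reference))  # True
tampered = dict(stages, kernel=b"kernel-image-TAMPERED")
print(verify_boot(tampered, reference))  # fails at kernel, then False
```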

  3. Bureaucracy stifles medical research in Britain: a tale of three trials.

    Science.gov (United States)

    Snooks, Helen; Hutchings, Hayley; Seagrove, Anne; Stewart-Brown, Sarah; Williams, John; Russell, Ian

    2012-08-16

    Recent developments aiming to standardise and streamline the processes of gaining the necessary approvals to carry out research in the National Health Service (NHS) in the United Kingdom (UK) have resulted in lengthy and costly delays. The UK government Department of Health's Research Governance Framework (RGF) for Health and Social Care requires that appropriate checks be conducted before research involving human participants, their organs, tissues or data can commence in the NHS. As a result, medical research has been subjected to increased regulation and governance, with the requirement for approvals from numerous regulatory and monitoring bodies. In addition, the processes and outcomes of the attribution of costs in NHS research have caused additional difficulties for researchers. The purpose of this paper is to illustrate, through three trial case studies, the difficulties encountered during the set-up and recruitment phases of these trials, related to gaining the necessary ethical and governance approvals and applying for NHS costs to undertake and deliver the research. Empirical evidence about delays and difficulties related to regulation and governance of medical research was gathered during the period 2009-2010 from three UK randomised controlled trials with sites in England, Wales and Scotland: (1) SAFER 2, an emergency-care-based trial of a protocol for paramedics to refer patients directly to community-based falls services; (2) COnStRUCT, a trial of two drugs for acute ulcerative colitis; and (3) Family Links, a trial of a public health intervention, a 10-week community-based parenting programme. Findings and recommendations were reported in response to a call for evidence from The Academy of Medical Sciences regarding difficulties encountered in conducting medical research arising from R&D governance and regulation, to inform national policy. Difficulties and delays in navigating and gaining the appropriate approvals and NHS costs required to

  4. Getting added value from using qualitative research with randomized controlled trials: a qualitative interview study

    Science.gov (United States)

    2014-01-01

    Background: Qualitative research is undertaken with randomized controlled trials of health interventions. Our aim was to explore the perceptions of researchers with experience of this endeavour to understand the added value of qualitative research to the trial in practice. Methods: A telephone semi-structured interview study with 18 researchers with experience of undertaking the trial and/or the qualitative research. Results: Interviewees described the added value of qualitative research for the trial, explaining how it solved problems at the pretrial stage, explained findings, and helped to increase the utility of the evidence generated by the trial. From the interviews, we identified three models of relationship of the qualitative research to the trial. In ‘the peripheral’ model, the trial was an opportunity to undertake qualitative research, with no intention that it would add value to the trial. In ‘the add-on’ model, the qualitative researcher understood the potential value of the qualitative research but it was viewed as a separate and complementary endeavour by the trial lead investigator and wider team. Interviewees described how this could limit the value of the qualitative research to the trial. Finally ‘the integral’ model played out in two ways. In ‘integral-in-theory’ studies, the lead investigator viewed the qualitative research as essential to the trial. However, in practice the qualitative research was under-resourced relative to the trial, potentially limiting its ability to add value to the trial. In ‘integral-in-practice’ studies, interviewees described how the qualitative research was planned from the beginning of the study, senior qualitative expertise was on the team from beginning to end, and staff and time were dedicated to the qualitative research. In these studies interviewees described the qualitative research adding value to the trial although this value was not necessarily visible beyond the original research team due

  5. Getting added value from using qualitative research with randomized controlled trials: a qualitative interview study.

    Science.gov (United States)

    O'Cathain, Alicia; Goode, Jackie; Drabble, Sarah J; Thomas, Kate J; Rudolph, Anne; Hewison, Jenny

    2014-06-09

    Qualitative research is undertaken with randomized controlled trials of health interventions. Our aim was to explore the perceptions of researchers with experience of this endeavour to understand the added value of qualitative research to the trial in practice. A telephone semi-structured interview study with 18 researchers with experience of undertaking the trial and/or the qualitative research. Interviewees described the added value of qualitative research for the trial, explaining how it solved problems at the pretrial stage, explained findings, and helped to increase the utility of the evidence generated by the trial. From the interviews, we identified three models of relationship of the qualitative research to the trial. In 'the peripheral' model, the trial was an opportunity to undertake qualitative research, with no intention that it would add value to the trial. In 'the add-on' model, the qualitative researcher understood the potential value of the qualitative research but it was viewed as a separate and complementary endeavour by the trial lead investigator and wider team. Interviewees described how this could limit the value of the qualitative research to the trial. Finally 'the integral' model played out in two ways. In 'integral-in-theory' studies, the lead investigator viewed the qualitative research as essential to the trial. However, in practice the qualitative research was under-resourced relative to the trial, potentially limiting its ability to add value to the trial. In 'integral-in-practice' studies, interviewees described how the qualitative research was planned from the beginning of the study, senior qualitative expertise was on the team from beginning to end, and staff and time were dedicated to the qualitative research. In these studies interviewees described the qualitative research adding value to the trial although this value was not necessarily visible beyond the original research team due to the challenges of publishing this research

  6. RAZORBACK - A Research Reactor Transient Analysis Code Version 1.0 - Volume 3: Verification and Validation Report.

    Energy Technology Data Exchange (ETDEWEB)

    Talley, Darren G.

    2017-04-01

    This report describes the work and results of the verification and validation (V&V) of the version 1.0 release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, the equation of motion for fuel element thermal expansion, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This V&V effort was intended to confirm that the code shows good agreement between simulation and actual ACRR operations.
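
    The coupled solution Razorback performs can be illustrated in miniature. Below is a hedged sketch of a one-delayed-group point-kinetics integration with invented parameter values (not ACRR data), omitting the heat-transfer, thermal-expansion and coolant equations that the full code couples to the kinetics:

```python
# One-delayed-group point reactor kinetics, integrated with explicit
# Euler. Parameter values (beta, lam, Lambda) are illustrative, not
# ACRR data; reactivity feedback and thermal-hydraulics are omitted.

def point_kinetics(rho, beta=0.0073, lam=0.08, Lambda=1e-4,
                   t_end=1.0, dt=1e-5):
    """Return the normalized power n(t_end) after a reactivity step rho."""
    n = 1.0                          # normalized neutron population
    c = beta * n / (Lambda * lam)    # equilibrium precursor concentration
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda) * n + lam * c
        dc = (beta / Lambda) * n - lam * c
        n += dt * dn
        c += dt * dc
    return n

# A small positive reactivity step (about $0.14 for these parameters)
# raises power; zero reactivity leaves the equilibrium state unchanged.
power_after_step = point_kinetics(rho=0.001)
```

    The prompt jump followed by a slower rise on the stable period is the qualitative behavior a V&V effort would compare against measured ACRR transients.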

  7. Static Verification for Code Contracts

    Science.gov (United States)

    Fähndrich, Manuel

    The Code Contracts project [3] at Microsoft Research enables programmers on the .NET platform to author specifications in existing languages such as C# and VisualBasic. To take advantage of these specifications, we provide tools for documentation generation, runtime contract checking, and static contract verification.

  8. Research gaps identified during systematic reviews of clinical trials: glass-ionomer cements.

    Science.gov (United States)

    Mickenautsch, Steffen

    2012-06-29

    To report the results of an audit concerning research gaps in clinical trials that were accepted for appraisal in authored and published systematic reviews regarding the application of glass-ionomer cements (GIC) in dental practice. Information concerning research gaps in trial precision was extracted, following a framework that included classification of the research gap reasons: 'imprecision of information (results)', 'biased information', 'inconsistency or unknown consistency' and 'not the right information', as well as research gap characterization using PICOS elements: population (P), intervention (I), comparison (C), outcomes (O) and setting (S). Internal trial validity assessment was based on the understanding that successful control for systematic error cannot be assured on the basis of inclusion of adequate methods alone, but also requires empirical evidence about whether such an attempt was successful. A comprehensive and interconnected coverage of GIC-related clinical topics was established. The most common reasons found for gaps in trial precision were a lack of sufficient trials and a lack of sufficiently large sample sizes. Only a few research gaps were ascribed to 'Lack of information' caused by a focus on mainly surrogate trial outcomes. According to the chosen assessment criteria, a lack of adequate randomisation, allocation concealment and blinding/masking in trials covering all reviewed GIC topics was noted (selection- and detection/performance bias risk). Trial results appear to be less affected by loss-to-follow-up (attrition bias risk). This audit represents an adjunct to the systematic review articles it has covered. Its results do not change the systematic reviews' conclusions but highlight in detail existing research gaps concerning the precision and internal validity of the reviewed trials. These gaps should be addressed in future GIC-related clinical research.

  9. Construction of ethics in clinical research: clinical trials registration

    OpenAIRE

    C. A. Caramori

    2007-01-01

    Scientific development that has been achieved through decades finds in clinical research a great possibility of translating findings to human health application. Evidence given by clinical trials allows everyone to have access to the best health services. However, the millionaire world of pharmaceutical industries has stained clinical research with doubt and improbability. Study results (fruits of controlled clinical trials) and scientific publications (selective, manipulated and with wrong c...

  10. On-site verification trials using fly ash for reclamation behind bulkheads; Sekitanbai wo gogan uraumezai ni riyosuru genba jissho chosa

    Energy Technology Data Exchange (ETDEWEB)

    Kozawa, K [Center for Coal Utilization, Japan, Tokyo (Japan); Yoshida, T [Toyo Construction Co. Ltd., Tokyo (Japan); Miyagawa, H; Kobayashi, M

    1996-09-01

    As a method to utilize in bulk the coal ash generated from coal-fired power plants more effectively, its use as a reclamation material behind bulkhead structures in harbors and airports has been studied. Verification trials of the study results were performed at the Hekinan power plant of the Chubu Electric Power Company. The trials included the following: an experiment to verify horizontal soil pressure and active earth pressure when slurry made of fly ash mixed with cement and seawater was placed in frameworks installed behind harbor bulkheads, a slurry hardening test, an environmental impact investigation, and a constructibility investigation. As a result, a large number of findings were obtained, including the following: the lateral pressure of slurry measured about ten minutes after placement in a soil tank is that of a liquid, but the slurry shifts to soil-like behavior in a relatively short time; the earth pressure after three hours agreed with the static earth pressure calculated with K₀ = 0.2; and with cement at a certain mixing ratio a hardened body was obtained that stands by itself at a height of 7.5 m with a compressive strength of 1.77 kgf/cm². 11 figs., 2 tabs.
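
    The two pressure states reported above can be contrasted with a small worked example. Only the K0 = 0.2 coefficient and the 7.5 m height come from the trial; the slurry unit weight below is an assumed value for illustration:

```python
# Lateral pressure behind the bulkhead in the two states described in
# the trial: fresh slurry acting as a liquid, and hardened fill acting
# as a soil with K0 = 0.2. GAMMA is an assumed slurry unit weight, not
# a value reported in the trial.

GAMMA = 14.0   # assumed unit weight of the fly-ash slurry, kN/m^3
K0 = 0.2       # at-rest earth pressure coefficient reported in the trial

def liquid_pressure(z):
    """Lateral pressure (kPa) at depth z (m) while the slurry is liquid."""
    return GAMMA * z

def earth_pressure(z):
    """At-rest lateral earth pressure (kPa) once the fill behaves as soil."""
    return K0 * GAMMA * z

# At the reported 7.5 m placement height the soil-state pressure is a
# fifth of the liquid-state pressure, which is why the early transition
# from liquid to soil behavior matters for bulkhead loading.
z = 7.5
```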

  11. Describing qualitative research undertaken with randomised controlled trials in grant proposals: a documentary analysis.

    Science.gov (United States)

    Drabble, Sarah J; O'Cathain, Alicia; Thomas, Kate J; Rudolph, Anne; Hewison, Jenny

    2014-02-18

    There is growing recognition of the value of conducting qualitative research with trials in health research. It is timely to reflect on how this qualitative research is presented in grant proposals to identify lessons for researchers and research commissioners. As part of a larger study focusing on how to maximise the value of undertaking qualitative research with trials, we undertook a documentary analysis of proposals of funded studies. Using the metaRegister of Controlled Trials (mRCT) database we identified trials funded in the United Kingdom, ongoing between 2001 and 2010, and reporting the use of qualitative research. We requested copies of proposals from lead researchers. We extracted data from the proposals using closed and open questions, analysed using descriptive statistics and content analysis respectively. 2% (89/3812) of trials in the mRCT database described the use of qualitative research undertaken with the trial. From these 89 trials, we received copies of 36 full proposals, of which 32 met our inclusion criteria. 25% used less than a single paragraph to describe the qualitative research. The aims of the qualitative research described in these proposals focused mainly on the intervention or trial conduct. Just over half (56%) of the proposals included an explicit rationale for conducting the qualitative research with the trial, the most frequent being to optimise implementation into clinical practice or to interpret trial findings. Key information about methods, expertise and resources was missing in a large minority of proposals, in particular sample size, type of analysis, and non-personnel resources. 28% specifically stated that qualitative researchers would conduct the qualitative research. Our review of proposals of successfully funded studies identified good practice but also identified limited space given to describing the qualitative research, with an associated lack of attention to the rationale for doing the qualitative research and

  12. Design of verification platform for wireless vision sensor networks

    Science.gov (United States)

    Ye, Juanjuan; Shang, Fei; Yu, Chuang

    2017-08-01

    At present, the majority of research on wireless vision sensor networks (WVSNs) still remains at the software simulation stage, and very few verification platforms for WVSNs are available for use. This situation seriously restricts the transformation of WVSN theory research into practical application. Therefore, it is necessary to study the construction of a verification platform for WVSNs. This paper combines a wireless transceiver module, a visual information acquisition module and a power acquisition module, designs a high-performance wireless vision sensor node built around an ARM11 microprocessor, and selects AODV as the routing protocol to set up a verification platform for WVSNs called AdvanWorks. Experiments show that AdvanWorks can successfully achieve image acquisition, coding, and wireless transmission, and can obtain the effective distance parameters between nodes, which lays a good foundation for the follow-up application of WVSNs.

  13. Viability Study for an Unattended UF6 Cylinder Verification Station: Phase I Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Leon E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Miller, Karen A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Garner, James R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Branney, Sean [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); McDonald, Benjamin S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Webster, Jennifer B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zalavadia, Mital A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Todd, Lindsay C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kulisek, Jonathan A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Nordquist, Heather [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Deshmukh, Nikhil S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Stewart, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-05-31

    In recent years, the International Atomic Energy Agency (IAEA) has pursued innovative techniques and an integrated suite of safeguards measures to address the verification challenges posed by the front end of the nuclear fuel cycle. Among the unattended instruments currently being explored by the IAEA is an Unattended Cylinder Verification Station (UCVS) that could provide automated, independent verification of the declared relative enrichment, 235U mass, total uranium mass and identification for all declared UF6 cylinders in a facility (e.g., uranium enrichment plants and fuel fabrication plants). Under the auspices of the United States and European Commission Support Programs to the IAEA, a project was undertaken to assess the technical and practical viability of the UCVS concept. The US Support Program team consisted of Pacific Northwest National Laboratory (PNNL, lead), Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL) and Savannah River National Laboratory (SRNL). At the core of the viability study is a long-term field trial of a prototype UCVS system at a Westinghouse fuel fabrication facility. A key outcome of the study is a quantitative performance evaluation of two nondestructive assay (NDA) methods being considered for inclusion in a UCVS: Hybrid Enrichment Verification Array (HEVA), and Passive Neutron Enrichment Meter (PNEM). This report provides context for the UCVS concept and the field trial: potential UCVS implementation concepts at an enrichment facility; an overview of UCVS prototype design; field trial objectives and activities. Field trial results and interpretation are presented, with a focus on the performance of PNEM and HEVA for the assay of over 200 “typical” Type 30B cylinders, and the viability of an “NDA Fingerprint” concept as a high-fidelity means to periodically verify that the contents of a given cylinder are consistent with previous scans. A modeling study, combined with field
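
    The "NDA Fingerprint" concept described above amounts to comparing a cylinder's newly acquired signature against a stored baseline from an earlier scan. A hedged sketch follows; the Pearson-correlation test and the 0.99 threshold are illustrative stand-ins, not the actual UCVS analysis method:

```python
# Compare a cylinder's new NDA signature to its stored baseline and
# flag cylinders whose contents appear to have changed between scans.
# The correlation test and threshold are assumptions for illustration.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def fingerprint_consistent(baseline, new_scan, threshold=0.99):
    """True if the new signature matches the stored baseline."""
    return pearson(baseline, new_scan) >= threshold
```

    In this toy form, a rescan that closely tracks the baseline passes, while a scrambled signature (e.g. after a hypothetical diversion) falls below the threshold.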

  14. Embedded software verification and debugging

    CERN Document Server

    Winterholer, Markus

    2017-01-01

    This book provides comprehensive coverage of verification and debugging techniques for embedded software, which is frequently used in safety critical applications (e.g., automotive), where failures are unacceptable. Since the verification of complex systems needs to encompass the verification of both hardware and embedded software modules, this book focuses on verification and debugging approaches for embedded software with hardware dependencies. Coverage includes the entire flow of design, verification and debugging of embedded software and all key approaches to debugging, dynamic, static, and hybrid verification. This book discusses the current, industrial embedded software verification flow, as well as emerging trends with focus on formal and hybrid verification and debugging approaches. Includes in a single source the entire flow of design, verification and debugging of embedded software; Addresses the main techniques that are currently being used in the industry for assuring the quality of embedded softw...

  15. Verification in Referral-Based Crowdsourcing

    Science.gov (United States)

    Naroditskiy, Victor; Rahwan, Iyad; Cebrian, Manuel; Jennings, Nicholas R.

    2012-01-01

    Online social networks offer unprecedented potential for rallying a large number of people to accomplish a given task. Here we focus on information gathering tasks where rare information is sought through “referral-based crowdsourcing”: the information request is propagated recursively through invitations among members of a social network. Whereas previous work analyzed incentives for the referral process in a setting with only correct reports, misreporting is known to be both pervasive in crowdsourcing applications, and difficult/costly to filter out. A motivating example for our work is the DARPA Red Balloon Challenge where the level of misreporting was very high. In order to undertake a formal study of verification, we introduce a model where agents can exert costly effort to perform verification and false reports can be penalized. This is the first model of verification and it provides many directions for future research, which we point out. Our main theoretical result is the compensation scheme that minimizes the cost of retrieving the correct answer. Notably, this optimal compensation scheme coincides with the winning strategy of the Red Balloon Challenge. PMID:23071530
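
    The winning Red Balloon strategy that the abstract says coincides with the optimal compensation scheme can be sketched concretely. The halving scheme and dollar amounts below reflect the widely reported MIT approach and are illustrative, not drawn from the paper's formal model:

```python
# Recursive referral payouts in the style of the winning MIT strategy:
# the finder receives a base reward and each recruiter up the chain
# receives half of what their invitee receives.

def referral_payouts(chain_length, finder_reward=2000.0):
    """Payouts along a referral chain, finder first."""
    return [finder_reward / 2 ** i for i in range(chain_length)]

def total_cost(chain_length, finder_reward=2000.0):
    # Geometric series: total liability stays below 2 * finder_reward
    # regardless of how long the referral chain grows.
    return sum(referral_payouts(chain_length, finder_reward))
```

    The bounded total is what makes the scheme attractive: incentives propagate through the network while the cost of retrieving the correct answer stays capped.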

  16. Research gaps identified during systematic reviews of clinical trials: glass-ionomer cements

    Directory of Open Access Journals (Sweden)

    Mickenautsch Steffen

    2012-06-01

    Full Text Available Abstract Background To report the results of an audit concerning research gaps in clinical trials that were accepted for appraisal in authored and published systematic reviews regarding the application of glass-ionomer cements (GIC) in dental practice. Methods Information concerning research gaps in trial precision was extracted, following a framework that included classification of the research gap reasons: ‘imprecision of information (results)’, ‘biased information’, ‘inconsistency or unknown consistency’ and ‘not the right information’, as well as research gap characterization using PICOS elements: population (P), intervention (I), comparison (C), outcomes (O) and setting (S). Internal trial validity assessment was based on the understanding that successful control for systematic error cannot be assured on the basis of inclusion of adequate methods alone, but also requires empirical evidence about whether such an attempt was successful. Results A comprehensive and interconnected coverage of GIC-related clinical topics was established. The most common reasons found for gaps in trial precision were a lack of sufficient trials and a lack of sufficiently large sample sizes. Only a few research gaps were ascribed to ‘Lack of information’ caused by a focus on mainly surrogate trial outcomes. According to the chosen assessment criteria, a lack of adequate randomisation, allocation concealment and blinding/masking in trials covering all reviewed GIC topics was noted (selection- and detection/performance bias risk). Trial results appear to be less affected by loss-to-follow-up (attrition bias risk). Conclusion This audit represents an adjunct to the systematic review articles it has covered. Its results do not change the systematic reviews’ conclusions but highlight existing research gaps concerning the precision and internal validity of reviewed trials in detail. These gaps should be addressed in future GIC-related clinical research.

  17. Functions of social support and self-verification in association with loneliness, depression, and stress.

    Science.gov (United States)

    Wright, Kevin B; King, Shawn; Rosenberg, Jenny

    2014-01-01

    This study investigated the influence of social support and self-verification on loneliness, depression, and stress among 477 college students. The authors propose and test a theoretical model using structural equation modeling. The results indicated empirical support for the model, with self-verification mediating the relation between social support and health outcomes. The results have implications for social support and self-verification research, which are discussed along with directions for future research and limitations of the study.
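
    The mediation structure the model tests (social support -> self-verification -> outcome) can be illustrated with synthetic data and the product-of-coefficients indirect effect. The effect sizes below are invented; the study itself fitted a structural equation model to survey data:

```python
import random

# Synthetic mediation example: support -> selfver (path a), then
# selfver -> lonely (path b) plus a small direct effect of support.
# Effect sizes 0.6, -0.5, -0.1 are invented for illustration.
random.seed(0)
N = 500
support = [random.gauss(0, 1) for _ in range(N)]
selfver = [0.6 * s + random.gauss(0, 1) for s in support]
lonely = [-0.5 * v - 0.1 * s + random.gauss(0, 1)
          for v, s in zip(selfver, support)]

def slope(x, y):
    """Simple-regression slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / sum((a - mx) ** 2 for a in x)

def partial_slope(x1, x2, y):
    """Coefficient of x1 when y is regressed on both x1 and x2."""
    m1, m2, my = sum(x1) / len(x1), sum(x2) / len(x2), sum(y) / len(y)
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((b - m2) ** 2 for b in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    return (s22 * s1y - s12 * s2y) / (s11 * s22 - s12 ** 2)

a = slope(support, selfver)                  # support -> self-verification
b = partial_slope(selfver, support, lonely)  # controlling for support
indirect = a * b                             # mediated (indirect) effect
```

    A nonzero `indirect` alongside a shrunken direct path is the regression-level signature of the mediation the authors report via SEM.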

  18. Consumer input into research: the Australian Cancer Trials website.

    Science.gov (United States)

    Dear, Rachel F; Barratt, Alexandra L; Crossing, Sally; Butow, Phyllis N; Hanson, Susan; Tattersall, Martin Hn

    2011-06-26

    The Australian Cancer Trials website (ACTO) was publicly launched in 2010 to help people search for cancer clinical trials recruiting in Australia, provide information about clinical trials and assist with doctor-patient communication about trials. We describe consumer involvement in the design and development of ACTO and report our preliminary patient evaluation of the website. Consumers, led by Cancer Voices NSW, provided the impetus to develop the website. Consumer representative groups were consulted by the research team during the design and development of ACTO which combines a search engine, trial details, general information about trial participation and question prompt lists. Website use was analysed. A patient evaluation questionnaire was completed at one hospital, one week after exposure to the website. ACTO's main features and content reflect consumer input. In February 2011, it covered 1,042 cancer trials. Since ACTO's public launch in November 2010, until the end of February 2011, the website has had 2,549 new visits and generated 17,833 page views. In a sub-study of 47 patient users, 89% found the website helpful for learning about clinical trials and all respondents thought patients should have access to ACTO. The development of ACTO is an example of consumers working with doctors, researchers and policy makers to improve the information available to people whose lives are affected by cancer and to help them participate in their treatment decisions, including consideration of clinical trial enrolment. Consumer input has ensured that the website is informative, targets consumer priorities and is user-friendly. ACTO serves as a model for other health conditions.

  19. Software verification for nuclear industry

    International Nuclear Information System (INIS)

    Wilburn, N.P.

    1985-08-01

    The reasons why verification of software products throughout the software life cycle is necessary are considered. Concepts of verification, software verification planning, and some verification methodologies for products generated throughout the software life cycle are then discussed.

  20. The design of verification regimes

    International Nuclear Information System (INIS)

    Gallagher, N.W.

    1991-01-01

    Verification of a nuclear agreement requires more than knowledge of relevant technologies and institutional arrangements. It also demands thorough understanding of the nature of verification and the politics of verification design. Arms control efforts have been stymied in the past because key players agreed to verification in principle, only to disagree radically over verification in practice. In this chapter, it is shown that the success and stability of arms control endeavors can be undermined by verification designs which promote unilateral rather than cooperative approaches to security, and which may reduce, rather than enhance, the security of both sides. Drawing on logical analysis and practical lessons from previous superpower verification experience, this chapter summarizes the logic and politics of verification and suggests implications for South Asia. The discussion begins by determining what properties all forms of verification have in common, regardless of the participants or the substance and form of their agreement. Viewing verification as the political process of making decisions regarding the occurrence of cooperation points to four critical components: (1) determination of principles, (2) information gathering, (3) analysis and (4) projection. It is shown that verification arrangements differ primarily with regard to how effectively and by whom these four stages are carried out.

  1. Modality Switching in a Property Verification Task: An ERP Study of What Happens When Candles Flicker after High Heels Click.

    Science.gov (United States)

    Collins, Jennifer; Pecher, Diane; Zeelenberg, René; Coulson, Seana

    2011-01-01

    The perceptual modalities associated with property words, such as flicker or click, have previously been demonstrated to affect subsequent property verification judgments (Pecher et al., 2003). Known as the conceptual modality switch effect, this finding supports the claim that brain systems for perception and action help subserve the representation of concepts. The present study addressed the cognitive and neural substrate of this effect by recording event-related potentials (ERPs) as participants performed a property verification task with visual or auditory properties in key trials. We found that for visual property verifications, modality switching was associated with an increased amplitude N400. For auditory verifications, switching led to a larger late positive complex. Observed ERP effects of modality switching suggest property words access perceptual brain systems. Moreover, the timing and pattern of the effects suggest perceptual systems impact the decision-making stage in the verification of auditory properties, and the semantic stage in the verification of visual properties.

  2. Describing qualitative research undertaken with randomised controlled trials in grant proposals: a documentary analysis

    Science.gov (United States)

    2014-01-01

    Background There is growing recognition of the value of conducting qualitative research with trials in health research. It is timely to reflect on how this qualitative research is presented in grant proposals to identify lessons for researchers and research commissioners. As part of a larger study focusing on how to maximise the value of undertaking qualitative research with trials, we undertook a documentary analysis of proposals of funded studies. Methods Using the metaRegister of Controlled Trials (mRCT) database we identified trials funded in the United Kingdom, ongoing between 2001 and 2010, and reporting the use of qualitative research. We requested copies of proposals from lead researchers. We extracted data from the proposals using closed and open questions, analysed using descriptive statistics and content analysis respectively. Results 2% (89/3812) of trials in the mRCT database described the use of qualitative research undertaken with the trial. From these 89 trials, we received copies of 36 full proposals, of which 32 met our inclusion criteria. 25% used less than a single paragraph to describe the qualitative research. The aims of the qualitative research described in these proposals focused mainly on the intervention or trial conduct. Just over half (56%) of the proposals included an explicit rationale for conducting the qualitative research with the trial, the most frequent being to optimise implementation into clinical practice or to interpret trial findings. Key information about methods, expertise and resources was missing in a large minority of proposals, in particular sample size, type of analysis, and non-personnel resources. 28% specifically stated that qualitative researchers would conduct the qualitative research. 
Conclusions Our review of proposals of successfully funded studies identified good practice but also identified limited space given to describing the qualitative research, with an associated lack of attention to the rationale for

  3. When ethics constrains clinical research: trial design of control arms in "greater than minimal risk" pediatric trials.

    Science.gov (United States)

    de Melo-Martín, Inmaculada; Sondhi, Dolan; Crystal, Ronald G

    2011-09-01

    For more than three decades clinical research in the United States has been explicitly guided by the idea that ethical considerations must be central to research design and practice. In spite of the centrality of this idea, attempting to balance the sometimes conflicting values of advancing scientific knowledge and protecting human subjects continues to pose challenges. Possible conflicts between the standards of scientific research and those of ethics are particularly salient in relation to trial design. Specifically, the choice of a control arm is an aspect of trial design in which ethical and scientific issues are deeply entwined. Although ethical quandaries related to the choice of control arms may arise when conducting any type of clinical trials, they are conspicuous in early phase gene transfer trials that involve highly novel approaches and surgical procedures and have children as the research subjects. Because of children's and their parents' vulnerabilities, in trials that investigate therapies for fatal, rare diseases affecting minors, the scientific and ethical concerns related to choosing appropriate controls are particularly significant. In this paper we use direct gene transfer to the central nervous system to treat late infantile neuronal ceroid lipofuscinosis to illustrate some of these ethical issues and explore possible solutions to real and apparent conflicts between scientific and ethical considerations.

  4. Dosimetry for audit and clinical trials: challenges and requirements

    International Nuclear Information System (INIS)

    Kron, T; Haworth, A; Williams, I

    2013-01-01

    Many important dosimetry audit networks for radiotherapy have their roots in clinical trial quality assurance (QA). In both scenarios it is essential to test two issues: does the treatment plan conform with the clinical requirements, and is the plan a reasonable representation of what is actually delivered to a patient throughout their course of treatment. Part of a sound quality program would be an external audit of these issues, with verification of the equivalence of plan and treatment typically referred to as a dosimetry audit. The increasing complexity of radiotherapy planning and delivery makes audits challenging. While verification of the absolute dose delivered at a reference point was the standard for external dosimetry audits two decades ago, this is often deemed inadequate for verification of treatment approaches such as Intensity Modulated Radiation Therapy (IMRT) and Volumetric Modulated Arc Therapy (VMAT). As such, most dosimetry audit networks have successfully introduced more complex tests of dose delivery using anthropomorphic phantoms that can be imaged, planned and treated as a patient would be. The new challenge is to adapt this approach to ever more diversified radiotherapy procedures, with image-guided/adaptive radiotherapy, motion management and brachytherapy being the focus of current research.
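
    Phantom-based comparisons of planned and delivered dose are commonly scored with a gamma index, which combines a dose tolerance with a distance-to-agreement tolerance. A minimal 1-D sketch follows (illustrative only; clinical audits use validated 3-D implementations):

```python
# 1-D gamma index: for each reference point, take the minimum combined
# dose-difference / distance-to-agreement metric over the evaluated
# profile. Tolerances (3% of local dose, 3 mm) are conventional
# clinical defaults; this sketch assumes nonzero reference doses.

def gamma_values(ref, evaluated, dose_tol=0.03, dist_tol=3.0):
    """Gamma at each reference (position_mm, dose) point; <= 1 passes."""
    out = []
    for rp, rd in ref:
        out.append(min(
            ((ep - rp) ** 2 / dist_tol ** 2
             + (ed - rd) ** 2 / (dose_tol * rd) ** 2) ** 0.5
            for ep, ed in evaluated))
    return out

# Identical planned and measured profiles pass everywhere (gamma = 0).
profile = [(0.0, 1.00), (1.0, 0.98), (2.0, 0.95), (3.0, 0.90)]
all_pass = all(g <= 1.0 for g in gamma_values(profile, profile))
```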

  5. DEPSCOR: Research on ARL's Intelligent Control Architecture: Hierarchical Hybrid-Model Based Design, Verification, Simulation, and Synthesis of Mission Control for Autonomous Underwater Vehicles

    National Research Council Canada - National Science Library

    Kumar, Ratnesh; Holloway, Lawrence E

    2007-01-01

    ... modeling, verification, simulation and automated synthesis of coordinators has led to research in this area. We have worked and are working on these issues with the Applied Research Laboratory (ARL) at Pennsylvania State University (PSU), which has designed autonomous underwater vehicles for over 50 years, primarily under the support of the U.S. Navy through the Office of Naval Research (ONR).

  6. Physics Verification Overview

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-12

    The purpose of the verification project is to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); to evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and to develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.
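
    The convergence analysis at the heart of code verification can be illustrated with the standard observed-order-of-accuracy calculation; a generic sketch, not the project's actual tooling:

```python
import math

# Observed order of accuracy from errors on two grids refined by a
# factor r: if error ~ C * h^p, then p = log(e_coarse / e_fine) / log(r).

def observed_order(e_coarse, e_fine, r=2.0):
    """Estimate the convergence order p under grid refinement by r."""
    return math.log(e_coarse / e_fine) / math.log(r)

# Halving h and seeing the error drop ~4x indicates second-order
# convergence; checking that this matches the scheme's formal order
# is the core code-verification test.
p = observed_order(1.0e-2, 2.5e-3)
```

    The same calculation, applied without an exact solution via Richardson extrapolation, underpins the solution-verification uncertainty estimates the abstract mentions.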

  7. Identifying research priorities for effective retention strategies in clinical trials.

    Science.gov (United States)

    Kearney, Anna; Daykin, Anne; Shaw, Alison R G; Lane, Athene J; Blazeby, Jane M; Clarke, Mike; Williamson, Paula; Gamble, Carrol

    2017-08-31

    The failure to retain patients or to collect primary-outcome data is a common challenge for trials; it reduces statistical power and can introduce bias into the analysis. Identifying strategies to minimise missing data was the second-highest methodological research priority in a Delphi survey of the directors of UK Clinical Trials Units (CTUs) and is important for minimising waste in research. Our aim was to assess current retention practices within the UK and priorities for future research to evaluate the effectiveness of strategies to reduce attrition. Seventy-five chief investigators of NIHR Health Technology Assessment (HTA)-funded trials starting between 2009 and 2012 were surveyed to elicit their awareness of causes of missing data within their trial and recommended practices for improving retention. Forty-seven CTUs registered within the UKCRC network were surveyed separately to identify approaches and strategies being used to mitigate missing data across trials. Responses from the current-practice surveys were used to inform a subsequent two-round Delphi survey with registered CTUs. A consensus list of retention research strategies was produced and ranked by priority. Fifty of seventy-five (67%) chief investigators and 33/47 (70%) registered CTUs completed the current-practice surveys. Seventy-eight percent of trialists were aware of retention challenges and implemented strategies at trial design. Patient-initiated withdrawal was the most common cause of missing data. Registered CTUs routinely used newsletters, timelines of participant visits, and telephone reminders to mitigate missing data. Whilst 36 of the 59 strategies presented had been formally or informally evaluated, some frequently used strategies, such as site-initiation training, have had no research to inform practice. Thirty-five registered CTUs (74%) participated in the Delphi survey. Research into the effectiveness of site-initiation training, frequency of patient contact
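    The link between attrition and statistical power noted above is routinely handled at the design stage by inflating the recruitment target. The record describes no formula; the following is the standard planning adjustment, shown only as an illustration:

```python
import math

def adjusted_sample_size(n_required, expected_attrition):
    """Inflate the sample size at the design stage so the target number of
    analysable participants remains after the expected loss to follow-up."""
    if not 0.0 <= expected_attrition < 1.0:
        raise ValueError("expected_attrition must be in [0, 1)")
    return math.ceil(n_required / (1.0 - expected_attrition))

# 300 analysable participants needed, 25% attrition anticipated
print(adjusted_sample_size(300, 0.25))  # 400
```

    Note this adjustment only preserves power; it does nothing about the bias that differential dropout can introduce, which is why the retention strategies surveyed here matter.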

  8. Consumer input into research: the Australian Cancer Trials website

    Directory of Open Access Journals (Sweden)

    Butow Phyllis N

    2011-06-01

    Background: The Australian Cancer Trials website (ACTO) was publicly launched in 2010 to help people search for cancer clinical trials recruiting in Australia, provide information about clinical trials and assist with doctor-patient communication about trials. We describe consumer involvement in the design and development of ACTO and report our preliminary patient evaluation of the website. Methods: Consumers, led by Cancer Voices NSW, provided the impetus to develop the website. Consumer representative groups were consulted by the research team during the design and development of ACTO, which combines a search engine, trial details, general information about trial participation and question prompt lists. Website use was analysed. A patient evaluation questionnaire was completed at one hospital, one week after exposure to the website. Results: ACTO's main features and content reflect consumer input. In February 2011, it covered 1,042 cancer trials. From its public launch in November 2010 until the end of February 2011, the website had 2,549 new visits and generated 17,833 page views. In a sub-study of 47 patient users, 89% found the website helpful for learning about clinical trials and all respondents thought patients should have access to ACTO. Conclusions: The development of ACTO is an example of consumers working with doctors, researchers and policy makers to improve the information available to people whose lives are affected by cancer and to help them participate in their treatment decisions, including consideration of clinical trial enrolment. Consumer input has ensured that the website is informative, targets consumer priorities and is user-friendly. ACTO serves as a model for other health conditions.

  9. Understanding and Improving Recruitment to Randomised Controlled Trials: Qualitative Research Approaches.

    Science.gov (United States)

    Elliott, Daisy; Husbands, Samantha; Hamdy, Freddie C; Holmberg, Lars; Donovan, Jenny L

    2017-11-01

    The importance of evidence from randomised trials is now widely recognised, although recruitment is often difficult. Qualitative research has shown promise in identifying the key barriers to recruitment, and interventions have been developed to reduce organisational difficulties and support clinicians undertaking recruitment. This article provides an introduction to qualitative research techniques and explains how this approach can be used to understand, and subsequently improve, recruitment and informed consent within a range of clinical trials. A literature search was performed using Medline, Embase, and CINAHL. All studies with qualitative research methods that focused on the recruitment activity of clinicians were included in the review. The majority of studies reported that organisational difficulties and lack of time for clinical staff were key barriers to recruitment. However, a synthesis of qualitative studies highlighted the intellectual and emotional challenges that arise when combining research with clinical roles, particularly in relation to equipoise and patient eligibility. To support recruiters in becoming more comfortable with the design and principles of randomised controlled trials, interventions have been developed, including the QuinteT Recruitment Intervention, which comprises in-depth investigation of recruitment obstacles in real time, followed by implementation of tailored strategies to address these challenges as the trial proceeds. Qualitative research can provide important insights into the complexities of recruitment to trials, inform the development of interventions, and provide support and training initiatives as required. Investigators should consider implementing such methods in trials expected to be challenging or recruiting below target. Qualitative research is a term used to describe a range of methods that can be implemented to understand participants' perspectives and behaviours. Data are gathered from interviews, focus groups

  10. Does clinical equipoise apply to cluster randomized trials in health research?

    Science.gov (United States)

    2011-01-01

    This article is part of a series of papers examining ethical issues in cluster randomized trials (CRTs) in health research. In the introductory paper in this series, Weijer and colleagues set out six areas of inquiry that must be addressed if the cluster trial is to be set on a firm ethical foundation. This paper addresses the third of the questions posed, namely, does clinical equipoise apply to CRTs in health research? The ethical principle of beneficence is the moral obligation not to harm needlessly and, when possible, to promote the welfare of research subjects. Two related ethical problems have been discussed in the CRT literature. First, are control groups that receive only usual care unduly disadvantaged? Second, when accumulating data suggests the superiority of one intervention in a trial, is there an ethical obligation to act? In individually randomized trials involving patients, similar questions are addressed by the concept of clinical equipoise, that is, the ethical requirement that, at the start of a trial, there be a state of honest, professional disagreement in the community of expert practitioners as to the preferred treatment. Since CRTs may not involve physician-researchers and patient-subjects, the applicability of clinical equipoise to CRTs is uncertain. Here we argue that clinical equipoise may be usefully grounded in a trust relationship between the state and research subjects, and, as a result, clinical equipoise is applicable to CRTs. Clinical equipoise is used to argue that control groups receiving only usual care are not disadvantaged so long as the evidence supporting the experimental and control interventions is such that experts would disagree as to which is preferred. Further, while data accumulating during the course of a CRT may favor one intervention over another, clinical equipoise supports continuing the trial until the results are likely to be broadly convincing, often coinciding with the planned completion of the trial

  11. Evaluation verification facilities (EVF) at MINT: concept and implementation

    International Nuclear Information System (INIS)

    Mohamed Hairul Hasmoni; Abd Nassir Ibrahim; Ab Razak Hamzah

    2003-01-01

    The EVF facilities and components available are described comprehensively. The objectives of establishing the EVF as a National Centre for non-destructive testing (NDT) are discussed for various activities: method and equipment validation, R&D on quantitative NDT techniques, training and certification, and defect characterization. For the activities available at the EVF to succeed, it is vital that industry participates through funding, sponsorship and knowledge sharing. The Malaysian Institute for Nuclear Technology Research (MINT) has invested substantially in this facility and is ready to share it under various mechanisms such as a memorandum of understanding (MOU), memorandum of agreement (MOA), contract research or letter of agreement. The facility would be open to industry, and members of the NDT community are welcome to conduct trials and discuss particular areas of interest with others in the industry. Optimising the facility by utilising the components already available and adding new ones would make the EVF a national centre for NDT and a centre of excellence. This paper reviews the concept and implementation of an Evaluation Verification Facility (EVF) at MINT. The types and designs of the facilities available are described and characterised by their NDT usage. (Author)

  12. Building a Simulated Environment for the Study of Multilateral Approaches to Nuclear Materials Verification

    International Nuclear Information System (INIS)

    Moul, R.; Persbo, A.; Keir, D.

    2015-01-01

    Verification research can be resource-intensive, particularly when it relies on practical or field exercises. These exercises can also involve substantial logistical preparation and are difficult to run in an iterative manner to produce data sets that can later be utilized in verification research. This paper presents the conceptual framework, methodology and preliminary findings from part of a multi-year research project led by VERTIC. The multi-component simulated environment that we have generated, using existing computer models for nuclear reactors and other components of fuel cycles, can be used to investigate options for future multilateral nuclear verification at a variety of locations and time points in a nuclear complex. We have constructed detailed fuel cycle simulations for two fictional, and very different, states. In addition to these mass-flow models, a 3-dimensional, avatar-based simulation of a nuclear facility is under development. We have also developed accompanying scenarios that provide the legal and procedural assumptions governing our fictional verification solutions. These tools have all been produced using open-source information and software. While they are valuable for research purposes, they can also play an important role in supporting training and education in the field of nuclear materials verification, in a variety of settings and circumstances. (author)
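    The mass-flow models mentioned here track declared transfers of material between facilities. As a purely illustrative sketch (the facility names, quantities and conservation check below are hypothetical, not taken from the VERTIC models), the bookkeeping can look like this:

```python
def run_mass_flow(initial_kg, transfers):
    """Apply declared (source, destination, kg) transfers to facility
    inventories and check that total material is conserved, mirroring the
    declaration-consistency checks a mass-flow model supports."""
    inv = dict(initial_kg)
    total_before = sum(inv.values())
    for src, dst, kg in transfers:
        if inv.get(src, 0.0) + 1e-9 < kg:
            raise ValueError(f"declared transfer exceeds inventory at {src!r}")
        inv[src] -= kg
        inv[dst] = inv.get(dst, 0.0) + kg
    assert abs(sum(inv.values()) - total_before) < 1e-6  # conservation check
    return inv

inv = run_mass_flow(
    {"conversion": 1000.0, "enrichment": 0.0, "fuel_fab": 0.0},
    [("conversion", "enrichment", 600.0), ("enrichment", "fuel_fab", 150.0)],
)
print(inv["enrichment"], inv["fuel_fab"])  # 450.0 150.0
```

    An inconsistency between declared transfers and inventories is exactly the kind of signal a simulated verification exercise can be designed to plant and then detect.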

  13. Top-down design and verification methodology for analog mixed-signal integrated circuits

    NARCIS (Netherlands)

    Beviz, P.

    2016-01-01

    This report introduces a novel Top-Down Design and Verification methodology for AMS integrated circuits. With the introduction of the new design and verification flow, more reliable and efficient development of AMS ICs is possible. The assignment incorporated the research on the

  14. FMCT verification: Case studies

    International Nuclear Information System (INIS)

    Hui Zhang

    2001-01-01

    Full text: How to manage the trade-off between the need for transparency and concern about the disclosure of sensitive information would be a key issue during the negotiation of an FMCT verification provision. This paper explores general concerns about FMCT verification and demonstrates what verification measures might be applied to reprocessing and enrichment plants. A primary goal of an FMCT will be to have the five declared nuclear weapon states, and the three states that operate unsafeguarded nuclear facilities, become parties. One focus in negotiating the FMCT will be verification, and appropriate verification measures should be applied in each case. Most importantly, FMCT verification would focus, in the first instance, on these states' fissile material production facilities. After the FMCT enters into force, all these facilities should be declared. Some would continue operating to produce civil nuclear power or to produce fissile material for non-explosive military uses. The verification measures necessary for these operating facilities would be essentially IAEA safeguards, as currently applied to non-nuclear weapon states under the NPT. However, some production facilities would be declared and shut down; thus, one important task of FMCT verification will be to confirm the status of these closed facilities. As case studies, this paper focuses on the verification of shutdown facilities. The FMCT verification system for former military facilities would have to differ in some ways from traditional IAEA safeguards. For example, there could be concerns about the potential loss of sensitive information at these facilities or at collocated facilities, and some safeguards measures, such as environmental sampling, might be seen as too intrusive. Thus, effective but less intrusive verification measures may be needed. Some sensitive nuclear facilities would be subject for the first time to international inspections, which could raise concerns

  15. Inspector measurement verification activities

    International Nuclear Information System (INIS)

    George, R.S.; Crouch, R.

    The most difficult and complex activity facing a safeguards inspector involves the verification of measurements and of the performance of the measurement system. Remeasurement is the key to measurement verification activities. Remeasurements using the facility's measurement system provide the bulk of the data needed for determining the performance of that system, while remeasurements by reference laboratories are also important for evaluating the measurement system and determining systematic errors. The use of these measurement verification activities in conjunction with accepted inventory verification practices provides a better basis for accepting or rejecting an inventory. (U.S.)
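    One simple way to look for the systematic errors this record mentions is a paired comparison of operator declarations against inspector remeasurements. The sketch below is a generic paired t-style check, not a description of any actual safeguards procedure; the data are made up:

```python
from math import sqrt
from statistics import mean, stdev

def paired_bias(operator, inspector):
    """Paired comparison of operator declarations against inspector
    remeasurements: returns (mean difference, t statistic). A |t| well
    above ~2 suggests a systematic error rather than random scatter."""
    diffs = [o - i for o, i in zip(operator, inspector)]
    d_bar = mean(diffs)
    se = stdev(diffs) / sqrt(len(diffs))   # standard error of the mean difference
    return d_bar, d_bar / se

# hypothetical data: operator values sit roughly 0.5 units above remeasurements
op  = [10.6, 11.4, 9.5, 10.7, 11.5, 10.3]
ins = [10.0, 10.9, 9.1, 10.1, 11.0, 9.9]
bias, t = paired_bias(op, ins)
```

    A large, consistent mean difference points to a calibration problem in the facility's measurement system rather than random measurement error.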

  16. Verification and disarmament

    Energy Technology Data Exchange (ETDEWEB)

    Blix, H. [IAEA, Vienna (Austria)

    1998-07-01

    The main features of the IAEA safeguards verification system that non-nuclear weapon states parties to the NPT are obliged to accept are described. Verification activities and problems in Iraq and North Korea are discussed.

  17. Verification and disarmament

    International Nuclear Information System (INIS)

    Blix, H.

    1998-01-01

    The main features of the IAEA safeguards verification system that non-nuclear weapon states parties to the NPT are obliged to accept are described. Verification activities and problems in Iraq and North Korea are discussed

  18. The Ethics of Clinical Trials Research in Severe Mood Disorders.

    Science.gov (United States)

    Nugent, Allison C; Miller, Franklin G; Henter, Ioline D; Zarate, Carlos A

    2017-07-01

    Mood disorders, including major depressive disorder (MDD) and bipolar disorder (BD), are highly prevalent, frequently disabling, and sometimes deadly. Additional research and more effective medications are desperately needed, but clinical trials research in mood disorders is fraught with ethical issues. Although many authors have discussed these issues, most do so from a theoretical viewpoint. This manuscript uses available empirical data to inform a discussion of the primary ethical issues raised in mood disorders research. These include issues of consent and decision-making capacity, including patients' motivations for participating in research. We also address drug withdrawals, placebo controls, and the overall safety of research. Finally, we examine the extant literature for studies discussing potential indirect benefits of clinical trials research to participants. Taken together, the evidence suggests that clinical trials research incorporating drug withdrawals and placebo controls can be conducted safely and ethically, even in patients with severe or treatment-resistant mood disorders. In fact, given the dearth of effective treatment options for this population, it is our opinion that a moral imperative exists to extend the offer of research participation to severely ill or treatment-resistant groups. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  19. Supporting open access to clinical trial data for researchers: The Duke Clinical Research Institute-Bristol-Myers Squibb Supporting Open Access to Researchers Initiative.

    Science.gov (United States)

    Pencina, Michael J; Louzao, Darcy M; McCourt, Brian J; Adams, Monique R; Tayyabkhan, Rehbar H; Ronco, Peter; Peterson, Eric D

    2016-02-01

    There are growing calls for sponsors to increase transparency by providing access to clinical trial data. In response, Bristol-Myers Squibb and the Duke Clinical Research Institute have collaborated on a new initiative, Supporting Open Access to Researchers. The aim is to facilitate open sharing of Bristol-Myers Squibb trial data with interested researchers. Key features of the Supporting Open Access to Researchers data sharing model include an independent review committee that ensures expert consideration of each proposal, stringent data deidentification/anonymization and protection of patient privacy, requirement of prespecified statistical analysis plans, and independent review of manuscripts before submission for publication. We believe that these approaches will promote open science by allowing investigators to verify trial results as well as to pursue interesting secondary uses of trial data without compromising scientific integrity. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. The research for the design verification of nuclear power plant based on VR dynamic plant

    International Nuclear Information System (INIS)

    Wang Yong; Yu Xiao

    2015-01-01

    This paper studies a new method of design verification based on a VR plant, in order to verify and validate that the plant design conforms to the requirements of accident emergency response. The VR dynamic plant is built from the 3D design model and digital maps composed of a GIS system and indoor maps, and is driven by analysis data from the design analyzer. The VR plant can present both the normal operating conditions and the accident conditions of the power plant. Based on the VR dynamic plant, this paper simulates the execution of accident procedures, the development of accidents, the evacuation planning of people and so on, to ensure that the plant design will not cause adverse effects. Besides design verification, the simulation results can also be used for optimization of the accident emergency plan, training on the accident plan, and emergency accident treatment. (author)

  1. An Unattended Verification Station for UF6 Cylinders: Development Status

    International Nuclear Information System (INIS)

    Smith, E.; McDonald, B.; Miller, K.; Garner, J.; March-Leuba, J.; Poland, R.

    2015-01-01

    In recent years, the International Atomic Energy Agency (IAEA) has pursued innovative techniques and an integrated suite of safeguards measures to address the verification challenges posed by advanced centrifuge technologies and the growth in separative work unit capacity at modern centrifuge enrichment plants. These measures would include permanently installed, unattended instruments capable of performing the routine and repetitive measurements previously performed by inspectors. Among the unattended instruments currently being explored by the IAEA is an Unattended Cylinder Verification Station (UCVS) that could provide independent verification of the declared relative enrichment, U-235 mass and total uranium mass of all declared cylinders moving through the plant, as well as the application and verification of a ''Non-destructive Assay Fingerprint'' to preserve verification knowledge on the contents of each cylinder throughout its life in the facility. As the IAEA's vision for a UCVS has evolved, Pacific Northwest National Laboratory (PNNL) and Los Alamos National Laboratory have been developing and testing candidate non-destructive assay (NDA) methods for inclusion in a UCVS. Modeling and multiple field campaigns have indicated that these methods are capable of assaying relative cylinder enrichment with a precision comparable to or substantially better than today's high-resolution handheld devices, without the need for manual wall-thickness corrections. In addition, the methods interrogate the full volume of the cylinder, thereby offering the IAEA a new capability to assay the absolute U-235 mass in the cylinder, and much-improved sensitivity to substituted or removed material. Building on this prior work, and under the auspices of the United States Support Programme to the IAEA, a UCVS field prototype is being developed and tested. This paper provides an overview of: a) hardware and software design of the prototypes, b) preparation
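    The record does not detail the HEVA or PNEM algorithms, but as background, many gamma-based enrichment assays rest on the classical "enrichment meter" principle: for a sufficiently thick uranium deposit, the net 186 keV count rate scales linearly with the U-235 fraction. The sketch below shows only that principle; the calibration constant is hypothetical and real systems must account for cylinder wall attenuation, which is exactly the correction the methods above aim to avoid doing manually:

```python
def enrichment_from_186keV(net_count_rate_cps, cal_cps_per_percent):
    """Enrichment-meter principle: the net 186 keV gamma count rate from an
    'infinitely thick' uranium sample is proportional to percent U-235, so a
    single calibration constant converts count rate to enrichment."""
    return net_count_rate_cps / cal_cps_per_percent

# hypothetical calibration: 20 cps per percent U-235 for a fixed geometry
print(enrichment_from_186keV(90.0, 20.0))  # 4.5
```

    In practice the calibration constant is determined with reference standards in the same measurement geometry.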

  2. Practical and conceptual issues of clinical trial registration for Brazilian researchers

    Directory of Open Access Journals (Sweden)

    Carolina Gomes Freitas

    Full Text Available CONTEXT AND OBJECTIVE: Clinical trial registration is a prerequisite for publication in respected scientific journals. Recent Brazilian regulations also require registration of some clinical trials in the Brazilian Clinical Trials Registry (ReBEC), but there is little information available about the practical issues involved in the registration process. This article discusses the importance of clinical trial registration and the practical issues involved in this process. DESIGN AND SETTING: Descriptive study conducted by researchers within a postgraduate program at a public university in São Paulo, Brazil. METHODS: Information was obtained from clinical trial registry platforms, article reference lists and websites (last search: September 2014) on the following topics: definition of a clinical trial; history, purpose and importance of registry platforms; the information that should be registered; and the registration process. RESULTS: Clinical trial registration aims to avoid publication bias and is required by Brazilian journals indexed in LILACS and SciELO and by journals affiliated to the International Committee of Medical Journal Editors (ICMJE). Recent Brazilian regulations require that all clinical trials (phases I to IV) involving new drugs to be marketed in this country must be registered in ReBEC. The pros and cons of using different clinical trial registration platforms are discussed. CONCLUSIONS: Clinical trial registration is important and various mechanisms to enforce its implementation now exist. Researchers should take into account national regulations and publication requirements when choosing the platform on which they will register their trial.

  3. Clinical Trials: A Crucial Key to Human Health Research

    Science.gov (United States)

    Clinical Trials: A Crucial Key to Human Health Research (Summer 2006 issue). At the forefront of human health research today are clinical trials—studies that use ...

  4. Accuracy of self-reported smoking abstinence in clinical trials of hospital-initiated smoking interventions.

    Science.gov (United States)

    Scheuermann, Taneisha S; Richter, Kimber P; Rigotti, Nancy A; Cummins, Sharon E; Harrington, Kathleen F; Sherman, Scott E; Zhu, Shu-Hong; Tindle, Hilary A; Preacher, Kristopher J

    2017-12-01

    To estimate the prevalence and predictors of failed biochemical verification of self-reported abstinence among participants enrolled in trials of hospital-initiated smoking cessation interventions. Comparison of characteristics between participants who verified and those who failed to verify self-reported abstinence. Multi-site randomized clinical trials conducted between 2010 and 2014 in hospitals throughout the United States. Recently hospitalized smokers who reported tobacco abstinence 6 months post-randomization and provided a saliva sample for verification purposes (n = 822). Outcomes were salivary cotinine-verified smoking abstinence at 10 and 15 ng/ml cut-points. Predictors and correlates included participant demographics and tobacco use; hospital diagnoses and treatment; and study characteristics collected via surveys and electronic medical records. Usable samples were returned by 69.8% of the 1178 eligible trial participants who reported 7-day point prevalence abstinence. The proportion of participants verified as quit was 57.8% [95% confidence interval (CI) = 54.4, 61.2; 10 ng/ml cut-off] or 60.6% (95% CI = 57.2, 63.9; 15 ng/ml). Factors associated independently with verification at 10 ng/ml were education beyond high school [odds ratio (OR) = 1.51; 95% CI = 1.07, 2.11], continuous abstinence since hospitalization (OR = 2.82; 95% CI = 2.02, 3.94), mailed versus in-person sample (OR = 3.20; 95% CI = 1.96, 5.21) and race. African American participants were less likely to verify abstinence than white participants (OR = 0.64; 95% CI = 0.44, 0.93). Findings were similar for verification at 15 ng/ml. Verification rates did not differ by treatment group. In the United States, a high proportion (40%) of recently hospitalized smokers enrolled in smoking cessation trials fail biochemical verification of their self-reported abstinence.
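    The verification rule described in this record is a simple threshold test on the cotinine assay. As a minimal sketch (function name and sample values are illustrative, not from the study):

```python
def verified_abstinent(cotinine_ng_ml, cutoff_ng_ml=10.0):
    """A self-reported quitter counts as biochemically verified when the
    salivary cotinine concentration falls below the chosen cut-point; the
    study above reports results at both 10 and 15 ng/ml."""
    return cotinine_ng_ml < cutoff_ng_ml

samples = [3.2, 8.9, 12.5, 40.0]
print([verified_abstinent(c) for c in samples])                     # [True, True, False, False]
print([verified_abstinent(c, cutoff_ng_ml=15.0) for c in samples])  # [True, True, True, False]
```

    As the two runs show, the choice of cut-point alone can move borderline participants between "verified" and "failed", which is why the study reports both thresholds.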

  5. Shedding light on research participation effects in behaviour change trials: a qualitative study examining research participant experiences.

    Science.gov (United States)

    MacNeill, Virginia; Foley, Marian; Quirk, Alan; McCambridge, Jim

    2016-01-29

    The sequence of events in a behaviour change trial involves interactions between research participants and the trial process. Taking part in such a study has the potential to influence the behaviour of the participant, and if it does, this can engender bias in trial outcomes. Since participants' experience has received scant attention, the aim of this study is to generate hypotheses about which aspects of the conduct of behaviour change trials might matter most to participants, and thus have the potential to alter subsequent behaviours and bias trial outcomes. Twenty participants were opportunistically screened for a health-compromising behaviour (unhealthy diet, lack of exercise, smoking or alcohol consumption) and recruited if eligible. Semi-structured face-to-face interviews were conducted after going through the usual processes involved in trial recruitment, baseline assessment and randomisation. Participants were given information on the contents of an intervention or control condition in a behaviour change trial, which was not actually implemented. Three months later they returned to reflect on these experiences and whether they had any effect on their behaviour during the intervening period. Data from the latter interview were analysed thematically using a modified grounded theory approach. The early processes of trial participation raised awareness of unhealthy behaviours, although most reported having had only fleeting intentions to change their behaviour as a result of taking part in this study, in the absence of interventions. However, careful examination of the accounts revealed evidence of subtle research participation effects, which varied according to the health behaviour and its perceived social acceptability. Participants' relationships with the research study were viewed as somewhat important in stimulating thinking about whether and how to make lifestyle changes. These participants described no dramatic impacts attributable to taking part in

  6. Behaviour Protocols Verification: Fighting State Explosion

    Czech Academy of Sciences Publication Activity Database

    Mach, M.; Plášil, František; Kofroň, Jan

    2005-01-01

    Roč. 6, č. 2 (2005), s. 22-30 ISSN 1525-9293 R&D Projects: GA ČR(CZ) GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords: formal verification * software components * state explosion * behavior protocols * parse trees Subject RIV: JC - Computer Hardware; Software

  7. Big-pharmaceuticalisation: clinical trials and Contract Research Organisations in India.

    Science.gov (United States)

    Sariola, Salla; Ravindran, Deapica; Kumar, Anand; Jeffery, Roger

    2015-04-01

    The World Trade Organisation's Trade-Related Intellectual Property Rights (TRIPS) agreement aimed to harmonise intellectual property rights and patent protection globally. In India, the signing of this agreement resulted in a sharp increase in clinical trials from 2005. The Indian government, along with larger Indian pharmaceutical companies, believed that they could change existing commercial research cultures through the promotion of basic research as well as by attracting international clinical trials, and thus create an international-level, innovation-based drug industry. The effects of the growth of these outsourced and off-shored clinical trials on local commercial knowledge production in India are still unclear. What has been the impact of the increasing scale and commercialisation of clinical research on corporate science in India? In this paper we describe Big-pharmaceuticalisation in India, whereby the local pharmaceutical industry is moving from generic manufacturing to innovative research. Using conceptual frameworks of pharmaceuticalisation and innovation, this paper analyses data from research conducted in 2010-2012 and describes how Contract Research Organisations (CROs) enable the outsourcing of randomised controlled trials to India. Focussing on twenty-five semi-structured interviews with CRO staff, we chart the changes in the Indian pharmaceutical industry and the implications for local research cultures. We use Big-pharmaceuticalisation to extend the notion of pharmaceuticalisation to describe the spread of pharmaceutical research globally, and illustrate how TRIPS has encouraged a concentration of capital in India, with large companies gaining increasing market share and using their market power to rewrite regulations and introduce new regulatory practices in their own interest. Contract Research Organisations, with relevant, new, epistemic skills and capacities, are both manifestations of the changes in commercial research cultures, as well as the vehicles to

  8. Verification and the safeguards legacy

    International Nuclear Information System (INIS)

    Perricos, Demetrius

    2001-01-01

    A number of inspection and monitoring systems throughout the world over the last decades have been structured drawing upon the IAEA's experience of setting up and operating its safeguards system. The first global verification system was born with the creation of the IAEA safeguards system, about 35 years ago. With the conclusion of the NPT in 1968, inspections were to be performed under safeguards agreements concluded directly between the IAEA and non-nuclear weapon states parties to the Treaty. The IAEA developed the safeguards system within the limitations reflected in the Blue Book (INFCIRC 153), such as limitations on routine access by inspectors to 'strategic points', including 'key measurement points', and the focusing of verification on declared nuclear material in declared installations. The system, based as it was on nuclear material accountancy, was expected to detect a diversion of nuclear material with high probability and within a given time, and thereby to determine that there had been no diversion of nuclear material from peaceful purposes. The most vital element of any verification system is the inspector. Technology can assist but cannot replace the inspector in the field. Inspectors' experience, knowledge, intuition and initiative are invaluable factors contributing to the success of any inspection regime. The IAEA inspectors are, however, not part of an international police force that will intervene to prevent a violation from taking place. To be credible they should be technically qualified, with substantial experience in industry or in research and development before they are recruited. An extensive training programme must ensure that the inspectors retain their professional capabilities and acquire new skills. Over the years, the inspectors, and through them the safeguards verification system, gained experience in: organization and management of large teams; examination of records and evaluation of material balances
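    The material balance evaluation mentioned at the end of this record rests on the standard accountancy quantity MUF (Material Unaccounted For). The balance formula below is the conventional definition; the uncertainty threshold and example figures are purely illustrative:

```python
def muf(beginning_inventory, receipts, shipments, ending_inventory):
    """Material Unaccounted For, the central quantity in nuclear material
    accountancy: book inventory minus measured physical ending inventory."""
    return beginning_inventory + receipts - shipments - ending_inventory

def balance_alarms(muf_kg, sigma_muf_kg, k=3.0):
    """Flag a balance period whose MUF exceeds k times its combined
    measurement uncertainty (k = 3 is an illustrative choice)."""
    return abs(muf_kg) > k * sigma_muf_kg

m = muf(1200.0, 300.0, 250.0, 1248.0)  # kg of nuclear material
print(m, balance_alarms(m, sigma_muf_kg=1.0))  # 2.0 False
```

    A non-zero MUF is expected from measurement uncertainty alone; the evaluation problem is distinguishing that statistical residue from an actual diversion.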

  9. [An Investigation of the Role Responsibilities of Clinical Research Nurses in Conducting Clinical Trials].

    Science.gov (United States)

    Kao, Chi-Yin; Huang, Guey-Shiun; Dai, Yu-Tzu; Pai, Ya-Ying; Hu, Wen-Yu

    2015-06-01

    Clinical research nurses (CRNs) play an important role in improving the quality of clinical trials. In Taiwan, the increasing number of clinical trials has increased the number of practicing CRNs. Understanding the role responsibilities of CRNs is necessary to promote professionalism in this nursing category. This study investigates the role responsibilities of CRNs in conducting clinical trials / research. A questionnaire survey was conducted in a medical center in Taipei City, Taiwan. Eighty CRNs that were registered to facilitate and conduct clinical trials at this research site completed the survey. "Subject protection" was the CRN role responsibility most recognized by participants, followed by "research coordination and management", "subject clinical care", and "advanced professional nursing". Higher recognition scores were associated with higher importance scores and lower difficulty scores. Participants with trial training had significantly higher difficulty scores for "subject clinical care" and "research coordination and management" than their peers without this training (p research coordination and management" (p clinical practice.

  10. The UK clinical research network--has it been a success for dermatology clinical trials?

    Science.gov (United States)

    Thomas, Kim S; Koller, Karin; Foster, Katharine; Perdue, Jo; Charlesworth, Lisa; Chalmers, Joanne R

    2011-06-16

    Following the successful introduction of five topic-specific research networks in the UK, the Comprehensive Local Research Network (CLRN) was established in 2008 in order to provide a blanket level of support across the whole country regardless of the clinical discipline. The role of the CLRN was to facilitate recruitment into clinical trials, and to encourage greater engagement in research throughout the National Health Service (NHS). This report evaluates the impact of clinical research networks in supporting clinical trials in the UK, with particular reference to our experiences from two non-commercial dermatology trials. It covers our experience of engaging with the CLRN (and other research networks) using two non-commercial dermatology trials as case studies. We present the circumstances that led to our approach to the research networks for support, and the impact that this support had on the delivery of these trials. In both cases, recruitment was boosted considerably following the provision of additional support, although other factors such as the availability of experienced personnel, and the role of advertising and media coverage in promoting the trials were also important in translating this additional resource into increased recruitment. Recruitment into clinical trials is a complex task that can be influenced by many factors. A world-class clinical research infrastructure is now in place in England (with similar support available in Scotland and Wales), and it is the responsibility of the research community to ensure that this unique resource is used effectively and responsibly.

  11. Proposal for a verification facility of ADS in China

    International Nuclear Information System (INIS)

    Guan Xialing; Luo Zhanglin

    1999-01-01

    The concept, general layout and some specifications of a proposed verification facility of the accelerator driven radioactive clean nuclear power system (AD-RCNPS) in China are described. It is composed of a 150 MeV/3 mA low energy accelerator, a swimming pool reactor and some basic research facilities. The 150 MeV accelerator consists of an ECR proton source, LEBT, RFQ, CCDTL and SCC. As the sub-critical reactor, the swimming pool reactor is an existing research reactor at the China Institute of Atomic Energy, whose maximum output power is 3.5 MW. The effect of the instability of proton beam and possibility of simulation tests on the verification facility have been analysed. (author)

  12. Proposal for a verification facility of ADS in China

    International Nuclear Information System (INIS)

    Guan Xialing; Luo Zhanglin

    2000-01-01

The concept, the general layout and some specifications of a proposed verification facility of the accelerator driven radioactive clean nuclear power system (AD-RCNPS) in China are described. It is composed of a 150 MeV/3 mA low-energy accelerator, a swimming pool reactor and some basic research facilities. The 150 MeV accelerator consists of an ECR proton source, LEBT, RFQ, CCDTL and SCC. As the sub-critical reactor, the swimming pool reactor is an existing research reactor at the China Institute of Atomic Energy, whose maximum output power is 3.5 MW. The effect of the instability of the proton beam and the possibility of simulation tests on the verification facility have been analyzed

  13. Researchers', Regulators', and Sponsors' Views on Pediatric Clinical Trials: A Multinational Study.

    Science.gov (United States)

    Joseph, Pathma D; Craig, Jonathan C; Tong, Allison; Caldwell, Patrina H Y

    2016-10-01

    The last decade has seen dramatic changes in the regulatory landscape to support more trials involving children, but child-specific challenges and inequitable conduct across income regions persist. The goal of this study was to describe the attitudes and opinions of stakeholders toward trials in children, to inform additional strategies to promote more high-quality, relevant pediatric trials across the globe. Key informant semi-structured interviews were conducted with stakeholders (researchers, regulators, and sponsors) who were purposively sampled from low- to middle-income countries and high-income countries. The transcripts were thematically analyzed. Thirty-five stakeholders from 10 countries were interviewed. Five major themes were identified: addressing pervasive inequities (paucity of safety and efficacy data, knowledge disparities, volatile environment, double standards, contextual relevance, market-driven forces, industry sponsorship bias and prohibitive costs); contending with infrastructural barriers (resource constraints, dearth of pediatric trial expertise, and logistical complexities); navigating complex ethical and regulatory frameworks ("draconian" oversight, ambiguous requirements, exploitation, excessive paternalism and precariousness of coercion versus volunteerism); respecting uniqueness of children (pediatric research paradigms, child-appropriate approaches, and family-centered empowerment); and driving evidence-based child health (advocacy, opportunities, treatment access, best practices, and research prioritization). Stakeholders acknowledge that changes in the regulatory environment have encouraged more trials in children, but they contend that inequities and political, regulatory, and resource barriers continue to exist. Embedding trials as part of routine clinical care, addressing the unique needs of children, and streamlining regulatory approvals were suggested. Stakeholders recommended increasing international collaboration

  14. Verification of Software Components: Addressing Unbounded Paralelism

    Czech Academy of Sciences Publication Activity Database

    Adámek, Jiří

    2007-01-01

    Roč. 8, č. 2 (2007), s. 300-309 ISSN 1525-9293 R&D Projects: GA AV ČR 1ET400300504 Institutional research plan: CEZ:AV0Z10300504 Keywords : software components * formal verification * unbounded parallelism Subject RIV: JC - Computer Hardware ; Software

  15. Likelihood-ratio-based biometric verification

    NARCIS (Netherlands)

    Bazen, A.M.; Veldhuis, Raymond N.J.

    2002-01-01

    This paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that for single-user verification the likelihood ratio is optimal.
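As a toy illustration of the likelihood-ratio decision rule described in this record (a sketch, not code from the paper), the following accepts an identity claim when the log-likelihood ratio between the claimed user's feature model and a background model exceeds a threshold. The Gaussian per-feature models, independence assumption, and all numeric values are hypothetical.

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log-density of a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def log_likelihood_ratio(features, user_model, background_model):
    """Sum of per-feature log likelihood ratios, assuming independent features.

    Each model is a list of (mean, variance) pairs, one per feature.
    """
    llr = 0.0
    for x, (mu_u, var_u), (mu_b, var_b) in zip(features, user_model, background_model):
        llr += gaussian_logpdf(x, mu_u, var_u) - gaussian_logpdf(x, mu_b, var_b)
    return llr

def verify(features, user_model, background_model, threshold=0.0):
    """Accept the identity claim iff the log-likelihood ratio exceeds the threshold."""
    return log_likelihood_ratio(features, user_model, background_model) >= threshold

# Hypothetical models: the user's features cluster tightly around (1, 2);
# the background population is a broad zero-mean distribution.
user = [(1.0, 0.1), (2.0, 0.1)]
background = [(0.0, 1.0), (0.0, 1.0)]
print(verify([1.05, 1.95], user, background))  # genuine-looking claim
print(verify([-0.5, 0.3], user, background))   # impostor-looking claim
```

The threshold trades off false accepts against false rejects; the paper's result is that, for single-user verification, no other similarity measure can improve on this ratio.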

  16. Likelihood Ratio-Based Biometric Verification

    NARCIS (Netherlands)

    Bazen, A.M.; Veldhuis, Raymond N.J.

    The paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that, for single-user verification, the likelihood ratio is optimal.

  17. SU-F-T-440: The Feasibility Research of Checking Cervical Cancer IMRT Pre- Treatment Dose Verification by Automated Treatment Planning Verification System

    Energy Technology Data Exchange (ETDEWEB)

    Liu, X; Yin, Y; Lin, X [Shandong Cancer Hospital and Institute, China, Jinan, Shandong (China)

    2016-06-15

Purpose: To assess the preliminary feasibility of an automated treatment planning verification system for cervical cancer IMRT pre-treatment dose verification. Methods: The study randomly selected clinical IMRT treatment planning data for twenty patients with cervical cancer; all IMRT plans were divided into 7 fields to meet the dosimetric goals, using commercial treatment planning systems (Pinnacle version 9.2 and Eclipse version 13.5). The plans were exported to the Mobius 3D (M3D) server, and percentage differences in the volume of each region of interest (ROI) and in the calculated dose to the target region and organs at risk were evaluated, in order to validate the accuracy of the automated treatment planning verification system. Results: For the ROIs, the volume differences between Pinnacle and M3D were smaller than those between Eclipse and M3D; the largest differences were 0.22 ± 0.69% and 3.5 ± 1.89% for Pinnacle and Eclipse, respectively. M3D showed slightly better agreement in target and organ-at-risk dose compared with the TPS. After the plans were recalculated by M3D, the dose differences for Pinnacle were on average smaller than those for Eclipse; results were within 3%. Conclusion: Using the automated treatment planning verification system to validate plan accuracy is convenient, but more clinical cases are needed to establish the acceptable range of differences. At present, it should be used as a secondary check tool to improve safety in clinical treatment planning.

  18. Biased and inadequate citation of prior research in reports of cardiovascular trials is a continuing source of waste in research.

    Science.gov (United States)

    Sawin, Veronica I; Robinson, Karen A

    2016-01-01

    We assessed citation of prior research over time and the association of citation with the agreement of results between the trial being reported and the prior trial. Groups of pharmacologic trials in cardiovascular disease were created using meta-analyses, and we assessed citation within these groups. We calculated the proportion of prior trials cited, the proportion of study participants captured in citations, and agreement of results between citing and cited trials. Analysis included 86 meta-analyses with 580 trials published between 1982 and 2011. Reports of trials cited 25% (median; 95% confidence interval [CI], 23-27%) of prior trials, capturing 31% (95% CI, 25-36%) of trial participants. Neither measure differed by publication of the citing trial before vs. after 2005. Prior trials with results that agreed with the reports of trials (supportive trials) were significantly more likely to be cited than nonsupportive trials (relative risk 1.45; 95% CI, 1.30-1.61, P < 0.001). Selective undercitation of prior research continues; three quarters of existing evidence is ignored. This source of waste may result in unnecessary, unethical, and unscientific studies. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Maximising the impact of qualitative research in feasibility studies for randomised controlled trials: guidance for researchers

    NARCIS (Netherlands)

    O’Cathain, A.; Hoddinott, P.; Lewin, S.; Thomas, K.J.; Young, B.; Adamson, J.; Jansen, J.F.M.; Mills, N.; Moore, G.; Donovan, J.L.

    2015-01-01

    Feasibility studies are increasingly undertaken in preparation for randomised controlled trials in order to explore uncertainties and enable trialists to optimise the intervention or the conduct of the trial. Qualitative research can be used to examine and address key uncertainties prior to a full

  20. Shedding light on research participation effects in behaviour change trials: a qualitative study examining research participant experiences

    Directory of Open Access Journals (Sweden)

    Virginia MacNeill

    2016-01-01

Full Text Available Abstract Background The sequence of events in a behaviour change trial involves interactions between research participants and the trial process. Taking part in such a study has the potential to influence the behaviour of the participant, and if it does, this can engender bias in trial outcomes. Since participants' experience has received scant attention, the aim of this study is to generate hypotheses about which aspects of the conduct of behaviour change trials might matter most to participants, and thus have the potential to alter subsequent behaviours and bias trial outcomes. Methods Twenty participants were opportunistically screened for a health-compromising behaviour (unhealthy diet, lack of exercise, smoking or alcohol consumption) and recruited if eligible. Semi-structured face-to-face interviews were conducted after participants went through the usual processes involved in trial recruitment, baseline assessment and randomisation. Participants were given information on the contents of an intervention or control condition in a behaviour change trial, which was not actually implemented. Three months later they returned to reflect on these experiences and on whether they had had any effect on their behaviour during the intervening period. Data from the latter interview were analysed thematically using a modified grounded theory approach. Results The early processes of trial participation raised awareness of unhealthy behaviours, although most participants reported having had only fleeting intentions to change their behaviour as a result of taking part in this study, in the absence of interventions. However, careful examination of the accounts revealed evidence of subtle research participation effects, which varied according to the health behaviour and its perceived social acceptability. Participants' relationships with the research study were viewed as somewhat important in stimulating thinking about whether and how to make lifestyle changes. Conclusion These

  1. Developing a clinical trial unit to advance research in an academic institution.

    Science.gov (United States)

    Croghan, Ivana T; Viker, Steven D; Limper, Andrew H; Evans, Tamara K; Cornell, Alissa R; Ebbert, Jon O; Gertz, Morie A

    2015-11-01

    Research, clinical care, and education are the three cornerstones of academic health centers in the United States. The research climate has always been riddled with ebbs and flows, depending on funding availability. During a time of reduced funding, the number and scope of research studies have been reduced, and in some instances, a field of study has been eliminated. Recent reductions in the research funding landscape have led institutions to explore new ways to continue supporting research. Mayo Clinic in Rochester, MN has developed a clinical trial unit within the Department of Medicine, which provides shared resources for many researchers and serves as a solution for training and mentoring new investigators and study teams. By building on existing infrastructure and providing supplemental resources to existing research, the Department of Medicine clinical trial unit has evolved into an effective mechanism for conducting research. This article discusses the creation of a central unit to provide research support in clinical trials and presents the advantages, disadvantages, and required building blocks for such a unit. Copyright © 2015 Mayo Clinic. Published by Elsevier Inc. All rights reserved.

  2. The UK clinical research network - has it been a success for dermatology clinical trials?

    Directory of Open Access Journals (Sweden)

    Charlesworth Lisa

    2011-06-01

Full Text Available Abstract Background Following the successful introduction of five topic-specific research networks in the UK, the Comprehensive Local Research Network (CLRN) was established in 2008 in order to provide a blanket level of support across the whole country regardless of the clinical discipline. The role of the CLRN was to facilitate recruitment into clinical trials, and to encourage greater engagement in research throughout the National Health Service (NHS). Methods This report evaluates the impact of clinical research networks in supporting clinical trials in the UK, with particular reference to our experiences from two non-commercial dermatology trials. It covers our experience of engaging with the CLRN (and other research networks) using two non-commercial dermatology trials as case studies. We present the circumstances that led to our approach to the research networks for support, and the impact that this support had on the delivery of these trials. Results In both cases, recruitment was boosted considerably following the provision of additional support, although other factors such as the availability of experienced personnel, and the role of advertising and media coverage in promoting the trials, were also important in translating this additional resource into increased recruitment. Conclusions Recruitment into clinical trials is a complex task that can be influenced by many factors. A world-class clinical research infrastructure is now in place in England (with similar support available in Scotland and Wales), and it is the responsibility of the research community to ensure that this unique resource is used effectively and responsibly.

  3. Challenges in the Verification of Reinforcement Learning Algorithms

    Science.gov (United States)

    Van Wesel, Perry; Goodloe, Alwyn E.

    2017-01-01

    Machine learning (ML) is increasingly being applied to a wide array of domains from search engines to autonomous vehicles. These algorithms, however, are notoriously complex and hard to verify. This work looks at the assumptions underlying machine learning algorithms as well as some of the challenges in trying to verify ML algorithms. Furthermore, we focus on the specific challenges of verifying reinforcement learning algorithms. These are highlighted using a specific example. Ultimately, we do not offer a solution to the complex problem of ML verification, but point out possible approaches for verification and interesting research opportunities.

  4. Clinical Trial Design for HIV Prevention Research: Determining Standards of Prevention.

    Science.gov (United States)

    Dawson, Liza; Zwerski, Sheryl

    2015-06-01

This article seeks to advance ethical dialogue on choosing standards of prevention in clinical trials testing improved biomedical prevention methods for HIV. The stakes in this area of research are high, given the continued high rates of infection in many countries and the budget limitations that have constrained efforts to expand treatment for all who are currently HIV-infected. New prevention methods are still needed; at the same time, some existing prevention and treatment interventions have been proven effective but are not yet widely available in the countries where they are most urgently needed. The ethical tensions in this field of clinical research are well known and have been the subject of extensive debate. There is no single clinical trial design that can optimize all the ethically important goals and commitments involved in research. Several recent articles have described the current ethical difficulties in designing HIV prevention trials, especially in resource-limited settings; however, there is no consensus on how to handle clinical trial design decisions, and existing international ethical guidelines offer conflicting advice. This article acknowledges these deep ethical dilemmas and moves beyond a simple descriptive approach to advance an organized method for considering what clinical trial designs will be ethically acceptable for HIV prevention trials, balancing the relevant criteria and providing justification for specific design decisions. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  5. Safety Verification for Probabilistic Hybrid Systems

    Czech Academy of Sciences Publication Activity Database

    Zhang, J.; She, Z.; Ratschan, Stefan; Hermanns, H.; Hahn, E.M.

    2012-01-01

Roč. 18, č. 6 (2012), s. 572-587 ISSN 0947-3580 R&D Projects: GA MŠk OC10048; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : model checking * hybrid systems * formal verification Subject RIV: IN - Informatics, Computer Science Impact factor: 1.250, year: 2012

  6. The trials methodological research agenda: results from a priority setting exercise

    Science.gov (United States)

    2014-01-01

    Background Research into the methods used in the design, conduct, analysis, and reporting of clinical trials is essential to ensure that effective methods are available and that clinical decisions made using results from trials are based on the best available evidence, which is reliable and robust. Methods An on-line Delphi survey of 48 UK Clinical Research Collaboration registered Clinical Trials Units (CTUs) was undertaken. During round one, CTU Directors were asked to identify important topics that require methodological research. During round two, their opinion about the level of importance of each topic was recorded, and during round three, they were asked to review the group’s average opinion and revise their previous opinion if appropriate. Direct reminders were sent to maximise the number of responses at each round. Results are summarised using descriptive methods. Results Forty one (85%) CTU Directors responded to at least one round of the Delphi process: 25 (52%) responded in round one, 32 (67%) responded in round two, 24 (50%) responded in round three. There were only 12 (25%) who responded to all three rounds and 18 (38%) who responded to both rounds two and three. Consensus was achieved amongst CTU Directors that the top three priorities for trials methodological research were ‘Research into methods to boost recruitment in trials’ (considered the highest priority), ‘Methods to minimise attrition’ and ‘Choosing appropriate outcomes to measure’. Fifty other topics were included in the list of priorities and consensus was reached that two topics, ‘Radiotherapy study designs’ and ‘Low carbon trials’, were not priorities. Conclusions This priority setting exercise has identified the research topics felt to be most important to the key stakeholder group of Directors of UKCRC registered CTUs. The use of robust methodology to identify these priorities will help ensure that this work informs the trials methodological research agenda, with

  7. Digital pathology in nephrology clinical trials, research, and pathology practice.

    Science.gov (United States)

    Barisoni, Laura; Hodgin, Jeffrey B

    2017-11-01

    In this review, we will discuss (i) how the recent advancements in digital technology and computational engineering are currently applied to nephropathology in the setting of clinical research, trials, and practice; (ii) the benefits of the new digital environment; (iii) how recognizing its challenges provides opportunities for transformation; and (iv) nephropathology in the upcoming era of kidney precision and predictive medicine. Recent studies highlighted how new standardized protocols facilitate the harmonization of digital pathology database infrastructure and morphologic, morphometric, and computer-aided quantitative analyses. Digital pathology enables robust protocols for clinical trials and research, with the potential to identify previously underused or unrecognized clinically useful parameters. The integration of digital pathology with molecular signatures is leading the way to establishing clinically relevant morpho-omic taxonomies of renal diseases. The introduction of digital pathology in clinical research and trials, and the progressive implementation of the modern software ecosystem, opens opportunities for the development of new predictive diagnostic paradigms and computer-aided algorithms, transforming the practice of renal disease into a modern computational science.

  8. More ethical and more efficient clinical research: multiplex trial design.

    Science.gov (United States)

    Keus, Frederik; van der Horst, Iwan C C; Nijsten, Maarten W

    2014-08-14

Today's clinical research faces challenges such as a lack of clinical equipoise between treatment arms, reluctance to randomize for multiple treatments simultaneously, inability to address interactions, and increasingly restricted resources. Furthermore, many trials are biased by extensive exclusion criteria, relatively small sample sizes and less appropriate outcome measures. We propose a 'Multiplex' trial design that preserves clinical equipoise with a continuous and factorial trial design that will also result in more efficient use of resources. This multiplex design accommodates subtrials with an appropriate choice of treatment arms within each subtrial. Clinical equipoise should increase consent rates, while the factorial design is the best way to identify interactions. The multiplex design may evolve naturally from today's research limitations and challenges, while principal objections seem absent. However, this new design poses important infrastructural, organisational and psychological challenges that need in-depth consideration.

  9. Functional verification of dynamically reconfigurable FPGA-based systems

    CERN Document Server

    Gong, Lingkan

    2015-01-01

    This book analyzes the challenges in verifying Dynamically Reconfigurable Systems (DRS) with respect to the user design and the physical implementation of such systems. The authors describe the use of a simulation-only layer to emulate the behavior of target FPGAs and accurately model the characteristic features of reconfiguration. Readers are enabled with this simulation-only layer to maintain verification productivity by abstracting away the physical details of the FPGA fabric.  Two implementations of the simulation-only layer are included: Extended ReChannel is a SystemC library that can be used to check DRS designs at a high level; ReSim is a library to support RTL simulation of a DRS reconfiguring both its logic and state. Through a number of case studies, the authors demonstrate how their approach integrates seamlessly with existing, mainstream DRS design flows and with well-established verification methodologies such as top-down modeling and coverage-driven verification. Provides researchers with an i...

  10. SU-E-T-602: Patient-Specific Online Dose Verification Based On Transmission Detector Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Thoelking, J; Yuvaraj, S; Jens, F; Lohr, F; Wenz, F; Wertz, H; Wertz, H [University Medical Center Mannheim, University of Heidelberg, Mannheim, Baden-Wuerttemberg (Germany)

    2015-06-15

    verification. Funding Support, Disclosures, and Conflict of Interest: COIs: Frank Lohr: Elekta: research grant, travel grants, teaching honoraria IBA: research grant, travel grants, teaching honoraria, advisory board C-Rad: board honoraria, travel grants Frederik Wenz: Elekta: research grant, teaching honoraria, consultant, advisory board Zeiss: research grant, teaching honoraria, patent Hansjoerg Wertz: Elekta: research grant, teaching honoraria IBA: research grant.

  11. SU-E-T-602: Patient-Specific Online Dose Verification Based On Transmission Detector Measurements

    International Nuclear Information System (INIS)

    Thoelking, J; Yuvaraj, S; Jens, F; Lohr, F; Wenz, F; Wertz, H; Wertz, H

    2015-01-01

    verification. Funding Support, Disclosures, and Conflict of Interest: COIs: Frank Lohr: Elekta: research grant, travel grants, teaching honoraria IBA: research grant, travel grants, teaching honoraria, advisory board C-Rad: board honoraria, travel grants Frederik Wenz: Elekta: research grant, teaching honoraria, consultant, advisory board Zeiss: research grant, teaching honoraria, patent Hansjoerg Wertz: Elekta: research grant, teaching honoraria IBA: research grant

  12. Utterance Verification for Text-Dependent Speaker Recognition

    DEFF Research Database (Denmark)

    Kinnunen, Tomi; Sahidullah, Md; Kukanov, Ivan

    2016-01-01

    Text-dependent automatic speaker verification naturally calls for the simultaneous verification of speaker identity and spoken content. These two tasks can be achieved with automatic speaker verification (ASV) and utterance verification (UV) technologies. While both have been addressed previously...

  13. Verification and validation in computational fluid dynamics

    Science.gov (United States)

    Oberkampf, William L.; Trucano, Timothy G.

    2002-04-01

    Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized. The fundamental strategy of validation is to assess how accurately the computational results compare with the experimental data, with quantified error and uncertainty estimates for both. This strategy employs a hierarchical methodology that segregates and simplifies the physical and coupling phenomena involved in the complex engineering system of interest. A hypersonic cruise missile is used as an example of how this hierarchical structure is formulated. The discussion of validation assessment also encompasses a number of other important topics. 
A set of guidelines is proposed for designing and conducting validation experiments, supported by an explanation of how validation experiments are different
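One standard solution-verification calculation of the kind this review surveys is grid-convergence analysis via Richardson extrapolation: from solutions on systematically refined grids, estimate the observed order of accuracy and the value the scheme converges to as grid spacing goes to zero. The sketch below is illustrative only; the quantity and all numbers are hypothetical, not taken from the paper.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from solutions on three grids,
    each refined from the previous one by a constant factor r."""
    return math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, p, r):
    """Estimate the zero-grid-spacing solution from the two finest grids,
    given observed order p and refinement factor r."""
    return f_fine + (f_fine - f_medium) / (r ** p - 1)

# Hypothetical grid-converged values of some integrated quantity
# (e.g. a drag coefficient) from a nominally second-order scheme,
# on coarse, medium, and fine grids with refinement factor r = 2.
f3, f2, f1 = 0.9700, 0.9925, 0.99812
p = observed_order(f3, f2, f1, r=2)
f_exact_est = richardson_extrapolate(f2, f1, p, r=2)
print(round(p, 2), round(f_exact_est, 3))
```

An observed order close to the scheme's formal order is evidence that the code solves its equations correctly in the asymptotic range; a mismatch flags a coding or discretization error before any comparison with experiment is attempted.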

  14. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, ENVIRONMENTAL DECISION SUPPORT SOFTWARE, UNIVERSITY OF TENNESSEE RESEARCH CORPORATION, SPATIAL ANALYSIS AND DECISION ASSISTANCE (SADA)

    Science.gov (United States)

    The U.S. Environmental Protection Agency (EPA) has created the Environmental Technology Verification Program (ETV) to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The goal of the...

  15. Sensor-fusion-based biometric identity verification

    International Nuclear Information System (INIS)

    Carlson, J.J.; Bouchard, A.M.; Osbourn, G.C.; Martinez, R.F.; Bartholomew, J.W.; Jordan, J.B.; Flachs, G.M.; Bao, Z.; Zhu, L.

    1998-02-01

    Future generation automated human biometric identification and verification will require multiple features/sensors together with internal and external information sources to achieve high performance, accuracy, and reliability in uncontrolled environments. The primary objective of the proposed research is to develop a theoretical and practical basis for identifying and verifying people using standoff biometric features that can be obtained with minimal inconvenience during the verification process. The basic problem involves selecting sensors and discovering features that provide sufficient information to reliably verify a person's identity under the uncertainties caused by measurement errors and tactics of uncooperative subjects. A system was developed for discovering hand, face, ear, and voice features and fusing them to verify the identity of people. The system obtains its robustness and reliability by fusing many coarse and easily measured features into a near minimal probability of error decision algorithm

  16. A Practitioner's Perspective on Verification

    Science.gov (United States)

    Steenburgh, R. A.

    2017-12-01

    NOAA's Space Weather Prediction Center offers a wide range of products and services to meet the needs of an equally wide range of customers. A robust verification program is essential to the informed use of model guidance and other tools by forecasters and end users alike. In this talk, we present current SWPC practices and results, and examine emerging requirements and potential approaches to satisfy them. We explore the varying verification needs of forecasters and end users, as well as the role of subjective and objective verification. Finally, we describe a vehicle used in the meteorological community to unify approaches to model verification and facilitate intercomparison.

  17. Turning Failure into Success: Trials of the Heart Failure Clinical Research Network.

    Science.gov (United States)

    Joyce, Emer; Givertz, Michael M

    2016-12-01

    The Heart Failure Clinical Research Network (HFN) was established in 2008 on behalf of the NIH National Heart, Lung and Blood Institute, with the primary goal of improving outcomes in heart failure (HF) by designing and conducting high-quality concurrent clinical trials testing interventions across the spectrum of HF. Completed HFN trials have answered several important and relevant clinical questions concerning the safety and efficacy of different decongestive and adjunctive vasodilator therapies in hospitalized acute HF, phosphodiesterase-5 inhibition and nitrate therapies in HF with preserved ejection fraction, and the role of xanthine oxidase inhibition in hyperuricemic HF. These successes, independent of the "positive" or "negative" result of each individual trial, have helped to shape the current clinical care of HF patients and serve as a platform to inform future research directions and trial designs.

  18. Getting Ready for Ion-Beam Therapy Research in Austria - Building-up Research in Parallel with a Facility

    International Nuclear Information System (INIS)

    Georg, Dietmar; Knaeusl; Kuess, Peter; Fuchs, Hermann; Poetter, Richard; Schreiner, Thomas

    2015-01-01

    With participation in ion-beam projects funded nationally or by the European Commission (EC), ion-beam research activities were started at the Medical University of Vienna in parallel with the design and construction of the ion-beam center MedAustron in Wiener Neustadt, 50 km from the Austrian capital. The current medical radiation physics research activities that will be presented comprise: (1) Dose calculation and optimization: ion-beam centers focus mostly on proton and carbon-ion therapy. However, there are other ion species with great potential for clinical applications. Helium ions are currently under investigation from a theoretical physics and biology perspective. (2) Image guided and adaptive ion-beam therapy: organ motion and anatomic changes have a severe influence in ion-beam therapy since variations in heterogeneity along the beam path have a significant impact on the particle range. Ongoing research focuses on possibilities to account for temporal variations of the anatomy during radiotherapy, both during and between fractions, also considering temporal variations in tumor biology. Furthermore, research focuses on particle therapy positron emission tomography (PT-PET) verification and the detection of prompt gammas for on-line verification of ion-beam delivery. (3) Basic and applied dosimetry: an end-to-end procedure was designed and successfully tested in both scanned proton and carbon-ion beams, which may also serve as a dosimetric credentialing procedure for clinical trials in the future. (Author)

  19. Nuclear disarmament verification

    International Nuclear Information System (INIS)

    DeVolpi, A.

    1993-01-01

    Arms control treaties, unilateral actions, and cooperative activities -- reflecting the defusing of East-West tensions -- are causing nuclear weapons to be disarmed and dismantled worldwide. In order to provide for future reductions and to build confidence in the permanency of this disarmament, verification procedures and technologies would play an important role. This paper outlines arms-control objectives, treaty organization, and actions that could be undertaken. For the purposes of this Workshop on Verification, nuclear disarmament has been divided into five topical subareas: Converting nuclear-weapons production complexes, Eliminating and monitoring nuclear-weapons delivery systems, Disabling and destroying nuclear warheads, Demilitarizing or non-military utilization of special nuclear materials, and Inhibiting nuclear arms in non-nuclear-weapons states. This paper concludes with an overview of potential methods for verification

  20. Verification Account Management System (VAMS)

    Data.gov (United States)

    Social Security Administration — The Verification Account Management System (VAMS) is the centralized location for maintaining SSA's verification and data exchange accounts. VAMS account management...

  1. Reporting of clinical trials: a review of research funders' guidelines

    Directory of Open Access Journals (Sweden)

    Williamson Paula R

    2008-11-01

    Full Text Available Abstract Background Randomised controlled trials (RCTs) represent the gold standard methodological design to evaluate the effectiveness of an intervention in humans but they are subject to bias, including study publication bias and outcome reporting bias. National and international organisations and charities give recommendations for good research practice in relation to RCTs but to date no review of these guidelines has been undertaken with respect to reporting bias. Methods National and international organisations and UK based charities listed on the Association for Medical Research Charities website were contacted in 2007; they were considered eligible for this review if they funded RCTs. Guidelines were obtained and assessed in relation to what was written about trial registration, protocol adherence and trial publication. It was also noted whether any monitoring against these guidelines was undertaken. This information was necessary to discover how much guidance researchers are given on the publication of results, in order to prevent study publication bias and outcome reporting bias. Results Seventeen organisations and 56 charities were eligible of the 140 surveyed for this review, although there was no response from 12. Trial registration, protocol adherence, trial publication and monitoring against the guidelines were often explicitly discussed or implicitly referred to. However, only eleven of these organisations or charities mentioned the publication of negative as well as positive outcomes and just three of the organisations specifically stated that the statistical analysis plan should be strictly adhered to and all changes should be reported. Conclusion Our review indicates that there is a need to provide more detailed guidance for those conducting and reporting clinical trials to help prevent the selective reporting of results. Statements found in the guidelines generally refer to publication bias rather than outcome reporting bias

  2. Quantum money with classical verification

    Energy Technology Data Exchange (ETDEWEB)

    Gavinsky, Dmitry [NEC Laboratories America, Princeton, NJ (United States)

    2014-12-04

    We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries - this property is not directly related to the possibility of classical verification; nevertheless, none of the earlier quantum money constructions is known to possess it.

  3. Quantum money with classical verification

    International Nuclear Information System (INIS)

    Gavinsky, Dmitry

    2014-01-01

    We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries - this property is not directly related to the possibility of classical verification; nevertheless, none of the earlier quantum money constructions is known to possess it

  4. Standardized End Point Definitions for Coronary Intervention Trials: The Academic Research Consortium-2 Consensus Document.

    Science.gov (United States)

    Garcia-Garcia, Hector M; McFadden, Eugène P; Farb, Andrew; Mehran, Roxana; Stone, Gregg W; Spertus, John; Onuma, Yoshinobu; Morel, Marie-Angèle; van Es, Gerrit-Anne; Zuckerman, Bram; Fearon, William F; Taggart, David; Kappetein, Arie-Pieter; Krucoff, Mitchell W; Vranckx, Pascal; Windecker, Stephan; Cutlip, Donald; Serruys, Patrick W

    2018-06-14

    The Academic Research Consortium (ARC)-2 initiative revisited the clinical and angiographic end point definitions in coronary device trials, proposed in 2007, to make them more suitable for use in clinical trials that include increasingly complex lesion and patient populations and incorporate novel devices such as bioresorbable vascular scaffolds. In addition, recommendations for the incorporation of patient-related outcomes in clinical trials are proposed. Academic Research Consortium-2 is a collaborative effort between academic research organizations in the United States and Europe, device manufacturers, and European, US, and Asian regulatory bodies. Several in-person meetings were held to discuss the changes that have occurred in the device landscape and in clinical trials and regulatory pathways in the last decade. The consensus-based end point definitions in this document are endorsed by the stakeholders of this document and strongly advocated for clinical trial purposes. This Academic Research Consortium-2 document provides further standardization of end point definitions for coronary device trials, incorporating advances in technology and knowledge. Their use will aid interpretation of trial outcomes and comparison among studies, thus facilitating the evaluation of the safety and effectiveness of these devices.

  5. Particularities of Verification Processes for Distributed Informatics Applications

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2013-01-01

    Full Text Available This paper presents distributed informatics applications and characteristics of their development cycle. It defines the concept of verification and identifies the differences from software testing. Particularities of the software testing and software verification processes are described. The verification steps and necessary conditions are presented, and the factors that influence verification quality are established. Software optimality verification is analyzed, and some metrics are defined for the verification process.

  6. Development and Verification of Body Armor Target Geometry Created Using Computed Tomography Scans

    Science.gov (United States)

    2017-07-13

    Report by Autumn R Kulaga, Kathryn L Loftis, and Eric Murray, US Army Research Laboratory. Approved for public release; distribution is...

  7. Can emergency medicine research benefit from adaptive design clinical trials?

    Science.gov (United States)

    Flight, Laura; Julious, Steven A; Goodacre, Steve

    2017-04-01

    Adaptive design clinical trials use preplanned interim analyses to determine whether studies should be stopped or modified before recruitment is complete. Emergency medicine trials are well suited to these designs as many have a short time to primary outcome relative to the length of recruitment. We hypothesised that the majority of published emergency medicine trials have the potential to use a simple adaptive trial design. We reviewed clinical trials published in three emergency medicine journals between January 2003 and December 2013. We determined the proportion that used an adaptive design as well as the proportion that could have used a simple adaptive design based on the time to primary outcome and length of recruitment. Only 19 of 188 trials included in the review were considered to have used an adaptive trial design. A total of 154/165 trials that were fixed in design had the potential to use an adaptive design. Currently, there seems to be limited uptake in the use of adaptive trial designs in emergency medicine despite their potential benefits to save time and resources. Failing to take advantage of adaptive designs could be costly to patients and research. It is recommended that where practical and logistical considerations allow, adaptive designs should be used for all emergency medicine clinical trials. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
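
The preplanned interim analyses described above can be illustrated with a toy group-sequential design. A hedged sketch, assuming a two-arm trial with binary outcomes and a single interim look; the value 2.178 is the textbook Pocock critical value for two looks at overall alpha = 0.05, and all event rates and sample sizes are hypothetical, not drawn from any trial in the review:

```python
import math
import random

def z_two_proportions(s1, n1, s2, n2):
    # Two-sample z statistic for a difference in proportions (pooled SE).
    p1, p2 = s1 / n1, s2 / n2
    p = (s1 + s2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se if se > 0 else 0.0

def run_trial(p_control, p_treat, n_per_arm, interim_frac=0.5, boundary=2.178):
    # Simulate one two-arm trial with a single preplanned interim analysis.
    # boundary=2.178 is the Pocock critical value for 2 looks at overall
    # alpha = 0.05 (illustrative; real designs use validated software).
    rng = random.Random(42)
    n_interim = int(n_per_arm * interim_frac)
    control = [rng.random() < p_control for _ in range(n_per_arm)]
    treat = [rng.random() < p_treat for _ in range(n_per_arm)]

    z = z_two_proportions(sum(treat[:n_interim]), n_interim,
                          sum(control[:n_interim]), n_interim)
    if abs(z) > boundary:
        return "stopped early for efficacy", n_interim * 2
    z = z_two_proportions(sum(treat), n_per_arm, sum(control), n_per_arm)
    verdict = "efficacy at final analysis" if abs(z) > boundary else "no effect shown"
    return verdict, n_per_arm * 2

print(run_trial(p_control=0.30, p_treat=0.50, n_per_arm=200))
```

Stopping at the interim look halves the enrolled sample, which is the saving in time and resources that makes these designs attractive when the time to primary outcome is short relative to the recruitment period.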

  8. Supporting Policy In health with Research: an Intervention Trial (SPIRIT)—protocol for a stepped wedge trial

    Science.gov (United States)

    2014-01-01

    Introduction Governments in different countries have committed to better use of evidence from research in policy. Although many programmes are directed at assisting agencies to better use research, there have been few tests of the effectiveness of such programmes. This paper describes the protocol for SPIRIT (Supporting Policy In health with Research: an Intervention Trial), a trial designed to test the effectiveness of a multifaceted programme to build organisational capacity for the use of research evidence in policy and programme development. The primary aim is to determine whether SPIRIT results in an increase in the extent to which research and research expertise is sought, appraised, generated and used in the development of specific policy products produced by health policy agencies. Methods and analysis A stepped wedge cluster randomised trial involving six health policy agencies located in Sydney, Australia. Policy agencies are the unit of randomisation and intervention. Agencies were randomly allocated to one of three start dates (steps) to receive the 1-year intervention programme, underpinned by an action framework. The SPIRIT intervention is tailored to suit the interests and needs of each agency and includes audit, feedback and goal setting; a leadership programme; staff training; the opportunity to test systems to assist in the use of research in policies; and exchange with researchers. Outcome measures will be collected at each agency every 6 months for 30 months (starting at the beginning of step 1). Ethics and dissemination Ethics approval was granted by the University of Western Sydney Human Research and Ethics Committee HREC Approval H8855. The findings of this study will be disseminated broadly through peer-reviewed publications and presentations at conferences and used to inform future strategies. PMID:24989620
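
The randomisation described in the protocol can be sketched in a few lines. A toy illustration assuming balanced allocation of two agencies per step; the agency labels are placeholders, not the actual SPIRIT participants:

```python
import random

# Placeholder labels for the six policy agencies (their identities are
# not given in the protocol excerpt above).
agencies = ["agency-1", "agency-2", "agency-3",
            "agency-4", "agency-5", "agency-6"]

rng = random.Random(2014)  # fixed seed so the sketch is reproducible
rng.shuffle(agencies)

# Stepped wedge: every cluster eventually crosses from control to
# intervention, but at one of three randomly assigned start dates (steps).
schedule = {f"step {i + 1}": agencies[2 * i: 2 * i + 2] for i in range(3)}
print(schedule)
```

Because earlier steps contribute intervention-period data while later steps are still in their control period, every agency serves as both control and intervention cluster over the 30 months of outcome collection.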

  9. IDEF method for designing seismic information system in CTBT verification

    International Nuclear Information System (INIS)

    Zheng Xuefeng; Shen Junyi; Jin Ping; Zhang Huimin; Zheng Jiangling; Sun Peng

    2004-01-01

    Seismic information system is of great importance for improving the capability of CTBT verification. A large amount of money has been appropriated for the research in this field in the U.S. and some other countries in recent years. However, designing and developing a seismic information system involves various technologies of complex system design. This paper discusses the IDEF0 method to construct function models and the IDEF1x method to make information models systematically, as well as how they are used in designing seismic information system in CTBT verification. (authors)

  10. [Informed consent process in clinical trials: Insights of researchers, patients and general practitioners].

    Science.gov (United States)

    Giménez, Nuria; Pedrazas, David; Redondo, Susana; Quintana, Salvador

    2016-10-01

    Adequate information for patients and respect for their autonomy are mandatory in research. This article examined insights of researchers, patients and general practitioners (GPs) on the informed consent process in clinical trials, and the role of the GP. A cross-sectional study using three questionnaires, informed consent reviews, medical records, and hospital discharge reports. GPs, researchers and patients involved in clinical trials. Included, 504 GPs, 108 researchers, and 71 patients. Consulting the GP was recommended in 50% of the informed consents. Participation in clinical trials was shown in 33% of the medical records and 3% of the hospital discharge reports. GPs scored 3.54 points (on a 1-10 scale) on the assessment of the information received from the principal investigator. The readability of the informed consent sheet was rated 8.03 points by researchers, and the understanding was rated 7.68 points by patients. Patient satisfaction was positively associated with more time for reflection. GPs were not satisfied with the information received on the participation of patients under their care in clinical trials. Researchers were satisfied with the information they offered to patients, and were aware of the need to improve the information GPs received. Patients contributed greatly to biomedical research, expressed satisfaction with the overall process, and minimised the difficulties associated with participation. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.

  11. Remote source document verification in two national clinical trials networks: a pilot study.

    Directory of Open Access Journals (Sweden)

    Meredith Mealer

    Full Text Available OBJECTIVE: Barriers to executing large-scale randomized controlled trials include costs, complexity, and regulatory requirements. We hypothesized that source document verification (SDV) via remote electronic monitoring is feasible. METHODS: Five hospitals from two NIH sponsored networks provided remote electronic access to study monitors. We evaluated pre-visit remote SDV compared to traditional on-site SDV using a randomized convenience sample of all study subjects due for a monitoring visit. The number of data values verified and the time to perform remote and on-site SDV was collected. RESULTS: Thirty-two study subjects were randomized to either remote SDV (N=16) or traditional on-site SDV (N=16). Technical capabilities, remote access policies and regulatory requirements varied widely across sites. In the adult network, only 14 of 2965 data values (0.47%) could not be located remotely. In the traditional on-site SDV arm, 3 of 2608 data values (0.12%) required coordinator help. In the pediatric network, all 198 data values in the remote SDV arm and all 183 data values in the on-site SDV arm were located. Although not statistically significant, there was a consistent trend for more time consumed per data value (minutes +/- SD: Adult 0.50 +/- 0.17 min vs. 0.39 +/- 0.10 min, two-tailed t-test p=0.11; Pediatric 0.99 +/- 1.07 min vs. 0.56 +/- 0.61 min, p=0.37) and time per case report form (Adult: 4.60 +/- 1.42 min vs. 3.60 +/- 0.96 min, p=0.10; Pediatric: 11.64 +/- 7.54 min vs. 6.07 +/- 3.18 min, p=0.10) using remote SDV. CONCLUSIONS: Because each site had different policies, requirements, and technologies, a common approach to assimilating monitors into the access management system could not be implemented. Despite substantial technology differences, more than 99% of data values were successfully monitored remotely. This pilot study demonstrates the feasibility of remote monitoring and the need to develop consistent access policies for remote study

  12. Measures of outcome for stimulant trials: ACTTION recommendations and research agenda.

    Science.gov (United States)

    Kiluk, Brian D; Carroll, Kathleen M; Duhig, Amy; Falk, Daniel E; Kampman, Kyle; Lai, Shengan; Litten, Raye Z; McCann, David J; Montoya, Ivan D; Preston, Kenzie L; Skolnick, Phil; Weisner, Constance; Woody, George; Chandler, Redonna; Detke, Michael J; Dunn, Kelly; Dworkin, Robert H; Fertig, Joanne; Gewandter, Jennifer; Moeller, F Gerard; Ramey, Tatiana; Ryan, Megan; Silverman, Kenneth; Strain, Eric C

    2016-01-01

    The development and approval of an efficacious pharmacotherapy for stimulant use disorders has been limited by the lack of a meaningful indicator of treatment success, other than sustained abstinence. In March 2015, a meeting sponsored by Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) was convened to discuss the current state of the evidence regarding meaningful outcome measures in clinical trials for stimulant use disorders. Attendees included members of academia, funding and regulatory agencies, pharmaceutical companies, and healthcare organizations. The goal was to establish a research agenda for the development of a meaningful outcome measure that may be used as an endpoint in clinical trials for stimulant use disorders. Based on guidelines for the selection of clinical trial endpoints, the lessons learned from prior addiction clinical trials, and the process that led to identification of a meaningful indicator of treatment success for alcohol use disorders, several recommendations for future research were generated. These include a focus on the validation of patient reported outcome measures of functioning, the exploration of patterns of stimulant abstinence that may be associated with physical and/or psychosocial benefits, the role of urine testing for validating self-reported measures of stimulant abstinence, and the operational definitions for reduction-based measures in terms of frequency rather than quantity of stimulant use. These recommendations may be useful for secondary analyses of clinical trial data, and in the design of future clinical trials that may help establish a meaningful indicator of treatment success. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Verification of industrial x-ray machine: MINT's experience

    International Nuclear Information System (INIS)

    Aziz Amat; Saidi Rajab; Eesan Pasupathi; Saipo Bahari Abdul Ratan; Shaharudin Sayuti; Abd Nassir Ibrahim; Abd Razak Hamzah

    2005-01-01

    Radiation and electrical safety of industrial x-ray equipment is required to meet Atomic Energy Licensing Board (AELB) guidelines (LEM/TEK/42) at the time of installation, and periodic verification should subsequently be ensured. The purpose of the guide is to explain the requirements employed in conducting tests on industrial x-ray apparatus so that it can be certified as meeting local legislation and regulations. Verification is aimed at providing safety assurance information on electrical requirements and on minimising radiation exposure to the operator. This regulation applies to new models imported into the Malaysian market. Since June 1997, the Malaysian Institute for Nuclear Technology Research (MINT) has been approved by AELB to provide verification services to private companies, government and corporate bodies throughout Malaysia. In early January 1997, AELB made it mandatory that all x-ray equipment for industrial purposes (especially industrial radiography) must fulfill certain performance tests based on the LEM/TEK/42 guidelines. MINT, as the third-party verifier, encourages users to improve maintenance of the equipment. MINT's experience in measuring the performance of intermittent and continuous duty rating single-phase industrial x-ray machines in 2004 indicated that all irradiating apparatus tested passed the test and met the requirements of the guideline. From MINT records for 1997 to 2005, three x-ray models did not meet the requirements and thus were not allowed to be used unless the manufacturers were willing to modify them to meet AELB requirements. These verification procedures on the electrical and radiation safety of industrial x-rays have significantly improved maintenance culture and safety awareness in the usage of x-ray apparatus in the industrial environment. (Author)

  14. Verification of Space Weather Forecasts using Terrestrial Weather Approaches

    Science.gov (United States)

    Henley, E.; Murray, S.; Pope, E.; Stephenson, D.; Sharpe, M.; Bingham, S.; Jackson, D.

    2015-12-01

    The Met Office Space Weather Operations Centre (MOSWOC) provides a range of 24/7 operational space weather forecasts, alerts, and warnings, which provide valuable information on space weather that can degrade electricity grids, radio communications, and satellite electronics. Forecasts issued include arrival times of coronal mass ejections (CMEs), and probabilistic forecasts for flares, geomagnetic storm indices, and energetic particle fluxes and fluences. These forecasts are produced twice daily using a combination of output from models such as Enlil, near-real-time observations, and forecaster experience. Verification of forecasts is crucial for users, researchers, and forecasters to understand the strengths and limitations of forecasters, and to assess forecaster added value. To this end, the Met Office (in collaboration with Exeter University) has been adapting verification techniques from terrestrial weather, and has been working closely with the International Space Environment Service (ISES) to standardise verification procedures. We will present the results of part of this work, analysing forecast and observed CME arrival times, assessing skill using 2x2 contingency tables. These MOSWOC forecasts can be objectively compared to those produced by the NASA Community Coordinated Modelling Center - a useful benchmark. This approach cannot be taken for the other forecasts, as they are probabilistic and categorical (e.g., geomagnetic storm forecasts give probabilities of exceeding levels from minor to extreme). We will present appropriate verification techniques being developed to address these forecasts, such as rank probability skill score, and comparing forecasts against climatology and persistence benchmarks. As part of this, we will outline the use of discrete time Markov chains to assess and improve the performance of our geomagnetic storm forecasts. 
We will also discuss work to adapt a terrestrial verification visualisation system to space weather, to help
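
The 2x2 contingency-table assessment mentioned above reduces to a handful of standard verification scores. A minimal sketch with hypothetical counts for illustration (not MOSWOC results):

```python
def contingency_scores(hits, false_alarms, misses, correct_negatives):
    # Standard 2x2 contingency-table verification scores used for
    # yes/no event forecasts such as CME arrival within a time window.
    n = hits + false_alarms + misses + correct_negatives
    pod = hits / (hits + misses)                # probability of detection
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    # Heidke skill score: forecast accuracy relative to random chance.
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses)
                * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return pod, far, hss

# Hypothetical forecast/observation counts.
pod, far, hss = contingency_scores(hits=30, false_alarms=10,
                                   misses=5, correct_negatives=55)
print(f"POD={pod:.2f}, FAR={far:.2f}, HSS={hss:.2f}")
```

A perfect forecast set gives POD = 1, FAR = 0, and HSS = 1; HSS = 0 means no skill beyond chance, which is why scores of this kind are compared against climatology and persistence benchmarks.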

  15. Java bytecode verification via static single assignment form

    DEFF Research Database (Denmark)

    Gal, Andreas; Probst, Christian W.; Franz, Michael

    2008-01-01

    Java Virtual Machines (JVMs) traditionally perform bytecode verification by way of an iterative data-flow analysis. Bytecode verification is necessary to ensure type safety because temporary variables in the JVM are not statically typed. We present an alternative verification mechanism...

  16. Research on verification and validation strategy of detonation fluid dynamics code of LAD2D

    Science.gov (United States)

    Wang, R. L.; Liang, X.; Liu, X. Z.

    2017-07-01

    Verification and validation (V&V) is an important approach to software quality assurance for codes in complex engineering applications. A reasonable and efficient V&V strategy can achieve twice the result with half the effort. This article introduces LAD2D, a self-developed Lagrangian adaptive hydrodynamics code in 2D space for detonation CFD with elastic-plastic structures. The V&V strategy of this detonation CFD code is presented based on the foundation of V&V methodology for scientific software. The basic framework of module verification and function validation is proposed, together composing the V&V strategy for the detonation fluid dynamics model of LAD2D.

  17. Formal verification of algorithms for critical systems

    Science.gov (United States)

    Rushby, John M.; Von Henke, Friedrich

    1993-01-01

    We describe our experience with formal, machine-checked verification of algorithms for critical applications, concentrating on a Byzantine fault-tolerant algorithm for synchronizing the clocks in the replicated computers of a digital flight control system. First, we explain the problems encountered in unsynchronized systems and the necessity, and criticality, of fault-tolerant synchronization. We give an overview of one such algorithm, and of the arguments for its correctness. Next, we describe a verification of the algorithm that we performed using our EHDM system for formal specification and verification. We indicate the errors we found in the published analysis of the algorithm, and other benefits that we derived from the verification. Based on our experience, we derive some key requirements for a formal specification and verification system adequate to the task of verifying algorithms of the type considered. Finally, we summarize our conclusions regarding the benefits of formal verification in this domain, and the capabilities required of verification systems in order to realize those benefits.
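
The synchronization algorithms verified in this line of work are from the interactive-convergence family: each good clock averages the readings of all clocks, substituting its own value for any reading that differs by more than a bound delta, so a bounded number of Byzantine clocks cannot pull the good clocks apart. A simplified single-round sketch (the numeric offsets, delta, and the assumption that a faulty clock reports the same value to everyone are all illustrative):

```python
def ica_round(clocks, delta, faulty=None):
    # One resynchronization round in the style of the Interactive
    # Convergence Algorithm: each clock averages the readings of all
    # clocks, replacing any reading farther than delta from its own
    # value with its own value (the "egocentric mean").
    faulty = faulty or {}
    adjusted = []
    for own in clocks:
        readings = []
        for j, other in enumerate(clocks):
            r = faulty.get(j, other)  # a Byzantine clock may report garbage
            readings.append(r if abs(r - own) <= delta else own)
        adjusted.append(sum(readings) / len(readings))
    return adjusted

# Four clocks with skews in seconds; clock 3 is Byzantine and reports 50.0.
clocks = [0.0, 0.4, -0.3, 0.1]
resync = ica_round(clocks, delta=1.0, faulty={3: 50.0})
good = resync[:3]
print(f"spread of good clocks: before {0.4 - (-0.3):.2f}s, "
      f"after {max(good) - min(good):.2f}s")
```

The point of the formal verification is precisely that informal arguments about such averaging rounds are error-prone: the published analysis contained errors that machine-checked proof uncovered.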

  18. Challenges for effective WMD verification

    International Nuclear Information System (INIS)

    Andemicael, B.

    2006-01-01

    Effective verification is crucial to the fulfillment of the objectives of any disarmament treaty, not least as regards the proliferation of weapons of mass destruction (WMD). The effectiveness of the verification package depends on a number of factors, some inherent in the agreed structure and others related to the type of responses demanded by emerging challenges. The verification systems of three global agencies-the IAEA, the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO, currently the Preparatory Commission), and the Organization for the Prohibition of Chemical Weapons (OPCW)-share similarities in their broad objectives of confidence-building and deterrence by assuring members that rigorous verification would deter or otherwise detect non-compliance. Yet they are up against various constraints and other issues, both internal and external to the treaty regime. These constraints pose major challenges to the effectiveness and reliability of the verification operations. In the nuclear field, the IAEA safeguards process was the first to evolve incrementally from modest Statute beginnings to a robust verification system under the global Treaty on the Non-Proliferation of Nuclear Weapons (NPT). The nuclear non-proliferation regime is now being supplemented by a technology-intensive verification system of the nuclear test-ban treaty (CTBT), a product of over three decades of negotiation. However, there still remain fundamental gaps and loopholes in the regime as a whole, which tend to diminish the combined effectiveness of the IAEA and the CTBT verification capabilities. The three major problems are (a) the lack of universality of membership, essentially because of the absence of three nuclear weapon-capable States-India, Pakistan and Israel-from both the NPT and the CTBT, (b) the changes in US disarmament policy, especially in the nuclear field, and (c) the failure of the Conference on Disarmament to conclude a fissile material cut-off treaty.
The world is

  19. Non-commercial vs. commercial clinical trials: a retrospective study of the applications submitted to a research ethics committee.

    Science.gov (United States)

    Fuentes Camps, Inmaculada; Rodríguez, Alexis; Agustí, Antonia

    2018-02-15

    There are many difficulties in undertaking independent clinical research without support from the pharmaceutical industry. In this retrospective observational study, some design characteristics, the clinical trial public register and the publication rate of noncommercial clinical trials were compared to those of commercial clinical trials. A total of 809 applications for drug-evaluation clinical trials were submitted from May 2004 to May 2009 to the research ethics committee of a tertiary hospital, and 16.3% of trials were noncommercial. These were mainly phase IV, multicentre national, and unmasked controlled trials, whereas the commercial trials were mainly phase II or III, multicentre international, and double-blind masked trials. The commercial trials were registered and published more often than noncommercial trials. More funding for noncommercial research is still needed. The results of the research, commercial or noncommercial, should be disseminated in order not to compromise either its scientific or its social value. © 2018 The British Pharmacological Society.

  20. A Syntactic-Semantic Approach to Incremental Verification

    OpenAIRE

    Bianculli, Domenico; Filieri, Antonio; Ghezzi, Carlo; Mandrioli, Dino

    2013-01-01

    Software verification of evolving systems is challenging mainstream methodologies and tools. Formal verification techniques often conflict with the time constraints imposed by change management practices for evolving systems. Since changes in these systems are often local to restricted parts, an incremental verification approach could be beneficial. This paper introduces SiDECAR, a general framework for the definition of verification procedures, which are made incremental by the framework...

  1. How many research nurses for how many clinical trials in an oncology setting? Definition of the Nursing Time Required by Clinical Trial-Assessment Tool (NTRCT-AT).

    Science.gov (United States)

    Milani, Alessandra; Mazzocco, Ketti; Stucchi, Sara; Magon, Giorgio; Pravettoni, Gabriella; Passoni, Claudia; Ciccarelli, Chiara; Tonali, Alessandra; Profeta, Teresa; Saiani, Luisa

    2017-02-01

    Few resources are available to quantify clinical trial-associated workload, needed to guide staffing and budgetary planning. The aim of the study is to describe a tool to measure clinical trials nurses' workload expressed in time spent to complete core activities. Clinical trials nurses drew up a list of nursing core activities, integrating results from literature searches with personal experience. The final 30 core activities were timed for each research nurse by an outside observer during daily practice in May and June 2014. Average times spent by nurses for each activity were calculated. The "Nursing Time Required by Clinical Trial-Assessment Tool" was created as an electronic sheet that combines the average times per specified activities and mathematic functions to return the total estimated time required by a research nurse for each specific trial. The tool was tested retrospectively on 141 clinical trials. The increasing complexity of clinical research requires structured approaches to determine workforce requirements. This study provides a tool to describe the activities of a clinical trials nurse and to estimate the associated time required to deliver individual trials. The application of the proposed tool in clinical research practice could provide a consistent structure for clinical trials nursing workload estimation internationally. © 2016 John Wiley & Sons Australia, Ltd.
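    The tool's core computation is straightforward: multiply the average observed time for each activity by how often that activity occurs in a given trial, then sum. A minimal sketch of that logic, with hypothetical activity names and illustrative average times (the published tool covers 30 empirically timed activities):

    ```python
    # Illustrative sketch of the NTRCT-AT calculation: per-activity average
    # times (minutes, made-up values) multiplied by how often each activity
    # occurs in a trial, summed into a total nursing-time estimate.

    AVG_MINUTES = {                 # average observed time per activity
        "informed_consent": 35.0,
        "drug_administration": 20.0,
        "adverse_event_report": 25.0,
        "data_entry": 15.0,
    }

    def estimated_nursing_time(activity_counts):
        """Return the total estimated nursing minutes for one trial."""
        return sum(AVG_MINUTES[a] * n for a, n in activity_counts.items())

    trial = {"informed_consent": 2, "drug_administration": 10, "data_entry": 12}
    total = estimated_nursing_time(trial)   # 2*35 + 10*20 + 12*15 = 450 minutes
    ```

    Summing such estimates across all active trials gives the workload figure used for staffing and budget planning.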

  2. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
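    The manufactured-solution approach to code verification mentioned above can be illustrated with a small sketch: choose an exact solution, apply the discrete operator, and confirm that the observed order of accuracy matches theory. Here a three-point central difference for u''(x) is checked against the manufactured solution u(x) = sin(x); the setup is illustrative, not tied to any particular code discussed in the abstract.

    ```python
    import math

    # Code-verification sketch via a manufactured solution: the 3-point
    # central-difference stencil for u''(x) should converge at second order.
    # With u(x) = sin(x), the exact second derivative is -sin(x).

    def max_error(n):
        """Max stencil error for u'' on (0, pi) with n uniform intervals."""
        h = math.pi / n
        err = 0.0
        for i in range(1, n):
            x = i * h
            approx = (math.sin(x - h) - 2 * math.sin(x) + math.sin(x + h)) / h**2
            err = max(err, abs(approx - (-math.sin(x))))
        return err

    e_coarse, e_fine = max_error(32), max_error(64)
    order = math.log(e_coarse / e_fine, 2)   # observed order; ~2 if the code is correct
    ```

    A coding bug in the stencil would show up as an observed order below the theoretical one, which is exactly the kind of evidence a code verification benchmark is built to produce.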

  3. Verification of Ceramic Structures

    Science.gov (United States)

    Behar-Lafenetre, Stephanie; Cornillon, Laurence; Rancurel, Michael; De Graaf, Dennis; Hartmann, Peter; Coe, Graham; Laine, Benoit

    2012-07-01

    In the framework of the “Mechanical Design and Verification Methodologies for Ceramic Structures” contract [1] awarded by ESA, Thales Alenia Space has investigated literature and practices in affiliated industries to propose a methodological guideline for verification of ceramic spacecraft and instrument structures. It has been written in order to be applicable to most types of ceramic or glass-ceramic materials - typically Cesic®, HBCesic®, Silicon Nitride, Silicon Carbide and ZERODUR®. The proposed guideline describes the activities to be performed at material level in order to cover all the specific aspects of ceramics (Weibull distribution, brittle behaviour, sub-critical crack growth). Elementary tests and their post-processing methods are described, and recommendations for optimization of the test plan are given in order to have a consistent database. The application of this method is shown on an example in a dedicated article [7]. Then the verification activities to be performed at system level are described. This includes classical verification activities based on relevant standard (ECSS Verification [4]), plus specific analytical, testing and inspection features. The analysis methodology takes into account the specific behaviour of ceramic materials, especially the statistical distribution of failures (Weibull) and the method to transfer it from elementary data to a full-scale structure. The demonstration of the efficiency of this method is described in a dedicated article [8]. The verification is completed by classical full-scale testing activities. Indications about proof testing, case of use and implementation are given and specific inspection and protection measures are described. These additional activities are necessary to ensure the required reliability. The aim of the guideline is to describe how to reach the same reliability level as for structures made of more classical materials (metals, composites).
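    The statistical transfer from elementary coupon data to a full-scale structure rests on weakest-link Weibull scaling: at a given stress, the failure probability grows with the stressed volume. A hedged sketch under assumed parameter values (Weibull modulus m, characteristic strength sigma0, and reference volume V0 are all illustrative, not data from the guideline):

    ```python
    import math

    # Weakest-link Weibull scaling for a brittle ceramic: failure probability
    # at uniform stress sigma depends on stressed volume V relative to the
    # reference volume V0 of the test coupons. Parameter values are assumed.

    def failure_probability(sigma, m=10.0, sigma0=300.0, V=1.0, V0=1.0):
        """P_f = 1 - exp(-(V/V0) * (sigma/sigma0)**m), sigma in MPa."""
        return 1.0 - math.exp(-(V / V0) * (sigma / sigma0) ** m)

    # A part with 10x the stressed volume fails more often at the same stress:
    p_coupon = failure_probability(250.0)
    p_part   = failure_probability(250.0, V=10.0)
    ```

    This volume effect is why elementary test data cannot be applied to a full-scale structure without the statistical transfer step the guideline describes.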

  4. Design and Verification of Critical Pressurised Windows for Manned Spaceflight

    Science.gov (United States)

    Lamoure, Richard; Busto, Lara; Novo, Francisco; Sinnema, Gerben; Leal, Mendes M.

    2014-06-01

    The Window Design for Manned Spaceflight (WDMS) project was tasked with establishing the state of the art and exploring possible improvements to the current structural integrity verification and fracture control methodologies for manned spacecraft windows. A critical review of the state of the art in spacecraft window design, materials and verification practice was conducted. Shortcomings of the methodology in terms of analysis, inspection and testing were identified. Schemes for improving verification practices and reducing conservatism whilst maintaining the required safety levels were then proposed. An experimental materials characterisation programme was defined and carried out with the support of the 'Glass and Façade Technology Research Group' at the University of Cambridge. Results of the sample testing campaign were analysed, post-processed and subsequently applied to the design of a breadboard window demonstrator. Two fused silica glass window panes were procured and subjected to dedicated analyses, inspection and testing comprising both qualification and acceptance programmes specifically tailored to the objectives of the activity. Finally, the main outcomes have been compiled into a Structural Verification Guide for Pressurised Windows in manned spacecraft, incorporating best practices and lessons learned throughout this project.

  5. Is flow verification necessary

    International Nuclear Information System (INIS)

    Beetle, T.M.

    1986-01-01

    Safeguards test statistics are used in an attempt to detect diversion of special nuclear material. Under assumptions concerning possible manipulation (falsification) of safeguards accounting data, the effects on the statistics due to diversion and data manipulation are described algebraically. A comprehensive set of statistics that is capable of detecting any diversion of material is defined in terms of the algebraic properties of the effects. When the assumptions exclude collusion between persons in two material balance areas, then three sets of accounting statistics are shown to be comprehensive. Two of the sets contain widely known accountancy statistics. One of them does not require physical flow verification - comparisons of operator and inspector data for receipts and shipments. The third set contains a single statistic which does not require physical flow verification. In addition to not requiring technically difficult and expensive flow verification, this single statistic has several advantages over other comprehensive sets of statistics. This algebraic approach as an alternative to flow verification for safeguards accountancy is discussed in this paper
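    The accountancy statistics discussed here are built from declared inventory and transfer quantities; the most familiar is the material balance, commonly called MUF (material unaccounted for). An illustrative sketch with hypothetical declarations (the specific comprehensive statistics of the paper are not reproduced here):

    ```python
    # Illustrative material-balance statistic of the kind safeguards
    # accountancy tests are built from:
    #   MUF = (beginning inventory + receipts) - (shipments + ending inventory)
    # All values are hypothetical operator declarations, in kilograms.

    def muf(begin_inv, receipts, shipments, end_inv):
        """Material unaccounted for over one balance period."""
        return (begin_inv + receipts) - (shipments + end_inv)

    balance = muf(begin_inv=120.0, receipts=40.0, shipments=35.0, end_inv=124.5)
    # A nonzero balance (here 0.5 kg) is compared against measurement
    # uncertainty before any diversion hypothesis is entertained.
    ```

    The flow-verification question in the abstract concerns whether the receipts and shipments terms must be independently confirmed by an inspector, or whether statistics can be constructed that detect diversion without that comparison.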

  6. Verification for excess reactivity on beginning equilibrium core of RSG GAS

    International Nuclear Information System (INIS)

    Daddy Setyawan; Budi Rohman

    2011-01-01

    BAPETEN is an institution authorized to control the use of nuclear energy in Indonesia. Control of the use of nuclear energy is carried out through three pillars: regulation, licensing, and inspection. In order to assure the safety of the operating research reactors, the assessment unit of BAPETEN carries out independent assessment to verify safety-related parameters in the SAR, including the neutronic aspect. The work includes verification of the Power Peaking Factor in the equilibrium silicide core of the RSG GAS reactor by a computational method using MCNP-ORIGEN. This verification calculation yields 9.4 %. Meanwhile, the RSG-GAS safety analysis report shows that the excess reactivity of the equilibrium core of RSG GAS is 9.7 %. The verification calculation results show good agreement with the report. (author)
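    For orientation, excess reactivity is conventionally derived from an effective multiplication factor k_eff as rho = (k_eff - 1)/k_eff, expressed in % dk/k. A small sketch with hypothetical k_eff values shows the kind of comparison made against the SAR figure:

    ```python
    # Illustrative conversion from an effective multiplication factor k_eff
    # (e.g. as computed by a Monte Carlo code) to core excess reactivity in
    # % dk/k. The k_eff value below is hypothetical, chosen to give ~9.4 %.

    def excess_reactivity_pct(k_eff):
        """Excess reactivity rho = (k_eff - 1) / k_eff, in percent."""
        return 100.0 * (k_eff - 1.0) / k_eff

    calculated = excess_reactivity_pct(1.1038)   # roughly 9.4 % dk/k
    reported   = 9.7                             # value stated in the SAR
    difference = abs(calculated - reported)      # small -> good agreement
    ```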

  7. Procedure generation and verification

    International Nuclear Information System (INIS)

    Sheely, W.F.

    1986-01-01

    The Department of Energy has used Artificial Intelligence ("AI") concepts to develop two powerful new computer-based techniques to enhance safety in nuclear applications. The Procedure Generation System and the Procedure Verification System can be adapted to other commercial applications, such as a manufacturing plant. The Procedure Generation System can create a procedure to deal with an off-normal condition, so that the operator can take correct actions on the system in minimal time. The Verification System evaluates the logic of the Procedure Generator's conclusions, using logic techniques totally independent of the Procedure Generator. The rapid, accurate generation and verification of corrective procedures can greatly reduce the human error possible in a complex, high-stress situation.

  8. A Scalable Approach for Hardware Semiformal Verification

    OpenAIRE

    Grimm, Tomas; Lettnin, Djones; Hübner, Michael

    2018-01-01

    The current verification flow of complex systems uses different engines synergistically: virtual prototyping, formal verification, simulation, emulation and FPGA prototyping. However, none is able to verify a complete architecture. Furthermore, hybrid approaches aiming at complete verification use techniques that lower the overall complexity by increasing the abstraction level. This work focuses on the verification of complex systems at the RT level to handle the hardware peculiarities. Our r...

  9. 78 FR 28812 - Energy Efficiency Program for Industrial Equipment: Petition of UL Verification Services Inc. for...

    Science.gov (United States)

    2013-05-16

    ... are engineers. UL today is comprised of five businesses, Product Safety, Verification Services, Life..., Director--Global Technical Research, UL Verification Services. Subscribed and sworn to before me this 20... (431.447(c)(4)) General Personnel Overview UL is a global independent safety science company with more...

  10. A safeguards verification technique for solution homogeneity and volume measurements in process tanks

    International Nuclear Information System (INIS)

    Suda, S.; Franssen, F.

    1987-01-01

    A safeguards verification technique is being developed for determining whether process-liquid homogeneity has been achieved in process tanks and for authenticating volume-measurement algorithms involving temperature corrections. It is proposed that, in new designs for bulk-handling plants employing automated process lines, bubbler probes and thermocouples be installed at several heights in key accountability tanks. High-accuracy measurements of density using an electromanometer can now be made which match or even exceed analytical-laboratory accuracies. Together with regional determination of tank temperatures, these measurements provide density, liquid-column weight and temperature gradients over the fill range of the tank that can be used to ascertain when the tank solution has reached equilibrium. Temperature-correction algorithms can be authenticated by comparing the volumes obtained from the several bubbler-probe liquid-height measurements, each based on different amounts of liquid above and below the probe. The verification technique is based on the automated electromanometer system developed by Brookhaven National Laboratory (BNL). The IAEA has recently approved the purchase of a stainless-steel tank equipped with multiple bubbler and thermocouple probes for installation in its Bulk Calibration Laboratory at IAEA Headquarters, Vienna. The verification technique is scheduled for preliminary trials in late 1987
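    The bubbler-probe principle behind this technique is hydrostatic: the pressure difference between two probes at known heights gives the solution density, and each probe's gauge pressure then gives the liquid column above it. A sketch with hypothetical probe spacing and pressure readings:

    ```python
    # Hydrostatic relations underlying bubbler-probe tank measurements.
    # Probe separation and pressure readings below are hypothetical.

    G = 9.80665  # standard gravity, m/s^2

    def density(delta_p, delta_h):
        """Solution density (kg/m^3) from the pressure difference (Pa)
        between two probes separated vertically by delta_h (m)."""
        return delta_p / (G * delta_h)

    def liquid_height(p, rho):
        """Liquid column (m) above a probe reading gauge pressure p (Pa)."""
        return p / (rho * G)

    rho = density(delta_p=4903.325, delta_h=0.5)   # ~1000 kg/m^3 (water-like)
    h = liquid_height(p=14709.975, rho=rho)        # ~1.5 m above the probe
    ```

    With probes at several heights, density computed between each pair should agree once the tank contents are homogeneous, which is exactly the equilibrium check the technique exploits; redundant height estimates from different probes likewise cross-check the volume algorithm.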

  11. Investigation of novel spent fuel verification system for safeguard application

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Haneol; Yim, Man-Sung [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    Radioactive waste, especially spent fuel, is generated from the operation of nuclear power plants. The final stage of radioactive waste management is disposal which isolates radioactive waste from the accessible environment and allows it to decay. The safety, security, and safeguard of a spent fuel repository have to be evaluated before its operation. Many researchers have evaluated the safety of a repository. These researchers calculated dose to public after the repository is closed depending on their scenario. Because most spent fuel repositories are non-retrievable, research on security or safeguards of spent fuel repositories has to be performed. Design-based security or safeguards have to be developed for future repository designs. This study summarizes the requirements of future spent fuel repositories, especially safeguards, and suggests a novel system which meets the safeguard requirements. Applying safeguards to a spent fuel repository is becoming increasingly important. The future requirements for a spent fuel repository are suggested by several expert groups, such as ASTOR in IAEA. The requirements emphasize surveillance and verification. The surveillance and verification of spent fuel is currently accomplished by using the Cerenkov radiation detector while spent fuel is being stored in a fuel pool. This research investigated an advanced spent fuel verification system using a system which converts spent fuel radiation into electricity. The system generates electricity while it is conveyed from a transportation cask to a disposal cask. The electricity conversion system was verified in a lab scale experiment using an 8.51 GBq Cs-137 gamma source.

  12. Investigation of novel spent fuel verification system for safeguard application

    International Nuclear Information System (INIS)

    Lee, Haneol; Yim, Man-Sung

    2016-01-01

    Radioactive waste, especially spent fuel, is generated from the operation of nuclear power plants. The final stage of radioactive waste management is disposal which isolates radioactive waste from the accessible environment and allows it to decay. The safety, security, and safeguard of a spent fuel repository have to be evaluated before its operation. Many researchers have evaluated the safety of a repository. These researchers calculated dose to public after the repository is closed depending on their scenario. Because most spent fuel repositories are non-retrievable, research on security or safeguards of spent fuel repositories has to be performed. Design-based security or safeguards have to be developed for future repository designs. This study summarizes the requirements of future spent fuel repositories, especially safeguards, and suggests a novel system which meets the safeguard requirements. Applying safeguards to a spent fuel repository is becoming increasingly important. The future requirements for a spent fuel repository are suggested by several expert groups, such as ASTOR in IAEA. The requirements emphasize surveillance and verification. The surveillance and verification of spent fuel is currently accomplished by using the Cerenkov radiation detector while spent fuel is being stored in a fuel pool. This research investigated an advanced spent fuel verification system using a system which converts spent fuel radiation into electricity. The system generates electricity while it is conveyed from a transportation cask to a disposal cask. The electricity conversion system was verified in a lab scale experiment using an 8.51 GBq Cs-137 gamma source.

  13. Trial Promoter: A Web-Based Tool for Boosting the Promotion of Clinical Research Through Social Media.

    Science.gov (United States)

    Reuter, Katja; Ukpolo, Francis; Ward, Edward; Wilson, Melissa L; Angyan, Praveen

    2016-06-29

    Scarce information about clinical research, in particular clinical trials, is among the top reasons why potential participants do not take part in clinical studies. Without volunteers, on the other hand, clinical research and the development of novel approaches to preventing, diagnosing, and treating disease are impossible. Promising digital options such as social media have the potential to work alongside traditional methods to boost the promotion of clinical research. However, investigators and research institutions are challenged to leverage these innovations while saving time and resources. To develop and test the efficiency of a Web-based tool that automates the generation and distribution of user-friendly social media messages about clinical trials. Trial Promoter is developed in Ruby on Rails, HTML, cascading style sheet (CSS), and JavaScript. In order to test the tool and the correctness of the generated messages, clinical trials (n=46) were randomized into social media messages and distributed via the microblogging social media platform Twitter and the social network Facebook. The percent correct was calculated to determine the probability with which Trial Promoter generates accurate messages. During a 10-week testing phase, Trial Promoter automatically generated and published 525 user-friendly social media messages on Twitter and Facebook. On average, Trial Promoter correctly used the message templates and substituted the message parameters (text, URLs, and disease hashtags) 97.7% of the time (1563/1600). Trial Promoter may serve as a promising tool to render clinical trial promotion more efficient while requiring limited resources. It supports the distribution of any research or other types of content. The Trial Promoter code and installation instructions are freely available online.
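    The generation step the study evaluates amounts to template substitution plus a correctness score. A minimal sketch (template and field names are hypothetical; Trial Promoter itself is built in Ruby on Rails, so Python is used here purely for illustration):

    ```python
    # Sketch of automated trial-message generation: fill a template with a
    # trial's text, URL, and disease hashtag, then score how often the
    # generated messages match the expected output (the study's 97.7%).

    TEMPLATE = "{text} {url} {hashtag}"   # hypothetical message template

    def make_message(text, url, hashtag):
        """Substitute the three message parameters into the template."""
        return TEMPLATE.format(text=text, url=url, hashtag=hashtag)

    def percent_correct(generated, expected):
        """Share of generated messages that exactly match expectations."""
        hits = sum(g == e for g, e in zip(generated, expected))
        return 100.0 * hits / len(expected)

    msg = make_message("Now enrolling: a diabetes study",
                       "https://example.org/trial/42", "#diabetes")
    ```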

  14. Survey on Offline Finger Print Verification System

    NARCIS (Netherlands)

    Suman, R.; Kaur, R.

    2012-01-01

    Fingerprint verification means matching a user's fingerprint against the single fingerprint associated with the identity that the user claims. Biometrics can be classified into two types: behavioural (signature verification, keystroke dynamics, etc.) and physiological

  15. In-core Instrument Subcritical Verification (INCISV) - Core Design Verification Method - 358

    International Nuclear Information System (INIS)

    Prible, M.C.; Heibel, M.D.; Conner, S.L.; Sebastiani, P.J.; Kistler, D.P.

    2010-01-01

    According to the standard on reload startup physics testing, ANSI/ANS 19.6.1, a plant must verify that the constructed core behaves sufficiently close to the designed core to confirm that the various safety analyses bound the actual behavior of the plant. A large portion of this verification must occur before the reactor operates at power. The INCISV Core Design Verification Method uses the unique characteristics of a Westinghouse Electric Company fixed in-core self-powered detector design to perform core design verification after a core reload, before power operation. A Vanadium self-powered detector that spans the length of the active fuel region is capable of confirming the required core characteristics prior to power ascension: reactivity balance, shutdown margin, temperature coefficient and power distribution. Using a detector element that spans the length of the active fuel region inside the core provides a signal of total integrated flux. Measuring the integrated flux distributions and changes at various rodded conditions and plant temperatures, and comparing them to predicted flux levels, validates all necessary core design characteristics. INCISV eliminates the dependence on various corrections and assumptions between the ex-core detectors and the core for traditional physics testing programs. This program also eliminates the need for special rod maneuvers, which are infrequently performed by plant operators during typical core design verification testing, and allows for safer startup activities. (authors)

  16. Mathematical verification of a nuclear power plant protection system function with combined CPN and PVS

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Seo Ryong; Son, Han Seong; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1999-12-31

    In this work, an automatic software verification method for Nuclear Power Plant (NPP) protection system is developed. This method utilizes Colored Petri Net (CPN) for modeling and Prototype Verification System (PVS) for mathematical verification. In order to help flow-through from modeling by CPN to mathematical proof by PVS, a translator has been developed in this work. The combined method has been applied to a protection system function of Wolsong NPP SDS2(Steam Generator Low Level Trip) and found to be promising for further research and applications. 7 refs., 10 figs. (Author)

  17. Mathematical verification of a nuclear power plant protection system function with combined CPN and PVS

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Seo Ryong; Son, Han Seong; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    In this work, an automatic software verification method for Nuclear Power Plant (NPP) protection system is developed. This method utilizes Colored Petri Net (CPN) for modeling and Prototype Verification System (PVS) for mathematical verification. In order to help flow-through from modeling by CPN to mathematical proof by PVS, a translator has been developed in this work. The combined method has been applied to a protection system function of Wolsong NPP SDS2(Steam Generator Low Level Trip) and found to be promising for further research and applications. 7 refs., 10 figs. (Author)

  18. Achievement report for fiscal 1999 on project for supporting the formation of energy/environmental technology verification project. International joint verification research project (Verification project relative to ignition and NOx reduction using plasma sub-burner in pulverized coal-fired furnace); 1999 nendo plasma sabubana ni yoru bifuntan nenshoro ni okeru chakka oyobi NO{sub x} teigen gijutsu ni kansuru jissho project seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    This project is executed through the cooperation of a Russian research institute, Akita Prefectural University, and the Ishikawajima-Harima Heavy Industries Co., Ltd. In the development of a plasma sub-burner and the basic research for its verification, a pulverized coal burning plasma sub-burner is designed and fabricated, a basic burning experiment is conducted for the plasma sub-burner, and plasma stabilization in a pulverized coal flow is simulated. In the verification study of the ignition by the plasma sub-burner in a pulverized coal-fired furnace, it is found that the newly-developed plasma sub-burner satisfies the prescribed operating conditions in the system and that the ignition of pulverized coal takes place across the air ratio range of 0.5-1.5 when pulverized coal is fed to the sub-burner. It is also found that NOx is reduced a great deal when a plasma operating on an orifice gas of air or nitrogen is generated in a gas which contains NOx. (NEDO)

  19. Sensitivity Verification of PWR Monitoring System Using Neuro-Expert For LOCA Detection

    International Nuclear Information System (INIS)

    Muhammad Subekti

    2009-01-01

    The present research verifies the previously developed method for Loss of Coolant Accident (LOCA) detection and performs simulations to determine the sensitivity of a PWR monitoring system that applies the neuro-expert method. Earlier research had developed and tested the neuro-expert method for several anomaly detections in Nuclear Power Plants (NPPs) of the Pressurized Water Reactor (PWR) type. The neuro-expert method can detect the LOCA anomaly at a primary coolant leakage of 7 gallon/min, whereas the conventional method could not detect a leakage of 30 gallon/min. The neuro-expert method also detects the LOCA anomaly significantly faster than the conventional system at the Surry-1 NPP, so that the impact risk is reduced. (author)

  20. Influences on visit retention in clinical trials: insights from qualitative research during the VOICE trial in Johannesburg, South Africa.

    Science.gov (United States)

    Magazi, Busisiwe; Stadler, Jonathan; Delany-Moretlwe, Sinead; Montgomery, Elizabeth; Mathebula, Florence; Hartmann, Miriam; van der Straten, Ariane

    2014-07-28

    Although significant progress has been made in clinical trials of women-controlled methods of HIV prevention such as microbicides and Pre-exposure Prophylaxis (PrEP), low adherence to experimental study products remains a major obstacle to being able to establish their efficacy in preventing HIV infection. One factor that influences adherence is the ability of trial participants to attend regular clinic visits at which trial products are dispensed, adherence counseling is administered, and participant safety is monitored. We conducted a qualitative study of the social contextual factors that influenced adherence in the VOICE (MTN-003) trial in Johannesburg, South Africa, focusing on study participation in general, and study visits in particular. The research used qualitative methodologies, including in-depth interviews (IDI), serial ethnographic interviews (EI), and focus group discussions (FGD) among a random sub-sample of 102 female trial participants, 18 to 40 years of age. A socio-ecological framework that explored those factors that shaped trial participation and adherence to study products, guided the analysis. Key codes were developed to standardize subsequent coding and a node search was used to identify texts relating to obstacles to visit adherence. Our analysis includes coded transcripts from seven FGD (N = 40), 41 IDI, and 64 serial EI (N = 21 women). Women's kinship, social, and economic roles shaped their ability to participate in the clinical trial. Although participants expressed strong commitments to attend study visits, clinic visit schedules and lengthy waiting times interfered with their multiple obligations as care givers, wage earners, housekeepers, and students. The research findings highlight the importance of the social context in shaping participation in HIV prevention trials, beyond focusing solely on individual characteristics. This points to the need to focus interventions to improve visit attendance by promoting a culture of

  1. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.
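    The decision logic of the derived model can be written as a simple rule set. The sketch below encodes the major and minor criteria and the risk categories as described in the abstract; function and label names are illustrative.

    ```python
    # Rule-based sketch of the derived prediction model: one major criterion
    # (fingerprint dystrophy area >= 25%) and two minor criteria (long
    # horizontal lines, long vertical lines), mapped to the stated outcomes.

    def verification_risk(dystrophy_pct, long_horizontal, long_vertical):
        """Predict fingerprint verification outcome for a hand-dermatitis
        patient from the major and minor criteria."""
        if dystrophy_pct >= 25.0:
            return "almost always fails"
        minors = int(long_horizontal) + int(long_vertical)
        if minors == 2:
            return "high risk of failure"
        if minors == 1:
            return "low risk of failure"
        return "almost always passes"

    outcome = verification_risk(30.0, False, False)   # major criterion met
    ```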

  2. Evaluation of cluster-randomized trials on maternal and child health research in developing countries

    DEFF Research Database (Denmark)

    Handlos, Line Neerup; Chakraborty, Hrishikesh; Sen, Pranab Kumar

    2009-01-01

    To summarize and evaluate all publications including cluster-randomized trials used for maternal and child health research in developing countries during the last 10 years. METHODS: All cluster-randomized trials published between 1998 and 2008 were reviewed, and those that met our criteria for inclusion were evaluated further. The criteria for inclusion were that the trial should have been conducted in maternal and child health care in a developing country and that the conclusions should have been made on an individual level. Methods of accounting for clustering in design and analysis were …, and the trials generally improved in quality. CONCLUSIONS: Shortcomings exist in the sample-size calculations and in the analysis of cluster-randomized trials conducted during maternal and child health research in developing countries. Even though there has been improvement over time, further progress in the way…
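The sample-size shortcoming this record highlights concerns the standard design-effect correction for cluster randomization. A minimal sketch (not from the record itself; the formula DEFF = 1 + (m − 1) × ICC is the textbook correction, and the function names are ours):

```python
def design_effect(cluster_size, icc):
    """Design effect (DEFF) for a cluster-randomized trial with average
    cluster size m and intracluster correlation coefficient ICC."""
    return 1.0 + (cluster_size - 1) * icc

def cluster_sample_size(n_individual, cluster_size, icc):
    """Inflate an individually-randomized sample size by the design effect.
    In practice the result is rounded up to whole participants."""
    return n_individual * design_effect(cluster_size, icc)
```

With 20 participants per cluster and an ICC of 0.05, a trial that would need 400 individually randomized participants needs roughly 780 under cluster randomization.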

  3. The End-To-End Safety Verification Process Implemented to Ensure Safe Operations of the Columbus Research Module

    Science.gov (United States)

    Arndt, J.; Kreimer, J.

    2010-09-01

    The European Space Laboratory COLUMBUS was launched in February 2008 with NASA Space Shuttle Atlantis. Since successful docking and activation, this manned laboratory forms part of the International Space Station (ISS). Depending on the objectives of the Mission Increments, the on-orbit configuration of the COLUMBUS Module varies with each increment. This paper describes the end-to-end verification that has been implemented to ensure safe operations under the condition of a changing on-orbit configuration. That verification process has to cover not only the configuration changes foreseen by the Mission Increment planning, but also those configuration changes on short notice which become necessary due to near real-time requests initiated by crew or Flight Control, and changes - most challenging since unpredictable - due to on-orbit anomalies. The subject of the safety verification is, on the one hand, the on-orbit configuration itself, including the hardware and software products, and, on the other hand, the related ground facilities needed for commanding of and communication with the on-orbit system. The operational products, e.g. the procedures prepared for crew and ground control in accordance with increment planning, are also subject to the overall safety verification. In order to analyse the on-orbit configuration for potential hazards and to verify the implementation of the related safety-required hazard controls, a hierarchical approach is applied. The key element of the analytical safety integration of the whole COLUMBUS Payload Complement, including hardware owned by International Partners, is the Integrated Experiment Hazard Assessment (IEHA). The IEHA especially identifies those hazardous scenarios which could potentially arise through physical and operational interaction of experiments. A major challenge is the implementation of a safety process that is rigid enough to provide reliable verification of on-board safety and that likewise provides enough

  4. Ethical and policy issues in cluster randomized trials: rationale and design of a mixed methods research study

    Directory of Open Access Journals (Sweden)

    Chaudhry Shazia H

    2009-07-01

    Full Text Available. Background: Cluster randomized trials are an increasingly important methodological tool in health research. In cluster randomized trials, intact social units or groups of individuals, such as medical practices, schools, or entire communities – rather than individuals themselves – are randomly allocated to intervention or control conditions, and outcomes are then observed on individual cluster members. The substantial methodological differences between cluster randomized trials and conventional randomized trials pose serious challenges to the current conceptual framework for research ethics. The ethical implications of randomizing groups rather than individuals are not addressed in current research ethics guidelines, nor have they even been thoroughly explored. The main objectives of this research are to: (1) identify ethical issues arising in cluster trials and learn how they are currently being addressed; (2) understand how ethics reviews of cluster trials are carried out in different countries (Canada, the USA, and the UK); (3) elicit the views and experiences of trial participants and cluster representatives; (4) develop well-grounded guidelines for the ethical conduct and review of cluster trials by conducting an extensive ethical analysis and organizing a consensus process; (5) disseminate the guidelines to researchers, research ethics boards (REBs), journal editors, and research funders. Methods: We will use a mixed-methods (qualitative and quantitative) approach incorporating both empirical and conceptual work. Empirical work will include a systematic review of a random sample of published trials, a survey and in-depth interviews with trialists, a survey of REBs, and in-depth interviews and focus group discussions with trial participants and gatekeepers. The empirical work will inform the concurrent ethical analysis which will lead to a guidance document laying out principles, policy options, and rationale for proposed guidelines. An

  5. Reconciling research and community priorities in participatory trials: application to Padres Informados/Jovenes Preparados.

    Science.gov (United States)

    Allen, Michele L; Garcia-Huidobro, Diego; Bastian, Tiana; Hurtado, G Ali; Linares, Roxana; Svetaz, María Veronica

    2017-06-01

    Participatory research (PR) trials aim to achieve the dual, and at times competing, demands of producing an intervention and research process that address community perspectives and priorities, while establishing intervention effectiveness. To identify research and community priorities that must be reconciled in the areas of collaborative processes, study design and aim and study implementation quality in order to successfully conduct a participatory trial. We describe how this reconciliation was approached in the smoking prevention participatory trial Padres Informados/Jovenes Preparados (Informed Parents/Prepared Youth) and evaluate the success of our reconciled priorities. Data sources to evaluate success of the reconciliations included a survey of all partners regarding collaborative group processes, intervention participant recruitment and attendance and surveys of enrolled study participants assessing intervention outcomes. While we successfully achieved our reconciled collaborative processes and implementation quality goals, we did not achieve our reconciled goals in study aim and design. Due in part to the randomized wait-list control group design chosen in the reconciliation process, we were not able to demonstrate overall efficacy of the intervention or offer timely services to families in need of support. Achieving the goals of participatory trials is challenging but may yield community and research benefits. Innovative research designs are needed to better support the complex goals of participatory trials. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the
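The manufactured-solution approach to code verification recommended in this paper can be illustrated with a toy check. A hedged sketch (our own example, not from the paper): apply a central-difference Laplacian to the manufactured solution u(x) = sin(πx) and confirm that the residual against the exact −u″ shrinks at the expected second-order rate.

```python
import math

def residual_norm(n):
    """Max-norm mismatch between the central-difference approximation of
    -u'' and its exact value for the manufactured solution u(x) = sin(pi*x)
    on a grid of n intervals; it should shrink as O(h**2)."""
    u = lambda x: math.sin(math.pi * x)
    h = 1.0 / n
    worst = 0.0
    for i in range(1, n):
        x = i * h
        disc = -(u(x - h) - 2.0 * u(x) + u(x + h)) / h**2  # discrete -u''
        exact = math.pi ** 2 * u(x)
        worst = max(worst, abs(disc - exact))
    return worst

# Observed order of accuracy from two grid levels; a value near 2
# verifies the second-order scheme against the manufactured solution.
observed_order = math.log2(residual_norm(16) / residual_norm(32))
```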

  7. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  8. Impact of radiation research on clinical trials in radiation oncology

    International Nuclear Information System (INIS)

    Rubin, P.; Van Ess, J.D.

    1989-01-01

    The authors present an outline review of the history of the formation of the cooperative group called International Clinical Trials in Radiation Oncology (ICTRO), and the following areas are briefly discussed, together with some projections for the direction of clinical trials in radiation oncology into the 1990s: radiosensitizers, radioprotectors, and their combination; drug-radiation interactions; dose/time/fractionation; hyperthermia; biological response modifiers and radiolabelled antibodies; high LET, particularly neutron therapy; large field irradiation and intraoperative irradiation; and research studies on specific sites. (U.K.)

  9. Potential of adaptive clinical trial designs in pharmacogenetic research, A simulation based on the IPASS trial

    NARCIS (Netherlands)

    Van Der Baan, Frederieke H.; Knol, Mirjam J.|info:eu-repo/dai/nl/304820350; Klungel, Olaf H.|info:eu-repo/dai/nl/181447649; Egberts, Toine C.G.|info:eu-repo/dai/nl/162850050; Grobbee, Diederick E.; Roes, Kit C.B.

    2011-01-01

    Background: An adaptive clinical trial design that allows population enrichment after interim analysis can be advantageous in pharmacogenetic research if previous evidence is not strong enough to exclude part of the patient population beforehand. With this design, underpowered studies or unnecessary

  10. Quantitative analysis of patient-specific dosimetric IMRT verification

    International Nuclear Information System (INIS)

    Budgell, G J; Perrin, B A; Mott, J H L; Fairfoul, J; Mackay, R I

    2005-01-01

    Patient-specific dosimetric verification methods for IMRT treatments are variable, time-consuming and frequently qualitative, preventing evidence-based reduction in the amount of verification performed. This paper addresses some of these issues by applying a quantitative analysis parameter to the dosimetric verification procedure. Film measurements in different planes were acquired for a series of ten IMRT prostate patients, analysed using the quantitative parameter, and compared to determine the most suitable verification plane. Film and ion chamber verification results for 61 patients were analysed to determine long-term accuracy, reproducibility and stability of the planning and delivery system. The reproducibility of the measurement and analysis system was also studied. The results show that verification results are strongly dependent on the plane chosen, with the coronal plane particularly insensitive to delivery error. Unexpectedly, no correlation could be found between the levels of error in different verification planes. Longer term verification results showed consistent patterns which suggest that the amount of patient-specific verification can be safely reduced, provided proper caution is exercised: an evidence-based model for such reduction is proposed. It is concluded that dose/distance to agreement (e.g., 3%/3 mm) should be used as a criterion of acceptability. Quantitative parameters calculated for a given criterion of acceptability should be adopted in conjunction with displays that show where discrepancies occur. Planning and delivery systems which cannot meet the required standards of accuracy, reproducibility and stability to reduce verification will not be accepted by the radiotherapy community
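The dose/distance-to-agreement criterion advocated in this record's conclusion is what the gamma index combines into a single quantitative parameter. A minimal 1-D sketch in the spirit of the standard gamma formulation (a simplified brute-force illustration; real implementations interpolate and work in 2-D or 3-D, and all names here are ours):

```python
import math

def gamma_pass_rate(ref, meas, spacing_mm, dose_pct=3.0, dta_mm=3.0):
    """Percentage of reference points with gamma <= 1 for a 1-D profile.
    Dose differences are normalized globally to the reference maximum
    (dose_pct criterion) and distances to the DTA criterion (dta_mm)."""
    norm = max(ref)
    passed = 0
    for i, r in enumerate(ref):
        best = float("inf")
        for j, m in enumerate(meas):
            dd = (m - r) / (norm * dose_pct / 100.0)  # dose-difference term
            dist = (j - i) * spacing_mm / dta_mm      # distance-to-agreement term
            best = min(best, math.hypot(dd, dist))
        passed += best <= 1.0
    return 100.0 * passed / len(ref)
```

An identical pair of profiles passes everywhere; a 50% dose error at an isolated point fails only that point.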

  11. Post-silicon and runtime verification for modern processors

    CERN Document Server

    Wagner, Ilya

    2010-01-01

    The purpose of this book is to survey the state of the art and evolving directions in post-silicon and runtime verification. The authors start by giving an overview of the state of the art in verification, particularly current post-silicon methodologies in use in the industry, both for the domain of processor pipeline design and for memory subsystems. They then dive into the presentation of several new post-silicon verification solutions aimed at boosting the verification coverage of modern processors, dedicating several chapters to this topic. The presentation of runtime verification solution

  12. A Correctness Verification Technique for Commercial FPGA Synthesis Tools

    International Nuclear Information System (INIS)

    Kim, Eui Sub; Yoo, Jun Beom; Choi, Jong Gyun; Kim, Jang Yeol; Lee, Jang Soo

    2014-01-01

    uses the 'Actel Libero IDE' (internally, 'Synopsys Synplify Pro') to synthesize a netlist from the Verilog program, and also uses 'EDIFtoBLIF-MV' to translate the netlist into BLIF-MV. The VIS verification system is then used to prove behavioral equivalence. This paper is organized as follows: Section 2 provides background information. Section 3 explains the developed tool, which translates EDIF to BLIF-MV. A case study with Verilog examples from a Korean nuclear power plant is presented in Section 4, and Section 5 concludes the paper and provides remarks on future research extensions. This paper proposes a formal verification technique which can contribute, in part, to the correctness demonstration of commercial FPGA synthesis processes and tools. It formally checks the behavioral equivalence between the Verilog and the subsequently synthesized netlist with the VIS verification system. If the formal verification succeeds, then we can assure that the synthesis process from Verilog into the netlist worked correctly, at least for that Verilog program. In order to support the formal verification, we developed the mechanical translator 'EDIFtoBLIF-MV,' which translates EDIF into BLIF-MV while preserving behavioral equivalence. The translation from EDIF into BLIF-MV consists of three steps: Parsing, Pre-processing, and Translation. We performed the case study with Verilog programs designed for a digital I and C system in Korea.

  13. A Correctness Verification Technique for Commercial FPGA Synthesis Tools

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eui Sub; Yoo, Jun Beom [Konkuk University, Seoul (Korea, Republic of); Choi, Jong Gyun; Kim, Jang Yeol; Lee, Jang Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    uses the 'Actel Libero IDE' (internally, 'Synopsys Synplify Pro') to synthesize a netlist from the Verilog program, and also uses 'EDIFtoBLIF-MV' to translate the netlist into BLIF-MV. The VIS verification system is then used to prove behavioral equivalence. This paper is organized as follows: Section 2 provides background information. Section 3 explains the developed tool, which translates EDIF to BLIF-MV. A case study with Verilog examples from a Korean nuclear power plant is presented in Section 4, and Section 5 concludes the paper and provides remarks on future research extensions. This paper proposes a formal verification technique which can contribute, in part, to the correctness demonstration of commercial FPGA synthesis processes and tools. It formally checks the behavioral equivalence between the Verilog and the subsequently synthesized netlist with the VIS verification system. If the formal verification succeeds, then we can assure that the synthesis process from Verilog into the netlist worked correctly, at least for that Verilog program. In order to support the formal verification, we developed the mechanical translator 'EDIFtoBLIF-MV,' which translates EDIF into BLIF-MV while preserving behavioral equivalence. The translation from EDIF into BLIF-MV consists of three steps: Parsing, Pre-processing, and Translation. We performed the case study with Verilog programs designed for a digital I and C system in Korea.

  14. Verification of FPGA-based NPP I and C systems. General approach and techniques

    International Nuclear Information System (INIS)

    Andrashov, Anton; Kharchenko, Vyacheslav; Sklyar, Volodymir; Reva, Lubov; Siora, Alexander

    2011-01-01

    This paper presents a general approach and techniques for the design and verification of Field Programmable Gate Array (FPGA)-based Instrumentation and Control (I and C) systems for Nuclear Power Plants (NPP). Appropriate regulatory documents used for I and C systems design, development, verification and validation (V and V) are discussed, considering the latest international standards and guidelines. Typical development and V and V processes of FPGA electronic design for FPGA-based NPP I and C systems are presented. Some safety-related features of the implementation process are discussed. Corresponding development artifacts related to design and implementation activities are outlined. An approach to test-based verification of FPGA electronic design algorithms, as used in FPGA-based reactor trip systems, is proposed. The results of applying test-based techniques to the assessment of FPGA electronic design algorithms for the reactor trip system (RTS) produced by Research and Production Corporation (RPC) 'Radiy' are presented. Some principles of invariant-oriented verification for FPGA-based safety-critical systems are outlined. (author)

  15. Experimental verification of internal dosimetry calculations. Annual progress report

    International Nuclear Information System (INIS)

    1980-05-01

    During the past year a dosimetry research program has been established in the School of Nuclear Engineering at the Georgia Institute of Technology. The major objective of this program has been to provide research results upon which a useful internal dosimetry system could be based. The important application of this dosimetry system will be the experimental verification of internal dosimetry calculations such as those published by the MIRD Committee

  16. FPGA Design and Verification Procedure for Nuclear Power Plant MMIS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dongil; Yoo, Kawnwoo; Ryoo, Kwangki [Hanbat National Univ., Daejeon (Korea, Republic of)

    2013-05-15

    In this paper, it is shown that reliability can be ensured by performing verification steps based on the FPGA development methodology, thereby ensuring the safe application of the FPGA to the NPP MMIS. Currently, the PLC (Programmable Logic Controller) being developed is composed of an FPGA (Field Programmable Gate Array) and a CPU (Central Processing Unit). As the importance of the FPGA in the NPP (Nuclear Power Plant) MMIS (Man-Machine Interface System) has been increasing, research on the verification of the FPGA has recently become more and more concentrated.

  17. Asthma: NIH-Sponsored Research and Clinical Trials | NIH MedlinePlus the Magazine

    Science.gov (United States)

    Feature: Asthma: NIH-Sponsored Research and Clinical Trials. Past Issues / Fall 2011, Table of Contents. NIH-Sponsored Research: Asthma in the Inner City: Recognizing that asthma severity ...

  18. SU-F-T-494: A Multi-Institutional Study of Independent Dose Verification Using Golden Beam Data

    Energy Technology Data Exchange (ETDEWEB)

    Itano, M; Yamazaki, T [Inagi Municipal Hospital, Inagi, Tokyo (Japan); Tachibana, R; Uchida, Y [National Cancer Center Hospital East, Kashiwa, Chiba (Japan); Yamashita, M [Kobe City Medical Center General Hospital, Kobe, Hyogo (Japan); Shimizu, H [Kitasato University Medical Center, Kitamoto, Saitama (Japan); Sugawara, Y; Kotabe, K [National Center for Global Health and Medicine, Shinjuku, Tokyo (Japan); Kamima, T [Cancer Institute Hospital Japanese Foundation for Cancer Research, Koto, Tokyo (Japan); Takahashi, R [Cancer Institute Hospital of Japanese Foundation for Cancer Research, Koto, Tokyo (Japan); Ishibashi, S [Sasebo City General Hospital, Sasebo, Nagasaki (Japan); Tachibana, H [National Cancer Center, Kashiwa, Chiba (Japan)

    2016-06-15

    Purpose: In general, the beam data of an individual linac is measured for an independent dose verification software program, and the verification is performed as a secondary check. In this study, independent dose verification using golden beam data was compared to that using an individual linac's beam data. Methods: Six institutions participated, and three different beam data sets were prepared. One was individually measured data (Original Beam Data, OBD). The others were generated from all measurements of the same linac model (Model-GBD) and of all linac models (All-GBD). The three different beam data sets were registered to the independent verification software program at each institute. Subsequently, patients' plans in eight sites (brain, head and neck, lung, esophagus, breast, abdomen, pelvis and bone) were analyzed using the verification program to compare doses calculated using the three different beam data sets. Results: 1116 plans were collected from the six institutes. Compared to using the OBD, the variation using the Model-GBD-based calculation and the All-GBD was 0.0 ± 0.3% and 0.0 ± 0.6%, respectively. The maximum variations were 1.2% and 2.3%, respectively. In the plans with variation over 1%, the reference points were located away from the central axis, with or without a physical wedge. Conclusion: The confidence limit (2SD) using the Model-GBD and the All-GBD was within 0.6% and 1.2%, respectively. Thus, the use of golden beam data may be feasible for independent verification. In addition, verification using golden beam data provides quality assurance of planning from the viewpoint of an audit. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
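The confidence limit (2SD) reported in this record's conclusion matches a convention widely used for secondary-check action levels: |mean deviation| + 2 × standard deviation. A hedged sketch (the convention is assumed, not spelled out in the abstract; the function name is ours):

```python
import statistics

def confidence_limit(deviations_pct):
    """Confidence limit over a set of dose deviations (%), computed as
    |mean| + 2 * sample standard deviation, a common convention for
    setting action levels in independent dose verification."""
    return abs(statistics.fmean(deviations_pct)) + 2.0 * statistics.stdev(deviations_pct)
```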

  19. SU-F-T-494: A Multi-Institutional Study of Independent Dose Verification Using Golden Beam Data

    International Nuclear Information System (INIS)

    Itano, M; Yamazaki, T; Tachibana, R; Uchida, Y; Yamashita, M; Shimizu, H; Sugawara, Y; Kotabe, K; Kamima, T; Takahashi, R; Ishibashi, S; Tachibana, H

    2016-01-01

    Purpose: In general, beam data of individual linac is measured for independent dose verification software program and the verification is performed as a secondary check. In this study, independent dose verification using golden beam data was compared to that using individual linac’s beam data. Methods: Six institutions were participated and three different beam data were prepared. The one was individual measured data (Original Beam Data, OBD) .The others were generated by all measurements from same linac model (Model-GBD) and all linac models (All-GBD). The three different beam data were registered to the independent verification software program for each institute. Subsequently, patient’s plans in eight sites (brain, head and neck, lung, esophagus, breast, abdomen, pelvis and bone) were analyzed using the verification program to compare doses calculated using the three different beam data. Results: 1116 plans were collected from six institutes. Compared to using the OBD, the results shows the variation using the Model-GBD based calculation and the All-GBD was 0.0 ± 0.3% and 0.0 ± 0.6%, respectively. The maximum variations were 1.2% and 2.3%, respectively. The plans with the variation over 1% shows the reference points were located away from the central axis with/without physical wedge. Conclusion: The confidence limit (2SD) using the Model-GBD and the All-GBD was within 0.6% and 1.2%, respectively. Thus, the use of golden beam data may be feasible for independent verification. In addition to it, the verification using golden beam data provide quality assurance of planning from the view of audit. This research is partially supported by Japan Agency for Medical Research and Development(AMED)

  20. Verification and nuclear material security

    International Nuclear Information System (INIS)

    ElBaradei, M.

    2001-01-01

    Full text: The Director General will open the symposium by presenting a series of challenges facing the international safeguards community: the need to ensure a robust system, with strong verification tools and a sound research and development programme; the importance of securing the necessary support for the system, in terms of resources; the effort to achieve universal participation in the non-proliferation regime; and the necessity of re-energizing disarmament efforts. Special focus will be given to the challenge underscored by recent events, of strengthening international efforts to combat nuclear terrorism. (author)

  1. Research staff training in a multisite randomized clinical trial: Methods and recommendations from the Stimulant Reduction Intervention using Dosed Exercise (STRIDE) trial.

    Science.gov (United States)

    Walker, Robrina; Morris, David W; Greer, Tracy L; Trivedi, Madhukar H

    2014-01-01

    Descriptions of and recommendations for meeting the challenges of training research staff for multisite studies are limited despite the recognized importance of training on trial outcomes. The STRIDE (STimulant Reduction Intervention using Dosed Exercise) study is a multisite randomized clinical trial that was conducted at nine addiction treatment programs across the United States within the National Drug Abuse Treatment Clinical Trials Network (CTN) and evaluated the addition of exercise to addiction treatment as usual (TAU), compared to health education added to TAU, for individuals with stimulant abuse or dependence. Research staff administered a variety of measures that required a range of interviewing, technical, and clinical skills. In order to address the absence of information on how research staff are trained for multisite clinical studies, the current manuscript describes the conceptual process of training and certifying research assistants for STRIDE. Training was conducted using a three-stage process to allow staff sufficient time for distributive learning, practice, and calibration leading up to implementation of this complex study. Training was successfully implemented with staff across nine sites. Staff demonstrated evidence of study and procedural knowledge via quizzes and skill demonstration on six measures requiring certification. Overall, while the majority of staff had little to no experience in the six measures, all research assistants demonstrated ability to correctly and reliably administer the measures throughout the study. Practical recommendations are provided for training research staff and are particularly applicable to the challenges encountered with large, multisite trials.

  2. RESRAD-BUILD verification

    International Nuclear Information System (INIS)

    Kamboj, S.; Yu, C.; Biwer, B. M.; Klett, T.

    2002-01-01

    The results generated by the RESRAD-BUILD code (version 3.0) were verified with hand or spreadsheet calculations using equations given in the RESRAD-BUILD manual for different pathways. For verification purposes, different radionuclides--H-3, C-14, Na-22, Al-26, Cl-36, Mn-54, Co-60, Au-195, Ra-226, Ra-228, Th-228, and U-238--were chosen to test all pathways and models. Tritium, Ra-226, and Th-228 were chosen because of the special tritium and radon models in the RESRAD-BUILD code. Other radionuclides were selected to represent a spectrum of radiation types and energies. Verification of the RESRAD-BUILD code was conducted with an initial check of all the input parameters for correctness against their original source documents. Verification of the calculations was performed external to the RESRAD-BUILD code with Microsoft Excel to verify all the major portions of the code. In some cases, RESRAD-BUILD results were compared with those of external codes, such as MCNP (Monte Carlo N-particle) and RESRAD. The verification was conducted on a step-by-step basis and used different test cases as templates. The following types of calculations were investigated: (1) source injection rate, (2) air concentration in the room, (3) air particulate deposition, (4) radon pathway model, (5) tritium model for volume source, (6) external exposure model, (7) different pathway doses, and (8) time dependence of dose. Some minor errors were identified in version 3.0; these errors have been corrected in later versions of the code. Some possible improvements in the code were also identified
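The hand/spreadsheet verification style described here amounts to recomputing a pathway result from closed-form physics and comparing within a tolerance. A simplified sketch (our own toy example, not a RESRAD-BUILD formula): checking a code's time dependence against independent exponential-decay arithmetic.

```python
import math

def decayed_activity(a0, half_life_yr, t_yr):
    """Independent hand-style calculation: A(t) = A0 * exp(-ln2 * t / T_half)."""
    return a0 * math.exp(-math.log(2.0) * t_yr / half_life_yr)

def agrees(code_value, hand_value, rel_tol=1e-3):
    """True when the code output matches the hand calculation within rel_tol."""
    return abs(code_value - hand_value) <= rel_tol * abs(hand_value)
```

For tritium (half-life 12.32 y), activity should halve after one half-life; a code result deviating by more than the tolerance is flagged for investigation.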

  3. An Issues-Based Research Project: National Goals on Trial.

    Science.gov (United States)

    DeVille, Priscilla; And Others

    This paper summarizes the results of a research project completed by three doctoral students enrolled in an advanced curriculum development course at the University of Southern Mississippi (Hattiesburg). The students used a mock trial format to consider reasons to support establishment of a national curriculum (concerning the American public's…

  4. A virtual dosimetry audit - Towards transferability of gamma index analysis between clinical trial QA groups.

    Science.gov (United States)

    Hussein, Mohammad; Clementel, Enrico; Eaton, David J; Greer, Peter B; Haworth, Annette; Ishikura, Satoshi; Kry, Stephen F; Lehmann, Joerg; Lye, Jessica; Monti, Angelo F; Nakamura, Mitsuhiro; Hurkmans, Coen; Clark, Catharine H

    2017-12-01

    Quality assurance (QA) for clinical trials is important. Lack of compliance can affect trial outcome. Clinical trial QA groups have different methods of dose distribution verification and analysis, all with the ultimate aim of ensuring trial compliance. The aim of this study was to gain a better understanding of different processes to inform future dosimetry audit reciprocity. Six clinical trial QA groups participated. Intensity modulated treatment plans were generated for three different cases. A range of 17 virtual 'measurements' were generated by introducing a variety of simulated perturbations (such as MLC position deviations, dose differences, gantry rotation errors, and Gaussian noise) to the three treatment plan cases. Participants were blinded to the 'measured' data details. Each group analysed the datasets using its own gamma index (γ) technique and again using standardised parameters for the passing criteria, lower dose threshold, γ normalisation, and global γ. For the same virtual 'measured' datasets, different results were observed using the local techniques; differences in the percentage of points passing remained even with the standardised γ parameters. This virtual audit has been an informative step in understanding differences in the verification of measured dose distributions between different clinical trial QA groups. This work lays the foundations for audit reciprocity between groups, particularly with more clinical trials being open to international recruitment.
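The global gamma-index comparison this abstract describes can be sketched in a few lines; this is a generic 1D illustration, not any audit group's actual software, and the 3%/3 mm criteria, 10% lower dose threshold, and sample dose profiles are assumptions.

```python
import math

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dta_mm=3.0, low_cut=0.10):
    """Global 1D gamma analysis: dose tolerance as a fraction of the reference
    maximum, distance-to-agreement in mm; points below the lower dose
    threshold are excluded from the pass-rate statistic."""
    d_max = max(ref)
    passed = total = 0
    for i, dm in enumerate(meas):
        if ref[i] < low_cut * d_max:
            continue  # below lower dose threshold
        total += 1
        # gamma is the minimum combined dose/distance deviation over all
        # reference points; a point passes when gamma <= 1
        g = min(
            math.sqrt(((dm - dr) / (dose_tol * d_max)) ** 2
                      + ((abs(i - j) * spacing_mm) / dta_mm) ** 2)
            for j, dr in enumerate(ref)
        )
        if g <= 1.0:
            passed += 1
    return 100.0 * passed / total if total else 0.0

# Identical distributions pass everywhere; a grossly rescaled one does not.
ref = [float(x) for x in (10, 20, 50, 90, 100, 90, 50, 20, 10)]
print(gamma_pass_rate(ref, ref, spacing_mm=2.0))  # 100.0
```

Standardising the parameters (as the audit did) removes one source of inter-group variation, but implementation details such as search radius and interpolation can still shift the pass rate.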

  5. Verification of the computational dosimetry system in JAERI (JCDS) for boron neutron capture therapy

    International Nuclear Information System (INIS)

    Kumada, H; Yamamoto, K; Matsumura, A; Yamamoto, T; Nakagawa, Y; Nakai, K; Kageji, T

    2004-01-01

    Clinical trials for boron neutron capture therapy (BNCT) by using the medical irradiation facility installed in Japan Research Reactor No. 4 (JRR-4) at Japan Atomic Energy Research Institute (JAERI) have been performed since 1999. To carry out the BNCT procedure based on proper treatment planning and its precise implementation, the JAERI computational dosimetry system (JCDS) which is applicable to dose planning has been developed in JAERI. The aim of this study was to verify the performance of JCDS. The experimental data with a cylindrical water phantom were compared with the calculation results using JCDS. Data of measurements obtained from IOBNCT cases at JRR-4 were also compared with retrospective evaluation data with JCDS. In comparison with phantom experiments, the calculations and the measurements for thermal neutron flux and gamma-ray dose were in a good agreement, except at the surface of the phantom. Against the measurements of clinical cases, the discrepancy of JCDS's calculations was approximately 10%. These basic and clinical verifications demonstrated that JCDS has enough performance for the BNCT dosimetry. Further investigations are recommended for precise dose distribution and faster calculation environment

  6. Importance of mixed methods in pragmatic trials and dissemination and implementation research.

    Science.gov (United States)

    Albright, Karen; Gechter, Katherine; Kempe, Allison

    2013-01-01

    With increased attention to the importance of translating research to clinical practice and policy, recent years have seen a proliferation of particular types of research, including pragmatic trials and dissemination and implementation research. Such research seeks to understand how and why interventions function in real-world settings, as opposed to highly controlled settings involving conditions not likely to be repeated outside the research study. Because understanding the context in which interventions are implemented is imperative for effective pragmatic trials and dissemination and implementation research, the use of mixed methods is critical to understanding trial results and the success or failure of implementation efforts. This article discusses a number of dimensions of mixed methods research, utilizing at least one qualitative method and at least one quantitative method, that may be helpful when designing projects or preparing grant proposals. Although the strengths and emphases of qualitative and quantitative approaches differ substantially, methods may be combined in a variety of ways to achieve a deeper level of understanding than can be achieved by one method alone. However, researchers must understand when and how to integrate the data as well as the appropriate order, priority, and purpose of each method. The ability to demonstrate an understanding of the rationale for and benefits of mixed methods research is increasingly important in today's competitive funding environment, and many funding agencies now expect applicants to include mixed methods in proposals. The increasing demand for mixed methods research necessitates broader methodological training and deepened collaboration between medical, clinical, and social scientists. Although a number of challenges to conducting and disseminating mixed methods research remain, the potential for insight generated by such work is substantial.

  7. The Globalization of Pediatric Research: An Analysis of Clinical Trials Completed for Pediatric Exclusivity

    Science.gov (United States)

    Pasquali, Sara K.; Burstein, Danielle S.; Benjamin, Daniel K.; Smith, P. Brian; Li, Jennifer S.

    2010-01-01

    Background Recent studies have examined the globalization of clinical research. These studies focused on adult trials, and the globalization of pediatric research has not been examined to date. We evaluated the setting of published studies conducted under the US Pediatric Exclusivity Program, which provides economic incentives to pharmaceutical companies to conduct drug studies in children. Methods Published studies containing the main results of trials conducted from 1998–2007 under the Pediatric Exclusivity Provision were included. Data were extracted from each study and described, including the therapeutic area of drug studied, number of patients enrolled, number of sites, and location where the study was conducted, if reported. Results Overall, 174 trials were included (sample size 8–27,065 patients); 9% did not report any information regarding the location or number of sites where the study was conducted. Of those that did report this information, 65% were conducted in at least one country outside the US, and 11% did not have any sites in the US. Fifty-four different countries were represented and 38% of trials enrolled patients in at least one site located in a developing/transition country, including more than one third of infectious disease, cardiovascular, and allergy/immunology trials. Conclusions The majority of published pediatric trials conducted under the Pediatric Exclusivity Provision included sites outside of the US, and over a third of trials enrolled patients in developing/transition countries. While there are many potential benefits to the globalization of pediatric research, this trend also raises certain scientific and ethical concerns which require further evaluation. PMID:20732941

  8. Crowd Sourced Formal Verification-Augmentation (CSFV-A)

    Science.gov (United States)

    2016-06-01

    Defense Advanced Research Projects Agency (DARPA), Air Force Research Laboratory (AFRL), Charles River Analytics Inc., and TopCoder, Inc. will be holding a contest to reward... Final technical report, Charles River Analytics, Inc., June 2016; approved for public release.

  9. Telemedicine Provides Non-Inferior Research Informed Consent for Remote Study Enrollment: A Randomized Controlled Trial

    Science.gov (United States)

    Bobb, Morgan R.; Van Heukelom, Paul G.; Faine, Brett A.; Ahmed, Azeemuddin; Messerly, Jeffrey T.; Bell, Gregory; Harland, Karisa K.; Simon, Christian; Mohr, Nicholas M.

    2016-01-01

    Objective Telemedicine networks are beginning to provide an avenue for conducting emergency medicine research, but using telemedicine to recruit participants for clinical trials has not been validated. The goal of this consent study is to determine whether patient comprehension of telemedicine-enabled research informed consent is non-inferior to standard face-to-face research informed consent. Methods A prospective, open-label randomized controlled trial was performed in a 60,000-visit Midwestern academic Emergency Department (ED) to test whether telemedicine-enabled research informed consent provided non-inferior comprehension compared with standard consent. This study was conducted as part of a parent clinical trial evaluating the effectiveness of oral chlorhexidine gluconate 0.12% in preventing hospital-acquired pneumonia among adult ED patients with expected hospital admission. Prior to being recruited into the study, potential participants were randomized in a 1:1 allocation ratio to consent by telemedicine versus standard face-to-face consent. Telemedicine connectivity was provided using a commercially available interface (REACH platform, Vidyo Inc., Hackensack, NJ) to an emergency physician located in another part of the ED. Comprehension of research consent (primary outcome) was measured using the modified Quality of Informed Consent (QuIC) instrument, a validated tool for measuring research informed consent comprehension. Parent trial accrual rate and qualitative survey data were secondary outcomes. Results One hundred thirty-one patients were randomized (n = 64, telemedicine), and 101 QuIC surveys were completed. Comprehension of research informed consent using telemedicine was not inferior to face-to-face consent (QuIC scores 74.4 ± 8.1 vs. 74.4 ± 6.9 on a 100-point scale, p = 0.999). Subjective understanding of consent (p = 0.194) and parent trial study accrual rates (56% vs. 69%, p = 0.142) were similar. Conclusion Telemedicine-enabled research informed consent was non-inferior to standard face-to-face consent.
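The non-inferiority conclusion above rests on a confidence-bound argument that can be sketched as follows. The 5-point margin, the assumed group sizes of about 50 each, and the normal approximation are illustrative assumptions, not the trial's actual prespecified analysis.

```python
import math

def noninferior(mean_t, sd_t, n_t, mean_c, sd_c, n_c, margin, z=1.96):
    """Declare the test arm (t) non-inferior to the control arm (c) when the
    lower 95% confidence bound for (mean_t - mean_c) lies above -margin."""
    diff = mean_t - mean_c
    se = math.sqrt(sd_t ** 2 / n_t + sd_c ** 2 / n_c)
    lower = diff - z * se
    return lower > -margin, lower

# Numbers resembling the reported QuIC scores (74.4 +/- 8.1 vs. 74.4 +/- 6.9),
# with hypothetical group sizes and a hypothetical 5-point margin:
ok, lower = noninferior(74.4, 8.1, 50, 74.4, 6.9, 50, margin=5.0)
print(ok)  # True
```

The key design choice is that non-inferiority is a one-sided question: a high p-value for a difference (p = 0.999 here) is not itself evidence of equivalence; the confidence bound against a margin is what carries the claim.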

  10. Physician participation in clinical research and trials: issues and approaches

    Directory of Open Access Journals (Sweden)

    Sami F Shaban

    2011-03-01

    …research culture’. This article examines the barriers to and benefits of physician participation in clinical research, as well as interventions needed to increase their participation, including the specific role of undergraduate medical education. The main challenge is the unwillingness of many physicians and patients to participate in clinical trials. Barriers to participation include lack of time, lack of resources, trial-specific issues, communication difficulties, conflicts between the roles of clinician and scientist, inadequate research experience and training for physicians, lack of rewards and recognition for physicians, and sometimes a scientifically uninteresting research question, among others. Strategies to encourage physician participation in clinical research include financial and nonfinancial incentives, adequate training, research questions that are in line with physician interests and have clear potential to improve patient care, and regular feedback. Finally, encouraging a research culture and fostering the development of inquiry and research-based learning among medical students is now a high priority in order to develop more and better clinician-researchers. Keywords: physician, clinical research, clinical trial, medical education

  11. Future of monitoring and verification

    International Nuclear Information System (INIS)

    Wagenmakers, H.

    1991-01-01

    The organized verification entrusted to IAEA for the implementation of the NPT, of the Treaty of Tlatelolco and of the Treaty of Rarotonga, reaches reasonable standards. The current dispute with the Democratic People's Republic of Korea about the conclusion of a safeguards agreement with IAEA, by its exceptional nature, underscores rather than undermines the positive judgement to be passed on IAEA's overall performance. The additional task given to the Director General of IAEA under Security Council resolution 687 (1991) regarding Iraq's nuclear-weapons-usable material is particularly challenging. For the purposes of this paper, verification is defined as the process for establishing whether the States parties are complying with an agreement. In the final stage verification may lead into consideration of how to respond to non-compliance. Monitoring is perceived as the first level in the verification system. It is one generic form of collecting information on objects, activities or events and it involves a variety of instruments ranging from communications satellites to television cameras or human inspectors. Monitoring may also be used as a confidence-building measure

  12. Concepts for inventory verification in critical facilities

    International Nuclear Information System (INIS)

    Cobb, D.D.; Sapir, J.L.; Kern, E.A.; Dietz, R.J.

    1978-12-01

    Materials measurement and inventory verification concepts for safeguarding large critical facilities are presented. Inspection strategies and methods for applying international safeguards to such facilities are proposed. The conceptual approach to routine inventory verification includes frequent visits to the facility by one inspector, and the use of seals and nondestructive assay (NDA) measurements to verify the portion of the inventory maintained in vault storage. Periodic verification of the reactor inventory is accomplished by sampling and NDA measurement of in-core fuel elements combined with measurements of integral reactivity and related reactor parameters that are sensitive to the total fissile inventory. A combination of statistical sampling and NDA verification with measurements of reactor parameters is more effective than either technique used by itself. Special procedures for assessment and verification for abnormal safeguards conditions are also considered. When the inspection strategies and inventory verification methods are combined with strict containment and surveillance methods, they provide a high degree of assurance that any clandestine attempt to divert a significant quantity of fissile material from a critical facility inventory will be detected. Field testing of specific hardware systems and procedures to determine their sensitivity, reliability, and operational acceptability is recommended. 50 figures, 21 tables
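The statistical-sampling component described above can be quantified with the standard hypergeometric detection-probability formula used in safeguards sampling-plan design; the inventory sizes and diversion counts below are hypothetical.

```python
from math import comb

def detection_probability(N, d, n):
    """Probability that a random sample of n items drawn from an inventory of
    N items contains at least one of d diverted/substituted items:
    P = 1 - C(N-d, n) / C(N, n)."""
    if n > N - d:  # sample so large it must include a diverted item
        return 1.0
    return 1.0 - comb(N - d, n) / comb(N, n)

# Hypothetical vault of 100 fuel elements with 5 substituted items: the NDA
# sample size needed for high confidence grows quickly.
for n in (10, 20, 40):
    print(n, round(detection_probability(100, 5, n), 3))
```

This is why the abstract argues for combining sampling with integral reactor-parameter measurements: the integral measurement is sensitive to the total fissile inventory, covering the diversion scenarios that a modest sample size would likely miss.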

  13. Verification and Examination Management of Complex Systems

    Directory of Open Access Journals (Sweden)

    Stian Ruud

    2014-10-01

    As ship systems become more complex, with an increasing number of safety-critical functions, many interconnected subsystems, tight integration to other systems, and a large amount of potential failure modes, several industry parties have identified the need for improved methods for managing the verification and examination efforts of such complex systems. Such needs are even more prominent now that the marine and offshore industries are targeting more activities and operations in the Arctic environment. In this paper, a set of requirements and a method for verification and examination management are proposed for allocating examination efforts to selected subsystems. The method is based on a definition of a verification risk function for a given system topology and given requirements. The marginal verification risks for the subsystems may then be evaluated, so that examination efforts for the subsystem can be allocated. Two cases of requirements and systems are used to demonstrate the proposed method. The method establishes a systematic relationship between the verification loss, the logic system topology, verification method performance, examination stop criterion, the required examination effort, and a proposed sequence of examinations to reach the examination stop criterion.
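The paper's actual risk function is not reproduced in the abstract, but the general idea of allocating examination effort by marginal risk reduction can be sketched greedily; the risk numbers, the multiplicative effectiveness model, and the fixed examination budget are all hypothetical assumptions for illustration.

```python
def allocate(risks, effect, budget):
    """Greedy marginal-risk allocation: each examination of subsystem i
    multiplies its residual verification risk by (1 - effect[i]); repeatedly
    examine the subsystem whose next examination removes the most risk."""
    residual = list(risks)
    plan = []
    for _ in range(budget):
        gains = [r * e for r, e in zip(residual, effect)]
        i = max(range(len(gains)), key=gains.__getitem__)
        plan.append(i)
        residual[i] *= 1 - effect[i]
    return plan, sum(residual)

# Three subsystems with unequal initial risks and examination effectiveness;
# the high-risk subsystem absorbs effort first, then attention shifts.
plan, remaining = allocate([0.10, 0.40, 0.05], [0.5, 0.6, 0.9], budget=4)
print(plan, round(remaining, 4))
```

A stop criterion, as in the paper, would replace the fixed budget: examinations continue until the summed residual risk falls below a threshold.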

  14. Monitoring and verification R and D

    International Nuclear Information System (INIS)

    Pilat, Joseph F.; Budlong-Sylvester, Kory W.; Fearey, Bryan L.

    2011-01-01

    The 2010 Nuclear Posture Review (NPR) report outlined the Administration's approach to promoting the agenda put forward by President Obama in Prague on April 5, 2009. The NPR calls for a national monitoring and verification R and D program to meet future challenges arising from the Administration's nonproliferation, arms control and disarmament agenda. Verification of a follow-on to New START could have to address warheads and possibly components along with delivery capabilities. Deeper cuts and disarmament would need to address all of these elements along with nuclear weapon testing, nuclear material and weapon production facilities, virtual capabilities from old weapon and existing energy programs and undeclared capabilities. We only know how to address some elements of these challenges today, and the requirements may be more rigorous in the context of deeper cuts as well as disarmament. Moreover, there is a critical need for multiple options to sensitive problems and to address other challenges. There will be other verification challenges in a world of deeper cuts and disarmament, some of which we are already facing. At some point, if the reductions process is progressing, uncertainties about past nuclear materials and weapons production will have to be addressed. IAEA safeguards will need to continue to evolve to meet current and future challenges, and to take advantage of new technologies and approaches. Transparency/verification of nuclear and dual-use exports will also have to be addressed, and there will be a need to make nonproliferation measures more watertight and transparent. In this context, and recognizing we will face all of these challenges even if disarmament is not achieved, this paper will explore possible agreements and arrangements; verification challenges; gaps in monitoring and verification technologies and approaches; and the R and D required to address these gaps and other monitoring and verification challenges.

  15. Clinical trial network for the promotion of clinical research for rare diseases in Japan: muscular dystrophy clinical trial network.

    Science.gov (United States)

    Shimizu, Reiko; Ogata, Katsuhisa; Tamaura, Akemi; Kimura, En; Ohata, Maki; Takeshita, Eri; Nakamura, Harumasa; Takeda, Shin'ichi; Komaki, Hirofumi

    2016-07-11

    Duchenne muscular dystrophy (DMD) is the most common inherited neuromuscular disease. Therapeutic agents for the treatment of rare diseases, namely "orphan drugs", have recently drawn the attention of researchers and pharmaceutical companies. To ensure the successful conduct of clinical trials evaluating novel treatments for patients with rare diseases, an appropriate infrastructure is needed. One effective solution to the lack of infrastructure is to establish a rare-disease network. To facilitate the conduct of clinical trials in Japan, the Muscular Dystrophy Clinical Trial Network (MDCTN) was established by the clinical research group for muscular dystrophy, including the National Center of Neurology and Psychiatry as well as national and university hospitals, all of which have a long-standing history of research cooperation. Thirty-one medical institutions (17 national hospital organizations, 10 university hospitals, 1 national center, 2 public hospitals, and 1 private hospital) belong to this network and collaborate to facilitate clinical trials. The Care and Treatment Site Registry (CTSR) calculates and reports the proportion of patients with neuromuscular diseases at the cooperating sites. In total, there are 5,589 patients with neuromuscular diseases in Japan, and the proportion of patients with each disease is as follows: DMD, 29 %; myotonic dystrophy type 1, 23 %; limb girdle muscular dystrophy, 11 %; Becker muscular dystrophy, 10 %. We work jointly to share updated health care information and standardized evaluations of clinical outcomes as well. The collaboration with the patient registry (CTSR) allows the MDCTN to recruit DMD participants with specific mutations and conditions in a remarkably short period of time. Having a network that operates at the national level is important to address the corresponding national issues. Thus, our network will be able to contribute to international research activity, which can lead to

  16. Clinical Trials

    Medline Plus

    Clinical trials are research studies that explore whether a medical strategy, treatment, or ... humans.

  17. Predictors of Missed Research Appointments in a Randomized Placebo-Controlled Trial

    Directory of Open Access Journals (Sweden)

    Stéphanie J.E. Becker

    2014-09-01

    Younger patients with no college education, who believe their health can be controlled, are more likely to miss a research appointment when enrolled in a randomized placebo injection-controlled trial.

  18. Cognitive Bias in the Verification and Validation of Space Flight Systems

    Science.gov (United States)

    Larson, Steve

    2012-01-01

    Cognitive bias is generally recognized as playing a significant role in virtually all domains of human decision making. Insight into this role is informally built into many of the system engineering practices employed in the aerospace industry. The review process, for example, typically has features that help to counteract the effect of bias. This paper presents a discussion of how commonly recognized biases may affect the verification and validation process. Verifying and validating a system is arguably more challenging than development, both technically and cognitively. Whereas there may be a relatively limited number of options available for the design of a particular aspect of a system, there is a virtually unlimited number of potential verification scenarios that may be explored. The probability of any particular scenario occurring in operations is typically very difficult to estimate, which increases reliance on judgment that may be affected by bias. Implementing a verification activity often presents technical challenges that, if they can be overcome at all, often result in a departure from actual flight conditions (e.g., 1-g testing, simulation, time compression, artificial fault injection) that may raise additional questions about the meaningfulness of the results, and create opportunities for the introduction of additional biases. In addition to mitigating the biases it can introduce directly, the verification and validation process must also overcome the cumulative effect of biases introduced during all previous stages of development. A variety of cognitive biases will be described, with research results for illustration. A handful of case studies will be presented that show how cognitive bias may have affected the verification and validation process on recent JPL flight projects, identify areas of strength and weakness, and identify potential changes or additions to commonly used techniques that could provide a more robust verification and validation of

  19. Imputation of microsatellite alleles from dense SNP genotypes for parental verification

    Directory of Open Access Journals (Sweden)

    Matthew McClure

    2012-08-01

    Microsatellite (MS) markers have recently been used for parental verification and are still the international standard despite higher cost, error rate, and turnaround time compared with single nucleotide polymorphism (SNP)-based assays. Despite domestic and international interest from producers and research communities, no viable means currently exist to verify parentage for an individual unless all familial connections were analyzed using the same DNA marker type (MS or SNP). A simple and cost-effective method was devised to impute MS alleles from SNP haplotypes within breeds. For some MS, imputation results may allow inference across breeds. A total of 347 dairy cattle representing 4 dairy breeds (Brown Swiss, Guernsey, Holstein, and Jersey) were used to generate reference haplotypes. This approach has been verified (>98% accurate) for imputing the International Society for Animal Genetics (ISAG) recommended panel of 12 MS for cattle parentage verification across a validation set of 1,307 dairy animals. Implementation of this method will allow producers and breed associations to transition to SNP-based parentage verification utilizing MS genotypes from historical data on parents where SNP genotypes are missing. This approach may be applicable to additional cattle breeds and to other species that wish to migrate from MS-based to SNP-based parental verification.
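The haplotype-based imputation described above can be sketched as a lookup from phased SNP haplotypes spanning the MS locus to the MS alleles observed with them in the reference animals. The marker strings, allele sizes, and nearest-match fallback below are illustrative assumptions, not the authors' actual pipeline.

```python
from collections import Counter

def build_reference(phased):
    """Map each SNP haplotype around the MS locus to the MS allele most
    frequently co-inherited with it in the reference panel."""
    votes = {}
    for snp_hap, ms_allele in phased:
        votes.setdefault(snp_hap, Counter())[ms_allele] += 1
    return {hap: c.most_common(1)[0][0] for hap, c in votes.items()}

def impute(ref, snp_hap):
    """Exact lookup first; otherwise fall back to the closest reference
    haplotype by Hamming distance (ties resolved arbitrarily)."""
    if snp_hap in ref:
        return ref[snp_hap]
    best = min(ref, key=lambda h: sum(a != b for a, b in zip(h, snp_hap)))
    return ref[best]

# Toy reference panel: (phased SNP haplotype, MS allele length in bp)
panel = [("01101", 183), ("01101", 183), ("01100", 183),
         ("10011", 191), ("10010", 191)]
ref = build_reference(panel)
print(impute(ref, "01101"))  # 183
print(impute(ref, "10011"))  # 191
```

Imputing both of an animal's haplotypes yields a predicted MS genotype that can be checked against the historical MS genotypes of its recorded parents.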

  20. A research on the verification of models used in the computational codes and the uncertainty reduction method for the containment integrity evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Moo Hwan; Seo, Kyoung Woo [POSTECH, Pohang (Korea, Republic of)

    2001-03-15

    In the probabilistic approach, the calculated CCFPs of all the scenarios were zero, meaning that for all the accident scenarios the maximum pressure load induced by DCH was expected to be lower than the containment failure pressure obtained from the fragility curve. Thus, it can be stated that the KSNP containment is robust to the DCH threat. The uncertainty of the computer codes used in the two (deterministic and probabilistic) approaches was reduced by sensitivity tests and by verification and comparison of the DCH models in each code. The aim of this research was therefore to evaluate the DCH issue comprehensively and to establish an accurate methodology for assessing the containment integrity of operating PWRs in Korea.

  1. EURATOM safeguards efforts in the development of spent fuel verification methods by non-destructive assay

    Energy Technology Data Exchange (ETDEWEB)

    Matloch, L.; Vaccaro, S.; Couland, M.; De Baere, P.; Schwalbach, P. [Euratom, Communaute europeenne de l' energie atomique - CEEA (European Commission (EC))

    2015-07-01

    The back end of the nuclear fuel cycle continues to develop. The European Commission, particularly the Nuclear Safeguards Directorate of the Directorate General for Energy, implements Euratom safeguards and needs to adapt to this situation. The verification methods for spent nuclear fuel, which EURATOM inspectors can use, require continuous improvement. Whereas the Euratom on-site laboratories provide accurate verification results for fuel undergoing reprocessing, the situation is different for spent fuel which is destined for final storage. In particular, new needs arise from the increasing number of cask loadings for interim dry storage and the advanced plans for the construction of encapsulation plants and geological repositories. Various scenarios present verification challenges. In this context, EURATOM Safeguards, often in cooperation with other stakeholders, is committed to further improvement of NDA methods for spent fuel verification. In this effort EURATOM plays various roles, ranging from definition of inspection needs to direct participation in development of measurement systems, including support of research in the framework of international agreements and via the EC Support Program to the IAEA. This paper presents recent progress in selected NDA methods. These methods have been conceived to satisfy different spent fuel verification needs, ranging from attribute testing to pin-level partial defect verification. (authors)

  2. A process improvement model for software verification and validation

    Science.gov (United States)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  3. Face Verification for Mobile Personal Devices

    NARCIS (Netherlands)

    Tao, Q.

    2009-01-01

    In this thesis, we presented a detailed study of the face verification problem on the mobile device, covering every component of the system. The study includes face detection, registration, normalization, and verification. Furthermore, the information fusion problem is studied to verify face

  4. Gender Verification of Female Olympic Athletes.

    Science.gov (United States)

    Dickinson, Barry D.; Genel, Myron; Robinowitz, Carolyn B.; Turner, Patricia L.; Woods, Gary L.

    2002-01-01

    Gender verification of female athletes has long been criticized by geneticists, endocrinologists, and others in the medical community. Recently, the International Olympic Committee's Athletic Commission called for discontinuation of mandatory laboratory-based gender verification of female athletes. This article discusses normal sexual…

  5. VerifEYE: a real-time meat inspection system for the beef processing industry

    Science.gov (United States)

    Kocak, Donna M.; Caimi, Frank M.; Flick, Rick L.; Elharti, Abdelmoula

    2003-02-01

    Described is a real-time meat inspection system developed for the beef processing industry by eMerge Interactive. Designed to detect and localize trace amounts of contamination on cattle carcasses in the packing process, the system affords the beef industry an accurate, high speed, passive optical method of inspection. Using a method patented by the United States Department of Agriculture and Iowa State University, the system takes advantage of fluorescing chlorophyll found in the animal's diet, and therefore in the digestive tract, to allow detection and imaging of contaminated areas that may harbor potentially dangerous microbial pathogens. Featuring real-time image processing and documentation of performance, the system can be easily integrated into a processing facility's Hazard Analysis and Critical Control Point quality assurance program. This paper describes the VerifEYE carcass inspection and removal verification system. Results indicating the feasibility of the method, as well as field data collected using a prototype system during four university trials conducted in 2001, are presented. Two successful demonstrations using the prototype system were held at a major U.S. meat processing facility in early 2002.

  6. Warhead verification as inverse problem: Applications of neutron spectrum unfolding from organic-scintillator measurements

    Science.gov (United States)

    Lawrence, Chris C.; Febbraro, Michael; Flaska, Marek; Pozzi, Sara A.; Becchetti, F. D.

    2016-08-01

    Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as for other treaty-verification challenges.
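
The unfolding described above amounts to inverting a detector response matrix, which is ill-posed; a standard way to tame the instability is Tikhonov regularization. A minimal sketch, using a hypothetical well-behaved 3-bin response matrix rather than real scintillator data:

```python
import numpy as np

def unfold_tikhonov(R, m, lam=1e-6):
    """Recover a spectrum phi from a measurement m = R @ phi by
    solving (R^T R + lam I) phi = R^T m; lam damps the small
    singular values that make the bare inversion unstable."""
    n = R.shape[1]
    return np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ m)

# Hypothetical 3x3 response matrix and true spectrum (illustrative)
R = np.array([[1.0, 0.3, 0.1],
              [0.2, 1.0, 0.3],
              [0.1, 0.2, 1.0]])
phi_true = np.array([4.0, 2.0, 1.0])
m = R @ phi_true                 # simulated detector counts
phi = unfold_tikhonov(R, m)
print(np.round(phi, 3))          # close to [4. 2. 1.]
```

With a genuinely ill-conditioned response matrix, the choice of `lam` trades off bias against noise amplification.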

  7. Assessing the impact of user-centered research on a clinical trial eHealth tool via counterbalanced research design.

    Science.gov (United States)

    Atkinson, Nancy L; Massett, Holly A; Mylks, Christy; McCormack, Lauren A; Kish-Doto, Julia; Hesse, Bradford W; Wang, Min Qi

    2011-01-01

    Informatics applications have the potential to improve participation in clinical trials, but their design must be based on user-centered research. This research used a fully counterbalanced experimental design to investigate the effect of changes made to the original version of a website, http://BreastCancerTrials.org/, and to confirm that the revised version addressed and reinforced patients' needs and expectations. Participants included women who had received a breast cancer diagnosis within the last 5 years (N=77). They were randomized into two groups: one group used and reviewed the original version first followed by the redesigned version, and the other group used and reviewed them in reverse order. The study used both quantitative and qualitative measures. During use, participants' click paths and general reactions were observed. After use, participants were asked to answer survey items and open-ended questions to indicate their reactions, which version they preferred, and which better met their needs and expectations. Overall, the revised version of the site was preferred and perceived to be clearer, easier to navigate, more trustworthy and credible, and more private and safe. However, users who viewed the original version last had similar attitudes toward both versions. By applying research findings to the redesign of a website for clinical trial searching, it was possible to re-engineer the interface to better support patients' decisions to participate in clinical trials. The mechanisms of action in this case appeared to revolve around creating an environment that supported a sense of personal control and decisional autonomy.

  8. Reload core safety verification

    International Nuclear Information System (INIS)

    Svetlik, M.; Minarcin, M.

    2003-01-01

    This paper presents a brief look at the process of reload core safety evaluation and verification in the Slovak Republic. It gives an overview of experimental verification of selected nuclear parameters in the course of physics testing during reactor start-up. A comparison of IAEA recommendations and testing procedures at Slovak and European nuclear power plants of similar design is included. The introduction of two-level criteria for the evaluation of tests represents an effort to formulate the relation between safety evaluation and measured values (Authors)

  9. Validation of Embedded System Verification Models

    NARCIS (Netherlands)

    Marincic, J.; Mader, Angelika H.; Wieringa, Roelf J.

    The result of a model-based requirements verification shows that the model of a system satisfies (or not) formalised system requirements. The verification result is correct only if the model represents the system adequately. No matter what modelling technique we use, what precedes the model

  10. On Verification Modelling of Embedded Systems

    NARCIS (Netherlands)

    Brinksma, Hendrik; Mader, Angelika H.

    Computer-aided verification of embedded systems hinges on the availability of good verification models of the systems at hand. Such models must be much simpler than full design models or specifications to be of practical value, because of the unavoidable combinatorial complexities in the

  11. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT: MOBILE SOURCE RETROFIT AIR POLLUTION CONTROL DEVICES: CLEAN CLEAR FUEL TECHNOLOGIES, INC.’S, UNIVERSAL FUEL CELL

    Science.gov (United States)

    The U.S. EPA's Office of Research and Development operates the Environmental Technology Verification (ETV) program to facilitate the deployment of innovative technologies through performance verification and information dissemination. Congress funds ETV in response to the belief ...

  12. Compositional verification of real-time systems using Ecdar

    DEFF Research Database (Denmark)

    David, Alexandre; Larsen, Kim Guldstrand; Legay, Axel

    2012-01-01

    We present a specification theory for timed systems implemented in the Ecdar tool. We illustrate the operations of the specification theory on a running example, showing the models and verification checks. To demonstrate the power of compositional verification, we perform an in-depth case study of a leader election protocol, modeling it in Ecdar as timed input/output automata specifications and performing both monolithic and compositional verification of two interesting properties on it. We compare the execution time of the compositional to the classical verification, showing a huge difference...

  13. Disarmament Verification - the OPCW Experience

    International Nuclear Information System (INIS)

    Lodding, J.

    2010-01-01

    The Chemical Weapons Convention is the only multilateral treaty that bans completely an entire category of weapons of mass destruction under international verification arrangements. Possessor States, i.e. those that have chemical weapons stockpiles at the time of becoming party to the CWC, commit to destroying these. All States undertake never to acquire chemical weapons and not to help other States acquire such weapons. The CWC foresees time-bound chemical disarmament. The deadlines for destruction for early entrants to the CWC are provided in the treaty. For late entrants, the Conference of States Parties intervenes to set destruction deadlines. One of the unique features of the CWC is thus the regime for verifying destruction of chemical weapons. But how can you design a system for verification at military sites, while protecting military restricted information? What degree of assurance is considered sufficient in such circumstances? How do you divide the verification costs? How do you deal with production capability and initial declarations of existing stockpiles? The founders of the CWC had to address these and other challenges in designing the treaty. Further refinement of the verification system has followed since the treaty opened for signature in 1993 and since inspection work was initiated following entry-into-force of the treaty in 1997. Most of this work concerns destruction at the two large possessor States, Russia and the United States. Perhaps some of the lessons learned from the OPCW experience may be instructive in a future verification regime for nuclear weapons. (author)

  14. Verification of Chemical Weapons Destruction

    International Nuclear Information System (INIS)

    Lodding, J.

    2010-01-01

    The Chemical Weapons Convention is the only multilateral treaty that bans completely an entire category of weapons of mass destruction under international verification arrangements. Possessor States, i.e. those that have chemical weapons stockpiles at the time of becoming party to the CWC, commit to destroying these. All States undertake never to acquire chemical weapons and not to help other States acquire such weapons. The CWC foresees time-bound chemical disarmament. The deadlines for destruction for early entrants to the CWC are provided in the treaty. For late entrants, the Conference of States Parties intervenes to set destruction deadlines. One of the unique features of the CWC is thus the regime for verifying destruction of chemical weapons. But how can you design a system for verification at military sites, while protecting military restricted information? What degree of assurance is considered sufficient in such circumstances? How do you divide the verification costs? How do you deal with production capability and initial declarations of existing stockpiles? The founders of the CWC had to address these and other challenges in designing the treaty. Further refinement of the verification system has followed since the treaty opened for signature in 1993 and since inspection work was initiated following entry-into-force of the treaty in 1997. Most of this work concerns destruction at the two large possessor States, Russia and the United States. Perhaps some of the lessons learned from the OPCW experience may be instructive in a future verification regime for nuclear weapons. (author)

  15. A Model for Collaborative Runtime Verification

    NARCIS (Netherlands)

    Testerink, Bas; Bulling, Nils; Dastani, Mehdi

    2015-01-01

    Runtime verification concerns checking whether a system execution satisfies a given property. In this paper we propose a model for collaborative runtime verification where a network of local monitors collaborates in order to verify properties of the system. A local monitor has only a local view on
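
The collaborative scheme can be illustrated with a minimal sketch: each local monitor checks a safety property over its own event stream, and a combined verdict is the conjunction of the local verdicts. The property ("no send after close") and the event names are assumptions for illustration, not taken from the paper.

```python
class LocalMonitor:
    """Checks the safety property 'no send after close' on a local
    event stream; once violated, the verdict is final."""

    def __init__(self):
        self.closed = False
        self.ok = True

    def observe(self, event):
        if event == "close":
            self.closed = True
        elif event == "send" and self.closed:
            self.ok = False          # violation: send after close
        return self.ok

def collaborate(verdicts):
    """Combine local verdicts: the global property holds only if
    every local monitor's verdict holds (conjunction)."""
    return all(verdicts)

m1, m2 = LocalMonitor(), LocalMonitor()
for e in ["send", "close"]:
    m1.observe(e)
for e in ["send", "close", "send"]:  # this stream violates the property
    m2.observe(e)
print(collaborate([m1.ok, m2.ok]))   # False
```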

  16. HDM/PASCAL Verification System User's Manual

    Science.gov (United States)

    Hare, D.

    1983-01-01

    The HDM/Pascal verification system is a tool for proving the correctness of programs written in PASCAL and specified in the Hierarchical Development Methodology (HDM). This document assumes an understanding of PASCAL, HDM, program verification, and the STP system. The steps toward verification which this tool provides are parsing programs and specifications, checking the static semantics, and generating verification conditions. Some support functions are provided such as maintaining a data base, status management, and editing. The system runs under the TOPS-20 and TENEX operating systems and is written in INTERLISP. However, no knowledge is assumed of these operating systems or of INTERLISP. The system requires three executable files, HDMVCG, PARSE, and STP. Optionally, the editor EMACS should be on the system in order for the editor to work. The file HDMVCG is invoked to run the system. The files PARSE and STP are used as lower forks to perform the functions of parsing and proving.

  17. Specification and Verification of Hybrid System

    International Nuclear Information System (INIS)

    Widjaja, Belawati H.

    1997-01-01

    Hybrid systems are reactive systems that intermix two kinds of components: discrete components and continuous components. The continuous components, usually called plants, are subject to disturbances that cause the state variables of the system to change continuously according to physical laws and/or control laws. The discrete components can be digital computers, sensors and actuators controlled by programs. These programs are designed to select, control and supervise the behavior of the continuous components. Specification and verification of hybrid systems has recently become an active area of research in both computer science and control engineering, and many papers concerning hybrid systems have been published. This paper gives a design methodology for hybrid systems as an example of the specification and verification of hybrid systems. The design methodology is based on cooperation between two disciplines, control engineering and computer science, and structures the design around control loops and decision loops. The external behavior of the control loops is specified in a notation understandable by both disciplines. The design of the control loops, which employs the theory of differential equations, is done by control engineers, and its correctness is guaranteed analytically or experimentally by control engineers. The decision loops are designed in computing science based on the specifications of the control loops. The verification of system requirements can be done by computing scientists using a formal reasoning mechanism. To illustrate the proposed design, a problem of balancing an inverted pendulum, a popular experimental device in control theory, is considered, and the Mean Value Calculus is chosen as the formal notation for specifying the control loops and designing the decision loops
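
The inverted-pendulum example can be sketched as a discrete decision loop acting on the linearized continuous plant; the gains, sampling step and parameters below are illustrative, and the formal Mean Value Calculus treatment of the paper is not reproduced here.

```python
# Linearized inverted pendulum: theta'' = (g/l)*theta + u,
# stabilized by a PD control law sampled every dt seconds.
g_over_l = 9.8               # illustrative plant parameter
kp, kd = 30.0, 10.0          # illustrative controller gains
dt = 0.001                   # sampling step of the decision loop

theta, omega = 0.1, 0.0      # initial tilt (rad) and angular rate
for _ in range(5000):        # simulate 5 s
    u = -(kp * theta + kd * omega)   # discrete control decision
    alpha = g_over_l * theta + u     # continuous plant dynamics
    omega += alpha * dt              # explicit Euler integration
    theta += omega * dt

print(abs(theta) < 0.01)     # True: the pendulum settles upright
```

The closed loop is stable because the chosen gains place both poles of theta'' + kd*theta' + (kp - g/l)*theta = 0 in the left half-plane.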

  18. Verification of RESRAD-build computer code, version 3.1

    International Nuclear Information System (INIS)

    2003-01-01

    RESRAD-BUILD is a computer model for analyzing the radiological doses resulting from the remediation and occupancy of buildings contaminated with radioactive material. It is part of a family of codes that includes RESRAD, RESRAD-CHEM, RESRAD-RECYCLE, RESRAD-BASELINE, and RESRAD-ECORISK. The RESRAD-BUILD models were developed and codified by Argonne National Laboratory (ANL); version 1.5 of the code and the user's manual were publicly released in 1994. The original version of the code was written for the Microsoft DOS operating system. However, subsequent versions of the code were written for the Microsoft Windows operating system. The purpose of the present verification task (which includes validation as defined in the standard) is to provide an independent review of the latest version of RESRAD-BUILD under the guidance provided by ANSI/ANS-10.4 for verification and validation of existing computer programs. This approach consists of a posteriori V and V review which takes advantage of available program development products as well as user experience. The purpose, as specified in ANSI/ANS-10.4, is to determine whether the program produces valid responses when used to analyze problems within a specific domain of applications, and to document the level of verification. The culmination of these efforts is the production of this formal Verification Report. The first step in performing the verification of an existing program was the preparation of a Verification Review Plan. The review plan consisted of identifying: Reason(s) why a posteriori verification is to be performed; Scope and objectives for the level of verification selected; Development products to be used for the review; Availability and use of user experience; and Actions to be taken to supplement missing or unavailable development products. The purpose, scope and objectives for the level of verification selected are described in this section of the Verification Report. 
The development products that were used

  19. Compliance revisited: pharmaceutical drug trials in the era of the contract research organization.

    Science.gov (United States)

    Jonvallen, Petra

    2009-12-01

    Over the past decade, the management of clinical trials of pharmaceuticals has become a veritable industry, as evidenced by the emergence and proliferation of contract research organizations (CROs) that co-ordinate and monitor trials. This article focuses on work performed by one CRO involved in the introduction of new software, modelled on industrial production processes, into clinical trial practices. It investigates how this new management technique relates to the work performed in the clinic to ensure that trial participants comply with the protocol. Using an analytical distinction between 'classical' management work and invisible work, the article contextualizes the meaning of compliance in the clinic and suggests that the work involved in producing compliance should be taken into consideration by those concerned with the validity of trials, as clinical trials are put under private industrial management. The article builds on participant observation at a Swedish university hospital and interviews with nurses, dieticians, doctors and a software engineer, all part of a team involved in pharmaceutical drug trials of a potential obesity drug.

  20. SU-E-T-435: Development and Commissioning of a Complete System for In-Vivo Dosimetry and Range Verification in Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Samuel, D [Universite catholique de Louvain, Louvain-la-neuve, BW (Belgium); Testa, M; Park, Y [Massachusetts General Hospital, Boston, MA (United States); Schneider, R; Moteabbed, M [General Hospital, Boston, MA (United States); Janssens, G; Prieels, D [Ion Beam Applications, Louvain-la-neuve, Brabant Wallon (Belgium); Orban de Xivry, J [Universite catholique de Louvain, Louvain-la-neuve, BW (Belgium); Lu, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Bentefour, E

    2014-06-01

    Purpose: In-vivo dose and beam range verification in proton therapy could play significant roles in proton treatment validation and improvement. In-vivo beam range verification, in particular, could enable new treatment techniques, one of which, for example, could be the use of anterior fields for prostate treatment instead of the opposed lateral fields used in current practice. We have developed and commissioned an integrated system with hardware, software and workflow protocols to provide a complete solution simultaneously for both in-vivo dosimetry and range verification in proton therapy. Methods: The system uses a matrix of diodes, up to 12 in total, separable into three groups for flexibility in application. A special amplifier was developed to capture extremely small signals from very low proton beam current. The software was developed within iMagX, a general platform for image processing in radiation therapy applications. The range determination exploits the inherent relationship between the internal range modulation clock of the proton therapy system and the radiological depth at the point of measurement. The commissioning of the system, for in-vivo dosimetry and for range verification, was conducted separately using an anthropomorphic phantom. EBT films and TLDs were used for dose comparisons, and a range scan of the beam distal fall-off was used as ground truth for range verification. Results: For in-vivo dose measurement, the results were in agreement with TLDs and EBT films and were within 3% of treatment planning calculations. For range verification, a precision of 0.5 mm was achieved in homogeneous phantoms, and a precision of 2 mm for the anthropomorphic pelvic phantom, except at points with significant range mixing. Conclusion: We completed the commissioning of our system for in-vivo dosimetry and range verification in proton therapy. The results suggest that the system is ready for clinical trials on patients.

  1. SU-E-T-435: Development and Commissioning of a Complete System for In-Vivo Dosimetry and Range Verification in Proton Therapy

    International Nuclear Information System (INIS)

    Samuel, D; Testa, M; Park, Y; Schneider, R; Moteabbed, M; Janssens, G; Prieels, D; Orban de Xivry, J; Lu, H; Bentefour, E

    2014-01-01

    Purpose: In-vivo dose and beam range verification in proton therapy could play significant roles in proton treatment validation and improvement. In-vivo beam range verification, in particular, could enable new treatment techniques, one of which, for example, could be the use of anterior fields for prostate treatment instead of the opposed lateral fields used in current practice. We have developed and commissioned an integrated system with hardware, software and workflow protocols to provide a complete solution simultaneously for both in-vivo dosimetry and range verification in proton therapy. Methods: The system uses a matrix of diodes, up to 12 in total, separable into three groups for flexibility in application. A special amplifier was developed to capture extremely small signals from very low proton beam current. The software was developed within iMagX, a general platform for image processing in radiation therapy applications. The range determination exploits the inherent relationship between the internal range modulation clock of the proton therapy system and the radiological depth at the point of measurement. The commissioning of the system, for in-vivo dosimetry and for range verification, was conducted separately using an anthropomorphic phantom. EBT films and TLDs were used for dose comparisons, and a range scan of the beam distal fall-off was used as ground truth for range verification. Results: For in-vivo dose measurement, the results were in agreement with TLDs and EBT films and were within 3% of treatment planning calculations. For range verification, a precision of 0.5 mm was achieved in homogeneous phantoms, and a precision of 2 mm for the anthropomorphic pelvic phantom, except at points with significant range mixing. Conclusion: We completed the commissioning of our system for in-vivo dosimetry and range verification in proton therapy. The results suggest that the system is ready for clinical trials on patients
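
The idea of locating the beam's distal fall-off can be sketched with a synthetic depth-dose profile; the 50% fall-off criterion and all numbers below are illustrative, not the clock-based method of the abstract.

```python
import numpy as np

def range_r50(depths, dose):
    """Estimate the beam range as the depth where the distal
    fall-off crosses 50% of the maximum dose, using linear
    interpolation between the two bracketing samples."""
    half = 0.5 * dose.max()
    i = np.argmax(dose < half)           # first sample below 50%
    d0, d1 = depths[i - 1], depths[i]
    f0, f1 = dose[i - 1], dose[i]
    return d0 + (f0 - half) * (d1 - d0) / (f0 - f1)

# Synthetic sigmoid fall-off centred at 7.0 cm (illustrative)
depths = np.arange(0.0, 10.0, 0.25)
dose = 1.0 / (1.0 + np.exp((depths - 7.0) / 0.3))
print(round(range_r50(depths, dose), 2))  # 7.0
```

Comparing such an estimate against a measured range scan is one way to obtain the ground-truth check the abstract describes.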

  2. Evaluating Data Abstraction Assistant, a novel software application for data abstraction during systematic reviews: protocol for a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Ian J. Saldanha

    2016-11-01

    Background: Data abstraction, a critical systematic review step, is time-consuming and prone to errors. Current standards for approaches to data abstraction rest on a weak evidence base. We developed the Data Abstraction Assistant (DAA), a novel software application designed to facilitate the abstraction process by allowing users to (1) view study article PDFs juxtaposed to electronic data abstraction forms linked to a data abstraction system, (2) highlight (or "pin") the location of the text in the PDF, and (3) copy relevant text from the PDF into the form. We describe the design of a randomized controlled trial (RCT) that compares the relative effectiveness of (A) DAA-facilitated single abstraction plus verification by a second person, (B) traditional (non-DAA-facilitated) single abstraction plus verification by a second person, and (C) traditional independent dual abstraction plus adjudication, to ascertain the accuracy and efficiency of abstraction. Methods: This is an online, randomized, three-arm, crossover trial. We will enroll 24 pairs of abstractors (i.e., a sample size of 48 participants), each pair comprising one less and one more experienced abstractor. Pairs will be randomized to abstract data from six articles, two under each of the three approaches. Abstractors will complete pre-tested data abstraction forms using the Systematic Review Data Repository (SRDR), an online data abstraction system. The primary outcomes are (1) the proportion of data items abstracted that constitute an error (compared with an answer key) and (2) the total time taken to complete abstraction (by the two abstractors in the pair, including verification and/or adjudication). Discussion: The DAA trial uses a practical design to test a novel software application as a tool to help improve the accuracy and efficiency of the data abstraction process during systematic reviews. Findings from the DAA trial will provide much-needed evidence to strengthen current recommendations for data
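
The trial's two primary outcomes are straightforward to compute once abstractions are collected; a minimal sketch, with hypothetical item names and timings:

```python
def error_proportion(abstracted, answer_key):
    """Outcome 1: proportion of abstracted data items that differ
    from the answer key (item names here are hypothetical)."""
    errors = sum(1 for k in answer_key
                 if abstracted.get(k) != answer_key[k])
    return errors / len(answer_key)

def total_time(minutes_abstractor1, minutes_abstractor2,
               minutes_resolution):
    """Outcome 2: total time for the pair, including the
    verification or adjudication step."""
    return minutes_abstractor1 + minutes_abstractor2 + minutes_resolution

key = {"n_randomized": 120, "mean_age": 54.2, "blinded": True}
abstracted = {"n_randomized": 120, "mean_age": 45.2, "blinded": True}
print(error_proportion(abstracted, key))   # 1/3: one item wrong
print(total_time(35, 20, 10))              # 65
```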

  3. Priorities for methodological research on patient and public involvement in clinical trials: A modified Delphi process.

    Science.gov (United States)

    Kearney, Anna; Williamson, Paula; Young, Bridget; Bagley, Heather; Gamble, Carrol; Denegri, Simon; Muir, Delia; Simon, Natalie A; Thomas, Stephen; Elliot, Jim T; Bulbeck, Helen; Crocker, Joanna C; Planner, Claire; Vale, Claire; Clarke, Mike; Sprosen, Tim; Woolfall, Kerry

    2017-12-01

    Despite increasing international interest, there is a lack of evidence about the most efficient, effective and acceptable ways to implement patient and public involvement (PPI) in clinical trials. To identify the priorities of UK PPI stakeholders for methodological research to help resolve uncertainties about PPI in clinical trials. A modified Delphi process including a two round online survey and a stakeholder consensus meeting. In total, 237 people registered of whom 219 (92%) completed the first round. One hundred and eighty-seven of 219 (85%) completed the second; 25 stakeholders attended the consensus meeting. Round 1 of the survey comprised 36 topics; 42 topics were considered in round 2 and at the consensus meeting. Approximately 96% of meeting participants rated the top three topics as equally important. These were as follows: developing strong and productive working relationships between researchers and PPI contributors; exploring PPI practices in selecting trial outcomes of importance to patients; and a systematic review of PPI activity to improve the accessibility and usefulness of trial information (eg participant information sheets) for participants. The prioritized methodological research topics indicate important areas of uncertainty about PPI in trials. Addressing these uncertainties will be critical to enhancing PPI. Our findings should be used in the planning and funding of PPI in clinical trials to help focus research efforts and minimize waste. © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.

  4. FEFTRA™ verification. Update 2013

    Energy Technology Data Exchange (ETDEWEB)

    Loefman, J. [VTT Technical Research Centre of Finland, Espoo (Finland); Meszaros, F. [The Relief Lab., Harskut, (Hungary)

    2013-12-15

    FEFTRA is a finite element program package developed at VTT for the analyses of groundwater flow in Posiva's site evaluation programme, which seeks a final repository for spent nuclear fuel in Finland. The code is capable of modelling steady-state or transient groundwater flow, solute transport and heat transfer as coupled or separate phenomena. Being a typical research tool used only by its developers, the FEFTRA code long lacked a competent testing system and precise documentation of its verification. In 2006 a project was launched with the objective of reorganising all the material related to the existing verification cases and placing it into the FEFTRA program path under the version-control system. The work also included development of a new testing system, which automatically calculates the selected cases, checks the new results against the old approved results and constructs a summary of the test run. All the existing cases were gathered together, checked and added into the new testing system. The documentation of each case was rewritten with the LaTeX document preparation system and added into the testing system in such a way that the whole test documentation (this report) can easily be generated in PostScript or PDF format. The current report is the updated version of the verification report published in 2007. At the moment the report mainly includes the cases related to the testing of the primary result quantities (i.e. hydraulic head, pressure, salinity concentration, temperature). The selected cases, however, represent typical hydrological applications in which the program package has been and will be employed in Posiva's site evaluation programme, i.e. the simulations of groundwater flow, solute transport and heat transfer as separate or coupled phenomena. The comparison of the FEFTRA results to analytical, semianalytical and/or other numerical solutions proves the capability of FEFTRA to simulate such problems

  5. Training Needs of Clinical and Research Professionals to Optimize Minority Recruitment and Retention in Cancer Clinical Trials.

    Science.gov (United States)

    Niranjan, Soumya J; Durant, Raegan W; Wenzel, Jennifer A; Cook, Elise D; Fouad, Mona N; Vickers, Selwyn M; Konety, Badrinath R; Rutland, Sarah B; Simoni, Zachary R; Martin, Michelle Y

    2017-08-03

    The study of disparities in minority recruitment to cancer clinical trials has focused primarily on inquiries among minority patient populations. However, clinical trial recruitment is complex and requires a broader appreciation of the multiple factors that influence minority participation. One area that has received little attention is minority recruitment training for professionals who assume various roles in the clinical trial recruitment process. Therefore, we assessed the perspectives of cancer center clinical and research personnel on their training and education needs toward minority recruitment for cancer clinical trials. Ninety-one qualitative interviews were conducted at five U.S. cancer centers among four stakeholder groups: cancer center leaders, principal investigators, referring clinicians, and research staff. Interviews were recorded and transcribed. Qualitative analyses focused on response data related to training for minority recruitment for cancer clinical trials. Four prominent themes were identified: (1) Research personnel are not currently being trained to focus on recruitment and retention of minority populations; (2) Training for minority recruitment and retention provides for a specific focus on factors influencing minority research participation; (3) Training on cultural awareness may help to bridge cultural gaps between potential minority participants and research professionals; (4) Views differ regarding the importance of research personnel training designed to focus on recruitment of minority populations. There is a lack of systematic training for minority recruitment. Many stakeholders acknowledged the benefits of minority recruitment training and welcomed training that focuses on increasing cultural awareness to increase the participation of minorities in cancer clinical trials.

  6. Employing open/hidden administration in psychotherapy research: A randomized-controlled trial of expressive writing

    Science.gov (United States)

    Tondorf, Theresa; Kaufmann, Lisa-Katrin; Degel, Alexander; Locher, Cosima; Birkhäuer, Johanna; Gerger, Heike; Ehlert, Ulrike

    2017-01-01

    Psychotherapy has been shown to be effective, but efforts to prove specific effects by placebo-controlled trials have been practically and conceptually hampered. We propose that adopting open/hidden designs from placebo research would offer a possible way to establish specificity in psychotherapy. Therefore, we tested the effects of providing opposing treatment rationales in an online expressive writing intervention on affect in healthy subjects. Results indicate that it was possible to conduct the expressive writing intervention both covertly and openly, but that participants in the hidden administration condition did not fully benefit from the otherwise effective expressive writing intervention in the long run. Effect sizes between open and hidden administration groups were comparable to pre-post effect sizes of the intervention. While this finding is important for the understanding of psychotherapy's effects per se, it also proves that alternative research approaches to establish specificity are feasible and informative in psychotherapy research. Trial registration: German Clinical Trials Register DRKS00009428 PMID:29176768

  7. You Can't See the Real Me: Attachment Avoidance, Self-Verification, and Self-Concept Clarity.

    Science.gov (United States)

    Emery, Lydia F; Gardner, Wendi L; Carswell, Kathleen L; Finkel, Eli J

    2018-03-01

    Attachment shapes people's experiences in their close relationships and their self-views. Although attachment avoidance and anxiety both undermine relationships, past research has primarily emphasized detrimental effects of anxiety on the self-concept. However, as partners can help people maintain stable self-views, avoidant individuals' negative views of others might place them at risk for self-concept confusion. We hypothesized that avoidance would predict lower self-concept clarity and that less self-verification from partners would mediate this association. Attachment avoidance was associated with lower self-concept clarity (Studies 1-5), an effect that was mediated by low self-verification (Studies 2-3). The association between avoidance and self-verification was mediated by less self-disclosure and less trust in partner feedback (Study 4). Longitudinally, avoidance predicted changes in self-verification, which in turn predicted changes in self-concept clarity (Study 5). Thus, avoidant individuals' reluctance to trust or become too close to others may result in hidden costs to the self-concept.

  8. A Verification Logic for GOAL Agents

    Science.gov (United States)

    Hindriks, K. V.

    Although there has been a growing body of literature on the verification of agent programs, it has been difficult to design a verification logic for agent programs that fully characterizes such programs and connects agent programs to agent theory. The challenge is to define an agent programming language that defines a computational framework but also allows for a logical characterization useful for verification. The agent programming language GOAL was originally designed to connect agent programming to agent theory, and we present additional results here showing that GOAL agents can be fully represented by a logical theory. GOAL agents can thus be said to execute the corresponding logical theory.

  9. Secure optical verification using dual phase-only correlation

    International Nuclear Information System (INIS)

    Liu, Wei; Liu, Shutian; Zhang, Yan; Xie, Zhenwei; Liu, Zhengjun

    2015-01-01

    We introduce a security-enhanced optical verification system using dual phase-only correlation based on a novel correlation algorithm. By employing a nonlinear encoding, the inherent locks of the verification system are obtained in real-valued random distributions, and the identity keys assigned to authorized users are designed as pure phases. The verification process is implemented in two-step correlation, so only authorized identity keys can output the discriminate auto-correlation and cross-correlation signals that satisfy the preset threshold values. Compared with the traditional phase-only-correlation-based verification systems, a higher security level against counterfeiting and collisions is obtained, which is demonstrated by cryptanalysis using known attacks, such as the known-plaintext attack and the chosen-plaintext attack. Optical experiments as well as necessary numerical simulations are carried out to support the proposed verification method. (paper)
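
    The correlation step the record describes can be illustrated with standard (single-pass) phase-only correlation. The sketch below is a generic NumPy illustration, not the paper's dual-correlation algorithm, and the 0.9 acceptance threshold is an arbitrary placeholder:

    ```python
    import numpy as np

    def phase_only_correlation(f, g):
        """Phase-only correlation of two equally sized 2-D arrays.

        The cross-power spectrum is normalized to unit magnitude, so only
        phase information contributes; a matching pair yields a sharp
        correlation peak, a non-matching pair a flat, noisy surface.
        """
        F = np.fft.fft2(f)
        G = np.fft.fft2(g)
        cross = F * np.conj(G)
        cross /= np.abs(cross) + 1e-12   # keep the phase, discard the magnitude
        return np.real(np.fft.ifft2(cross))

    # A matching input correlates with itself: the peak concentrates at (0, 0).
    rng = np.random.default_rng(0)
    img = rng.standard_normal((64, 64))
    poc = phase_only_correlation(img, img)
    peak = poc.max()
    # Threshold test mimicking the verification decision step.
    authorized = peak > 0.9
    ```

    For a self-match the normalized spectrum is all ones, so the inverse FFT is a unit delta at the origin; a mismatched pair produces only low-amplitude noise.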

  10. Verification of DRAGON: the NXT tracking module

    International Nuclear Information System (INIS)

    Zkiek, A.; Marleau, G.

    2007-01-01

    The version of DRAGON-IST that has been verified for the calculation of the incremental cross sections associated with CANDU reactivity devices is version 3.04Bb that was released in 2001. Since then, various improvements were implemented in the code including the NXT: module that can track assemblies of clusters in 2-D and 3-D geometries. Here we will discuss the verification plan for the NXT: module of DRAGON, illustrate the verification procedure we selected and present our verification results. (author)

  11. Prostate Cancer Research Trial Helps John Spencer Treat His Cancer | NIH MedlinePlus the Magazine

    Science.gov (United States)

    ... Feature: Prostate Cancer Research Trial Helps John Spencer Treat His Cancer ... because of timely detection and treatment of his prostate cancer. He participated in an NIH-sponsored clinical trial. ...

  12. Technical challenges for dismantlement verification

    International Nuclear Information System (INIS)

    Olinger, C.T.; Stanbro, W.D.; Johnston, R.G.; Nakhleh, C.W.; Dreicer, J.S.

    1997-01-01

    In preparation for future nuclear arms reduction treaties, including any potential successor treaties to START I and II, the authors have been examining possible methods for bilateral warhead dismantlement verification. Warhead dismantlement verification raises significant challenges in the political, legal, and technical arenas. This discussion will focus on the technical issues raised by warhead arms control. Technical complications arise from several sources. These will be discussed under the headings of warhead authentication, chain-of-custody, dismantlement verification, non-nuclear component tracking, component monitoring, and irreversibility. The authors will discuss possible technical options to address these challenges as applied to a generic dismantlement and disposition process, identifying limitations and vulnerabilities along the way. They expect that these considerations will play a large role in any future arms reduction effort and, therefore, should be addressed in a timely fashion.

  13. Safety Verification for Probabilistic Hybrid Systems

    DEFF Research Database (Denmark)

    Zhang, Lijun; She, Zhikun; Ratschan, Stefan

    2010-01-01

    The interplay of random phenomena and continuous real-time control deserves increased attention for instance in wireless sensing and control applications. Safety verification for such systems thus needs to consider probabilistic variations of systems with hybrid dynamics. In safety verification o...... on a number of case studies, tackled using a prototypical implementation....

  14. Research ethics board approval for an international thromboprophylaxis trial.

    Science.gov (United States)

    Lutz, Kristina; Wilton, Kelly; Zytaruk, Nicole; Julien, Lisa; Hall, Richard; Harvey, Johanne; Skrobik, Yoanna; Vlahakis, Nicholas; Meade, Laurie; Matte, Andrea; Meade, Maureen; Burns, Karen; Albert, Martin; Cash, Bronwyn Barlow; Vallance, Shirley; Klinger, James; Heels-Ansdell, Diane; Cook, Deborah

    2012-06-01

    Research ethics board (REB) review of scientific protocols is essential, ensuring participants' dignity, safety, and rights. The objectives of this study were to examine the time from submission to approval, to analyze predictors of approval time, and to describe the scope of conditions from REBs evaluating an international thromboprophylaxis trial. We generated survey items through literature review and investigators' discussions, creating 4 domains: respondent and institutional demographics, the REB application process, and alternate consent models. We conducted a document analysis that involved duplicate assessment of themes from REB critique of the protocol and informed consent forms (ICF). Approval was granted from 65 REB institutions, requiring 58 unique applications. We analyzed 44 (75.9%) of 58 documents and surveys. Survey respondents completing the applications had 8 (5-12) years of experience; 77% completed 4 or more REB applications in previous 5 years. Critical care personnel were represented on 54% of REBs. The time to approval was a median (interquartile range) of 75 (42, 150) days, taking longer for sites with national research consortium membership (89.1 vs 31.0 days, P = .03). Document analysis of the application process and ICF yielded 5 themes: methodology, data management, consent procedures, cataloguing, and miscellaneous. Protocol-specific themes focused on trial implementation, external critiques, and budget. The only theme specific to the ICF was risks and benefits. The most frequent comments on the protocol and ICF were about methodology and miscellaneous issues; ICF comments also addressed study risks and benefits. More studies on methods to enhance efficiency and consistency of the REB approval processes for clinical trials are needed while still maintaining high ethical standards. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. SSN Verification Service

    Data.gov (United States)

    Social Security Administration — The SSN Verification Service is used by Java applications to execute the GUVERF02 service using the WebSphere/CICS Interface. It accepts several input data fields...

  16. Enhanced Verification Test Suite for Physics Simulation Codes

    Energy Technology Data Exchange (ETDEWEB)

    Kamm, J R; Brock, J S; Brandon, S T; Cotrell, D L; Johnson, B; Knupp, P; Rider, W; Trucano, T; Weirs, V G

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of

  17. Lessons Learned From Microkernel Verification — Specification is the New Bottleneck

    Directory of Open Access Journals (Sweden)

    Thorsten Bormer

    2012-11-01

    Full Text Available Software verification tools have become a lot more powerful in recent years. Even verification of large, complex systems is feasible, as demonstrated in the L4.verified and Verisoft XT projects. Still, functional verification of large software systems is rare – for reasons beyond the large scale of verification effort needed due to the size alone. In this paper we report on lessons learned for verification of large software systems based on the experience gained in microkernel verification in the Verisoft XT project. We discuss a number of issues that impede widespread introduction of formal verification in the software life-cycle process.

  18. The verification of DRAGON: progress and lessons learned

    International Nuclear Information System (INIS)

    Marleau, G.

    2002-01-01

    The general requirements for the verification of the legacy code DRAGON are somewhat different from those used for new codes. For example, the absence of a design manual for DRAGON makes it difficult to confirm that each part of the code performs as required, since these requirements are not explicitly spelled out for most of the DRAGON modules. In fact, conformance of the code can only be assessed, in most cases, by making sure that the contents of the DRAGON data structures, which correspond to the output generated by a module of the code, contain the adequate information. It is also possible in some cases to use the self-verification options in DRAGON to perform additional verification or to evaluate, using independent software, the performance of specific functions in the code. Here, we will describe the global verification process that was considered in order to bring DRAGON to an industry standard tool-set (IST) status. We will also discuss some of the lessons we learned in performing this verification and present some of the modifications to DRAGON that were implemented as a consequence of this verification. (author)

  19. Simulation Environment Based on the Universal Verification Methodology

    CERN Document Server

    AUTHOR|(SzGeCERN)697338

    2017-01-01

    Universal Verification Methodology (UVM) is a standardized approach to verifying integrated circuit designs, targeting Coverage-Driven Verification (CDV). It combines automatic test generation, self-checking testbenches, and coverage metrics to indicate progress in the design verification. The flow of the CDV differs from the traditional directed-testing approach. With the CDV, a testbench developer, by setting the verification goals, starts with a structured plan. Those goals are targeted further by a developed testbench, which generates legal stimuli and sends them to a device under test (DUT). The progress is measured by coverage monitors added to the simulation environment. In this way, the non-exercised functionality can be identified. Moreover, the additional scoreboards indicate undesired DUT behaviour. Such verification environments were developed for three recent ASIC and FPGA projects which have successfully implemented the new work-flow: (1) the CLICpix2 65 nm CMOS hybrid pixel readout ASIC desi...
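
    The coverage-driven flow described above is normally written in SystemVerilog/UVM; as a language-neutral sketch, the toy Python loop below mimics its three ingredients: constrained-random stimulus, a self-checking scoreboard against a reference model, and coverage bins that decide when to stop. The saturating-adder DUT is invented purely for illustration:

    ```python
    import random

    def dut_sat_add(a, b):
        """Device under test: 8-bit saturating adder (toy stand-in for an ASIC block)."""
        return min(a + b, 255)

    def ref_sat_add(a, b):
        """Golden reference model used by the scoreboard for self-checking."""
        return min(a + b, 255)

    # Coverage bins: has each interesting category of stimulus been exercised?
    coverage = {"no_overflow": False, "exact_255": False, "overflow": False}
    errors = 0
    trials = 0
    random.seed(1)

    # CDV loop: generate random stimuli until every coverage bin has been hit.
    while not all(coverage.values()):
        a, b = random.randrange(256), random.randrange(256)
        if dut_sat_add(a, b) != ref_sat_add(a, b):   # scoreboard check
            errors += 1
        s = a + b
        if s < 255:
            coverage["no_overflow"] = True
        elif s == 255:
            coverage["exact_255"] = True
        else:
            coverage["overflow"] = True
        trials += 1
    ```

    The point of the exercise is the exit condition: the test ends when coverage goals are met, not when a fixed list of directed tests runs out.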

  20. Reducing placebo exposure in trials: Considerations from the Research Roundtable in Epilepsy.

    Science.gov (United States)

    Fureman, Brandy E; Friedman, Daniel; Baulac, Michel; Glauser, Tracy; Moreno, Jonathan; Dixon-Salazar, Tracy; Bagiella, Emilia; Connor, Jason; Ferry, Jim; Farrell, Kathleen; Fountain, Nathan B; French, Jacqueline A

    2017-10-03

    The randomized controlled trial is the unequivocal gold standard for demonstrating clinical efficacy and safety of investigational therapies. Recently there have been concerns raised about prolonged exposure to placebo and ineffective therapy during the course of an add-on regulatory trial for new antiepileptic drug approval (typically ∼6 months in duration), due to the potential risks of continued uncontrolled epilepsy for that period. The first meeting of the Research Roundtable in Epilepsy on May 19-20, 2016, focused on "Reducing placebo exposure in epilepsy clinical trials," with a goal of considering new designs for epilepsy regulatory trials that may be added to the overall development plan to make it, as a whole, safer for participants while still providing rigorous evidence of effect. This topic was motivated in part by data from a meta-analysis showing a 3- to 5-fold increased rate of sudden unexpected death in epilepsy in participants randomized to placebo or ineffective doses of new antiepileptic drugs. The meeting agenda included rationale and discussion of different trial designs, including active-control add-on trials, placebo add-on to background therapy with adjustment, time to event designs, adaptive designs, platform trials with pooled placebo control, a pharmacokinetic/pharmacodynamic approach to reducing placebo exposure, and shorter trials when drug tolerance has been ruled out. The merits and limitations of each design were discussed and are reviewed here. © 2017 American Academy of Neurology.

  1. Global health trials methodological research agenda: results from a priority setting exercise

    OpenAIRE

    Blazeby, Jane; Nasser, Mona; Soares-Weiser, Karla; Sydes, Matthew R.; Zhang, Junhua; Williamson, Paula R

    2018-01-01

    Background: Methodological research into the design, conduct, analysis and reporting of trials is essential to optimise the process. UK specialists in the field have established a set of top priorities in aid of this research. These priorities, however, may not be reflected in the needs of similar research in low- to middle-income countries (LMICs) with different healthcare provision, resources and research infrastructure. The aim of the study was to identify the top priorities for methodological ...

  2. Hierarchical Representation Learning for Kinship Verification.

    Science.gov (United States)

    Kohli, Naman; Vatsa, Mayank; Singh, Richa; Noore, Afzel; Majumdar, Angshul

    2017-01-01

    Kinship verification has a number of applications such as organizing large collections of images and recognizing resemblances among humans. In this paper, first, a human study is conducted to understand the capabilities of the human mind and to identify the discriminatory areas of a face that facilitate kinship cues. The visual stimuli presented to the participants determine their ability to recognize kin relationship using the whole face as well as specific facial regions. The effect of participant gender and age and kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, discriminability index d', and perceptual information entropy. Utilizing the information obtained from the human study, a hierarchical kinship verification via representation learning (KVRL) framework is utilized to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed as filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and a contractive regularization penalty. A compact representation of facial images of kin is extracted as an output from the learned model and a multi-layer neural network is utilized to verify the kin accurately. A new WVU kinship database is created, which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL-fcDBN) yields the state-of-the-art kinship verification accuracy on the WVU kinship database and on four existing benchmark data sets. Furthermore, kinship information is used as a soft biometric modality to boost the performance of face verification via product of likelihood ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification.
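
    The soft-biometric fusion mentioned at the end (product of likelihood ratios) can be sketched as follows. The Gaussian score models and every numeric parameter here are invented for illustration and are not taken from the paper:

    ```python
    import math

    def gaussian_pdf(x, mu, sigma):
        """Density of a normal distribution at x."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def likelihood_ratio(score, genuine, impostor):
        """LR of one matcher's score under (mu, sigma) models for each class."""
        return gaussian_pdf(score, *genuine) / gaussian_pdf(score, *impostor)

    def fused_decision(face_score, kin_score, threshold=1.0):
        """Product-of-LR fusion: treat the kinship score as a soft biometric
        and combine it with the primary face-match score."""
        lr = (likelihood_ratio(face_score, genuine=(0.8, 0.10), impostor=(0.3, 0.15))
              * likelihood_ratio(kin_score, genuine=(0.7, 0.12), impostor=(0.4, 0.15)))
        return lr > threshold

    accept = fused_decision(0.75, 0.68)   # strong scores from both matchers
    reject = fused_decision(0.35, 0.40)   # scores near the impostor means
    ```

    Because the ratios multiply, a moderately informative kinship score can tip a borderline face-match decision in either direction.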

  3. Self-verification motives at the collective level of self-definition.

    Science.gov (United States)

    Chen, Serena; Chen, Karen Y; Shaw, Lindsay

    2004-01-01

    Three studies examined self-verification motives in relation to collective aspects of the self. Several moderators of collective self-verification were also examined--namely, the certainty with which collective self-views are held, the nature of one's ties to a source of self-verification, the salience of the collective self, and the importance of group identification. Evidence for collective self-verification emerged across all studies, particularly when collective self-views were held with high certainty (Studies 1 and 2), perceivers were somehow tied to the source of self-verification (Study 1), the collective self was salient (Study 2), and group identification was important (Study 3). To the authors' knowledge, these studies are the first to examine self-verification at the collective level of self-definition. The parallel and distinct ways in which self-verification processes may operate at different levels of self-definition are discussed.

  4. 24 CFR 5.512 - Verification of eligible immigration status.

    Science.gov (United States)

    2010-04-01

    ... immigration status. 5.512 Section 5.512 Housing and Urban Development Office of the Secretary, Department of... Noncitizens § 5.512 Verification of eligible immigration status. (a) General. Except as described in paragraph...) Primary verification—(1) Automated verification system. Primary verification of the immigration status of...

  5. Verification of RADTRAN

    International Nuclear Information System (INIS)

    Kanipe, F.L.; Neuhauser, K.S.

    1995-01-01

    This document presents details of the verification process of the RADTRAN computer code which was established for the calculation of risk estimates for radioactive materials transportation by highway, rail, air, and waterborne modes

  6. The N-of-1 Clinical Trial: A Timely Research Opportunity in Homeopathy.

    Science.gov (United States)

    Ulbrich-Zürni, Susanne; Teut, Michael; Roll, Stephanie; Mathie, Robert T

    2018-02-01

    The randomised controlled trial (RCT) is considered the 'gold standard' for establishing treatment efficacy or effectiveness of an intervention, but its data do not infer response in an individual patient. Individualised clinical care, a fundamental principle in complementary and alternative medicine (CAM), including homeopathy, seems well disposed in principle to being researched by single-patient (N-of-1) study design. Guidelines for reporting N-of-1 trials have recently been developed. To overview the current status in the literature of the N-of-1 method and its application in medicine, including CAM. To consider whether the N-of-1 trial design offers an opportunity for novel research in homeopathy. N-OF-1 TRIAL DESIGN: The N-of-1 trial applies the principles of the conventional crossover, blinded, RCT design. The treatment under study and the comparator are repeated in a randomised order, and with suitable washout time, over a defined period. N-of-1 design is constrained for use in chronic stable conditions, and for interventions that have quick onset and cessation of effect, with modest or negligible carryover. Outcome data can be aggregated and interpreted for the individual subject; they can also be pooled with data from several similar N-of-1 trials, enabling more generalisable conclusions. THE N-OF-1 TRIAL IN CAM: The typical individualisation of patient care can be accommodated in N-of-1 study design if the patient and the specific therapeutic intervention are selected within the constraints of the method. Application of the N-of-1 method in CAM has been advocated but has been mainly limited, in practice, to a small number of studies in herbal and traditional Chinese medicine. THE N-OF-1 TRIAL IN HOMEOPATHY: Individualised homeopathy can be accommodated for investigation within the same methodological constraints; less in-depth homeopathic approaches to prescribing are also amenable to investigation using the N-of-1 method. No such studies
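
    A schedule generator for the design sketched above might look like this. The pair/washout structure follows the general N-of-1 description in the record, while the period lengths, seed, and field names are hypothetical:

    ```python
    import random

    def n_of_1_schedule(pairs=3, treatments=("A", "B"),
                        washout_days=7, period_days=14, seed=42):
        """Generate a blinded N-of-1 crossover schedule.

        Each pair presents both treatments in random order (pairwise
        randomisation), separated by washout periods so that carryover
        between treatment periods stays modest, as the design requires.
        """
        rng = random.Random(seed)
        schedule = []
        day = 0
        for pair in range(1, pairs + 1):
            order = list(treatments)
            rng.shuffle(order)                      # randomise order within each pair
            for t in order:
                schedule.append({"pair": pair, "treatment": t,
                                 "start_day": day,
                                 "end_day": day + period_days - 1})
                day += period_days + washout_days   # washout before the next period
        return schedule

    plan = n_of_1_schedule()
    ```

    Each of the three pairs contains both treatments exactly once, so within-patient comparisons can be aggregated across pairs at analysis time.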

  7. Qualitative Research in Palliative Care: Applications to Clinical Trials Work.

    Science.gov (United States)

    Lim, Christopher T; Tadmor, Avia; Fujisawa, Daisuke; MacDonald, James J; Gallagher, Emily R; Eusebio, Justin; Jackson, Vicki A; Temel, Jennifer S; Greer, Joseph A; Hagan, Teresa; Park, Elyse R

    2017-08-01

    While vast opportunities for using qualitative methods exist within palliative care research, few studies provide practical advice for researchers and clinicians as a roadmap to identify and utilize such opportunities. To provide palliative care clinicians and researchers with descriptions of qualitative methodology applied to innovative research questions relevant to palliative care research and to define basic concepts in qualitative research. Body: We describe three qualitative projects as exemplars to describe major concepts in qualitative analysis of early palliative care: (1) a descriptive analysis of clinician documentation in the electronic health record, (2) a thematic content analysis of palliative care clinician focus groups, and (3) a framework analysis of audio-recorded encounters between patients and clinicians as part of a clinical trial. This study provides a foundation for undertaking qualitative research within palliative care and serves as a framework for use by other palliative care researchers interested in qualitative methodologies.

  8. VEG-01: Veggie Hardware Verification Testing

    Science.gov (United States)

    Massa, Gioia; Newsham, Gary; Hummerick, Mary; Morrow, Robert; Wheeler, Raymond

    2013-01-01

    The Veggie plant/vegetable production system is scheduled to fly on ISS at the end of 2013. Since much of the technology associated with Veggie has not been previously tested in microgravity, a hardware validation flight was initiated. This test will allow data to be collected about Veggie hardware functionality on ISS, allow crew interactions to be vetted for future improvements, validate the ability of the hardware to grow and sustain plants, and collect data that will be helpful to future Veggie investigators as they develop their payloads. Additionally, food safety data on the lettuce plants grown will be collected to help support the development of a pathway for the crew to safely consume produce grown on orbit. Significant background research has been performed on the Veggie plant growth system, with early tests focusing on the development of the rooting pillow concept, and the selection of fertilizer, rooting medium and plant species. More recent testing has been conducted to integrate the pillow concept into the Veggie hardware and to ensure that adequate water is provided throughout the growth cycle. Seed sanitation protocols have been established for flight, and hardware sanitation between experiments has been studied. Methods for shipping and storage of rooting pillows and the development of crew procedures and crew training videos for plant activities on-orbit have been established. Science verification testing was conducted and lettuce plants were successfully grown in prototype Veggie hardware; microbial samples were taken, plants were harvested, frozen, stored and later analyzed for microbial growth, nutrients, and ATP levels. An additional verification test, prior to the final payload verification testing, is desired to demonstrate similar growth in the flight hardware and also to test a second set of pillows containing zinnia seeds. Issues with root mat water supply are being resolved, with final testing and flight scheduled for later in 2013.

  9. CASL Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dinh, Nam [North Carolina State Univ., Raleigh, NC (United States)

    2016-06-30

    This report documents the Consortium for Advanced Simulation of LWRs (CASL) verification and validation plan. The document builds upon input from CASL subject matter experts, most notably the CASL Challenge Problem Product Integrators, CASL Focus Area leaders, and CASL code development and assessment teams. This will be a living document that tracks CASL's progress on verification and validation for both the CASL codes (including MPACT, CTF, BISON, MAMBA) and the CASL challenge problems (CIPS, PCI, DNB). The CASL codes and the CASL challenge problems are at differing levels of maturity with respect to validation and verification. The gap analysis will summarize additional work that needs to be done. Additional VVUQ work will be done as resources permit. This report is prepared for the Department of Energy’s (DOE’s) CASL program in support of milestone CASL.P13.02.

  10. Research on database realization technology of seismic information system in CTBT verification

    International Nuclear Information System (INIS)

    Zheng Xuefeng; Shen Junyi; Zhang Huimin; Jing Ping; Sun Peng; Zheng Jiangling

    2005-01-01

    Developing CTBT verification technology has become the most important means of ensuring that the CTBT is implemented conscientiously. Seismic analysis based on a seismic information system (SIS) plays an important role in this field. Built on GIS, the SIS is powerful in spatial analysis, topological analysis and visualization. However, implementing the full system functionality depends critically on the performance of the SIS database. Based on the ArcSDE Geodatabase data model, seamless integrated management of spatial and attribute data has been realized with the ORACLE RDBMS, while most ORACLE functions have been retained. (authors)

  11. The monitoring and verification of nuclear weapons

    International Nuclear Information System (INIS)

    Garwin, Richard L.

    2014-01-01

    This paper partially reviews and updates the potential for monitoring and verification of nuclear weapons, including verification of their destruction. Cooperative monitoring with templates of the gamma-ray spectrum is an important tool, dependent on the use of information barriers

  12. Working Group 2: Future Directions for Safeguards and Verification, Technology, Research and Development

    International Nuclear Information System (INIS)

    Zykov, S.; Blair, D.

    2013-01-01

    For traditional safeguards it was recognized that the presently available hardware, in general, adequately addresses fundamental IAEA needs, and that further developments should therefore focus mainly on improving efficiencies (i.e. increasing cost economies, reliability, maintainability and user-friendliness, keeping abreast of continual advancements in technologies and of the evolution of verification approaches). Specific technology areas that could benefit from further development include: -) Non-destructive measurement systems (NDA), in particular, gamma-spectroscopy and neutron counting techniques; -) Containment and surveillance tools, such as tamper indicating seals, video-surveillance, surface identification methods, etc.; -) Geophysical methods for design information verification (DIV) and safeguarding of geological repositories; and -) New tools and methods for real-time monitoring. Furthermore, the Working Group acknowledged that a 'building block' (or modular) approach should be adopted towards technology development, enabling equipment to be upgraded efficiently as technologies advance. Concerning non-traditional safeguards, in the area of satellite-based sensors, increased spatial resolution and broadened spectral range were identified as priorities. In the area of wide area surveillance, the development of LIDAR-like tools for atmospheric sensing was discussed from the perspective of both potential benefits and certain limitations. Recognizing the limitations imposed by the human brain in terms of information assessment and analysis, technologies are needed that will enable the more effective utilization of all information, regardless of its format and origin. The paper is followed by the slides of the presentation. (A.C.)

  13. DarcyTools, Version 2.1. Verification and validation

    International Nuclear Information System (INIS)

    Svensson, Urban

    2004-03-01

    DarcyTools is a computer code for simulation of flow and transport in porous and/or fractured media. The fractured media in mind is a fractured rock and the porous media the soil cover on the top of the rock; it is hence groundwater flows, which is the class of flows in mind. A number of novel methods and features form the present version of DarcyTools. In the verification studies, these methods are evaluated by comparisons with analytical solutions for idealized situations. The five verification groups thus reflect the main areas of recent developments. The present report will focus on the Verification and Validation of DarcyTools. Two accompanying reports cover other aspects: - Concepts, Methods, Equations and Demo Simulations. - User's Guide. The objective of this report is to compile all verification and validation studies that have been carried out so far. After some brief introductory sections, all cases will be reported in Appendix A (verification cases) and Appendix B (validation cases)
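
    The verification strategy described (comparison with analytical solutions for idealized situations) can be illustrated with a minimal 1-D example unrelated to DarcyTools itself: a finite-difference solve checked against an exact solution, with the observed convergence order recovered from two grids:

    ```python
    import numpy as np

    def solve_1d(n):
        """Finite-difference solve of -h'' = pi^2 sin(pi x), h(0) = h(1) = 0,
        whose exact solution is h(x) = sin(pi x). Returns the max-norm error."""
        x = np.linspace(0.0, 1.0, n + 1)
        dx = 1.0 / n
        # Assemble the standard tridiagonal [-1, 2, -1] operator.
        A = np.zeros((n - 1, n - 1))
        np.fill_diagonal(A, 2.0)
        np.fill_diagonal(A[1:], -1.0)     # sub-diagonal
        np.fill_diagonal(A[:, 1:], -1.0)  # super-diagonal
        b = (np.pi ** 2) * np.sin(np.pi * x[1:-1]) * dx ** 2
        h = np.linalg.solve(A, b)
        return np.max(np.abs(h - np.sin(np.pi * x[1:-1])))

    err_coarse = solve_1d(16)
    err_fine = solve_1d(32)
    order = np.log2(err_coarse / err_fine)   # observed convergence order, ~2
    ```

    Seeing the error drop by a factor of about four when the grid spacing halves confirms that the discretization converges at its theoretical second-order rate, which is the essence of this style of verification.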

  14. Verification and quality control of routine hematology analyzers.

    Science.gov (United States)

    Vis, J Y; Huisman, A

    2016-05-01

    Verification of hematology analyzers (automated blood cell counters) is mandatory before new hematology analyzers may be used in routine clinical care. The verification process consists of several items, comprising among others: precision, accuracy, comparability, carryover, background and linearity throughout the expected range of results. Yet, which standard should be met or which verification limit should be used is at the discretion of the laboratory specialist. This paper offers practical guidance on verification and quality control of automated hematology analyzers and provides an expert opinion on the performance standard that should be met by the contemporary generation of hematology analyzers. To this end, (i) the state-of-the-art performance of hematology analyzers for complete blood count parameters is summarized, (ii) considerations, challenges, and pitfalls concerning the development of a verification plan are discussed, (iii) guidance is given regarding the establishment of reference intervals, and (iv) different methods of quality control of hematology analyzers are reviewed. © 2016 John Wiley & Sons Ltd.

  15. DarcyTools, Version 2.1. Verification and validation

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Urban [Computer-aided Fluid Engineering AB, Norrkoeping (Sweden)

    2004-03-01

    DarcyTools is a computer code for simulation of flow and transport in porous and/or fractured media. The fractured medium in mind is fractured rock, and the porous medium the soil cover on top of the rock; the class of flows in mind is hence groundwater flow. A number of novel methods and features form the present version of DarcyTools. In the verification studies, these methods are evaluated by comparison with analytical solutions for idealized situations; the five verification groups thus reflect the main areas of recent development. The present report focuses on the verification and validation of DarcyTools. Two accompanying reports cover other aspects: - Concepts, Methods, Equations and Demo Simulations; - User's Guide. The objective of this report is to compile all verification and validation studies that have been carried out so far. After some brief introductory sections, all cases are reported in Appendix A (verification cases) and Appendix B (validation cases).

  16. Assume-Guarantee Verification of Software Components in SOFA 2 Framework

    Czech Academy of Sciences Publication Activity Database

    Parízek, P.; Plášil, František

    2010-01-01

    Roč. 4, č. 3 (2010), s. 210-221 ISSN 1751-8806 R&D Projects: GA AV ČR 1ET400300504 Grant - others:GA MŠk(CZ) 7E08004 Institutional research plan: CEZ:AV0Z10300504 Keywords : components * software verification * model checking Subject RIV: JC - Computer Hardware ; Software Impact factor: 0.671, year: 2010

  17. Verification-Based Interval-Passing Algorithm for Compressed Sensing

    OpenAIRE

    Wu, Xiaofu; Yang, Zhen

    2013-01-01

    We propose a verification-based Interval-Passing (IP) algorithm for iterative reconstruction of nonnegative sparse signals using parity-check matrices of low-density parity-check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm alone. Simulation resul...

  18. Multi-canister overpack project - verification and validation, MCNP 4A

    International Nuclear Information System (INIS)

    Goldmann, L.H.

    1997-01-01

    This supporting document contains the software verification and validation (V and V) package used for Phase 2 design of the Spent Nuclear Fuel Multi-Canister Overpack. V and V packages for both ANSYS and MCNP are included. Description of verification run(s): this software must be compiled specifically for the machine on which it is to be used. Therefore, to ease the verification process, the software automatically runs 25 sample problems to ensure proper installation and compilation. Once the runs are completed, the software checks for verification by performing a file comparison between the new output files and the old output files. Any difference between any of the files will cause a verification error. Because of the manner in which the verification is performed, a verification error does not necessarily indicate a problem; it indicates that a closer look at the output files is needed to determine the cause of the error.

  19. Statistical methods to correct for verification bias in diagnostic studies are inadequate when there are few false negatives: a simulation study

    Directory of Open Access Journals (Sweden)

    Vickers Andrew J

    2008-11-01

    Background: A common feature of diagnostic research is that results for a diagnostic gold standard are available primarily for patients who are positive for the test under investigation. Data from such studies are subject to what has been termed "verification bias". We evaluated statistical methods for verification bias correction when there are few false negatives. Methods: A simulation study was conducted of a screening study subject to verification bias. We compared estimates of the area under the curve (AUC) corrected for verification bias, varying both the rate and mechanism of verification. Results: In a single simulated data set, varying the number of false negatives from 0 to 4 led to verification-bias-corrected AUCs ranging from 0.550 to 0.852. Excess variation associated with low numbers of false negatives was confirmed in simulation studies and by analyses of published studies that incorporated verification bias correction. The 2.5th–97.5th centile range constituted as much as 60% of the possible range of AUCs for some simulations. Conclusion: Screening programs are designed such that there are few false negatives. Standard statistical methods for verification bias correction are inadequate in this circumstance.
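    The instability reported above can be reproduced in miniature. The sketch below uses illustrative numbers (not the paper's simulation) and a simple inverse-probability-weighting correction in the spirit of Begg and Greenes: all test-positives are verified but only 10% of test-negatives, so each verified false negative is weighted up tenfold, and the corrected sensitivity swings widely as the false-negative count moves from 0 to 4.

```python
# Toy illustration of verification-bias correction with few false
# negatives (all numbers are made up, not taken from the paper).
def corrected_sensitivity(tp, fn_verified, verif_rate_neg):
    """Weight verified test-negatives up by the inverse of their
    verification rate, then recompute sensitivity."""
    w = 1.0 / verif_rate_neg
    return tp / (tp + fn_verified * w)

# 100 verified true positives; test-negatives verified at a 10% rate.
for fn in range(5):
    sens = corrected_sensitivity(100, fn, 0.10)
    print(f"false negatives={fn}: corrected sensitivity={sens:.3f}")
    # falls from 1.000 (fn=0) to about 0.714 (fn=4)
```

    A single extra false negative thus moves the corrected estimate by several percentage points, which is the mechanism behind the wide AUC ranges the authors observe.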

  20. Key Nuclear Verification Priorities: Safeguards and Beyond

    International Nuclear Information System (INIS)

    Carlson, J.

    2010-01-01

    In addressing nuclear verification priorities, we should look beyond the current safeguards system. Non-proliferation, which the safeguards system underpins, is not an end in itself, but an essential condition for achieving and maintaining nuclear disarmament. Effective safeguards are essential for advancing disarmament, and safeguards issues, approaches and techniques are directly relevant to the development of future verification missions. The extent to which safeguards challenges are successfully addressed - or otherwise - will impact not only on confidence in the safeguards system, but on the effectiveness of, and confidence in, disarmament verification. To identify the key nuclear verification priorities, we need to consider the objectives of verification, and the challenges to achieving these. The strategic objective of IAEA safeguards might be expressed as: To support the global nuclear non-proliferation regime by: - Providing credible assurance that states are honouring their safeguards commitments - thereby removing a potential motivation to proliferate; and - Early detection of misuse of nuclear material and technology - thereby deterring proliferation by the risk of early detection, enabling timely intervention by the international community. Or to summarise - confidence-building, detection capability, and deterrence. These will also be essential objectives for future verification missions. The challenges to achieving these involve a mix of political, technical and institutional dimensions. Confidence is largely a political matter, reflecting the qualitative judgment of governments. Clearly assessments of detection capability and deterrence have a major impact on confidence. Detection capability is largely thought of as 'technical', but also involves issues of legal authority, as well as institutional issues. Deterrence has both political and institutional aspects - including judgments on risk of detection and risk of enforcement action being taken. 

  2. Randomised controlled trials of veterinary homeopathy: characterising the peer-reviewed research literature for systematic review.

    Science.gov (United States)

    Mathie, Robert T; Hacke, Daniela; Clausen, Jürgen

    2012-10-01

    Systematic review of the research evidence in veterinary homeopathy has never previously been carried out. This paper presents the search methods, together with categorised lists of retrieved records, that enable us to identify the literature that is acceptable for future systematic review of randomised controlled trials (RCTs) in veterinary homeopathy. All randomised and controlled trials of homeopathic intervention (prophylaxis and/or treatment of disease, in any species except man) were appraised according to pre-specified criteria. The following databases were systematically searched from their inception up to and including March 2011: AMED; Carstens-Stiftung Homeopathic Veterinary Clinical Research (HomVetCR) database; CINAHL; Cochrane Central Register of Controlled Trials; Embase; Hom-Inform; LILACS; PubMed; Science Citation Index; Scopus. One hundred and fifty records were retrieved; 38 satisfied the acceptance criteria (substantive report of a clinical treatment or prophylaxis trial in veterinary homeopathic medicine randomised and controlled and published in a peer-reviewed journal), and were thus eligible for future planned systematic review. Approximately half of the rejected records were theses. Seven species and 27 different species-specific medical conditions were represented in the 38 papers. Similar numbers of papers reported trials of treatment and prophylaxis (n=21 and n=17 respectively) and were controlled against placebo or other than placebo (n=18, n=20 respectively). Most research focused on non-individualised homeopathy (n=35 papers) compared with individualised homeopathy (n=3). The results provide a complete and clarified view of the RCT literature in veterinary homeopathy. We will systematically review the 38 substantive peer-reviewed journal articles under the main headings: treatment trials; prophylaxis trials. Copyright © 2012 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  3. On the organisation of program verification competitions

    NARCIS (Netherlands)

    Huisman, Marieke; Klebanov, Vladimir; Monahan, Rosemary; Klebanov, Vladimir; Beckert, Bernhard; Biere, Armin; Sutcliffe, Geoff

    In this paper, we discuss the challenges that have to be addressed when organising program verification competitions. Our focus is on competitions for verification systems where the participants both formalise an informally stated requirement and (typically) provide some guidance for the tool to

  4. Simulation environment based on the Universal Verification Methodology

    International Nuclear Information System (INIS)

    Fiergolski, A.

    2017-01-01

    Universal Verification Methodology (UVM) is a standardized approach to verifying integrated circuit designs, targeting Coverage-Driven Verification (CDV). It combines automatic test generation, self-checking testbenches, and coverage metrics to indicate progress in the design verification. The flow of CDV differs from the traditional directed-testing approach. With CDV, a testbench developer starts with a structured plan by setting the verification goals. Those goals are then targeted by the developed testbench, which generates legal stimuli and sends them to a device under test (DUT). Progress is measured by coverage monitors added to the simulation environment; in this way, non-exercised functionality can be identified. Moreover, additional scoreboards indicate undesired DUT behaviour. Such verification environments were developed for three recent ASIC and FPGA projects which have successfully implemented the new work-flow: (1) the CLICpix2 65 nm CMOS hybrid pixel readout ASIC design; (2) the C3PD 180 nm HV-CMOS active sensor ASIC design; (3) the FPGA-based DAQ system of the CLICpix chip. This paper, based on the experience from the above projects, briefly introduces UVM and presents a set of tips and advice applicable at different stages of the verification process cycle.
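    The CDV ingredients named above (constrained-random stimulus generation, a self-checking scoreboard, and coverage monitors) are normally written in SystemVerilog; the Python sketch below is only an analogy of that loop, with a toy DUT invented for illustration.

```python
# Python analogy of a Coverage-Driven Verification loop (UVM itself is
# a SystemVerilog library; the DUT here is a toy invented for illustration).
import random

random.seed(0)

def dut_add(a, b):
    """Device under test: a 4-bit saturating adder (toy example)."""
    return min(a + b, 15)

def reference_model(a, b):
    """Golden model consulted by the self-checking scoreboard."""
    return min(a + b, 15)

coverage = set()   # coverage bins: (carry generated?, output saturated?)
errors = 0
for _ in range(200):                       # constrained-random stimuli
    a, b = random.randint(0, 15), random.randint(0, 15)
    out = dut_add(a, b)
    if out != reference_model(a, b):       # scoreboard comparison
        errors += 1
    coverage.add((a + b > 15, out == 15))  # coverage monitor

print(f"scoreboard mismatches: {errors}, coverage bins hit: {len(coverage)}")
```

    Non-exercised functionality shows up as empty coverage bins, which is exactly what drives the next round of stimulus generation in a CDV flow.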

  5. Clinical Trials

    Medline Plus

    Full Text Available ... or device is safe and effective for humans. What Are Clinical Trials? Clinical trials are research studies ... parents, clinicians, researchers, children, and the general public. What to Expect During a clinical trial, doctors, nurses, ...

  6. HTGR analytical methods and design verification

    International Nuclear Information System (INIS)

    Neylan, A.J.; Northup, T.E.

    1982-05-01

    Analytical methods for the high-temperature gas-cooled reactor (HTGR) include development, update, verification, documentation, and maintenance of all computer codes for HTGR design and analysis. This paper presents selected nuclear, structural mechanics, seismic, and systems analytical methods related to the HTGR core. This paper also reviews design verification tests in the reactor core, reactor internals, steam generator, and thermal barrier

  7. IMRT plan verification in radiotherapy

    International Nuclear Information System (INIS)

    Vlk, P.

    2006-01-01

    This article describes the procedure for verification of IMRT (intensity-modulated radiation therapy) plans that is used at the Oncological Institute of St. Elisabeth in Bratislava. It contains a basic description of IMRT technology and of the deployment of the IMRT planning system CORVUS 6.0 and the MIMIC device (multilamellar intensity-modulated collimator), as well as the overall process of verifying the plan created. The aim of verification is, in particular, good control of the functions of MIMIC and evaluation of the overall reliability of IMRT planning. (author)

  8. Automatic Verification of Timing Constraints for Safety Critical Space Systems

    Science.gov (United States)

    Fernandez, Javier; Parra, Pablo; Sanchez Prieto, Sebastian; Polo, Oscar; Bernat, Guillem

    2015-09-01

    This paper presents an automatic verification process, focusing on the verification of scheduling analysis parameters. The proposal is part of a process, based on Model Driven Engineering, to automate verification and validation of the software on board satellites; it is implemented in the software control unit of the energetic particle detector that is a payload of the Solar Orbiter mission. From the design model, a scheduling analysis model and its verification model are generated. The verification is defined as constraints in the form of finite timed automata. When the system is deployed on target, verification evidence is extracted at instrumented points. The constraints are fed with this evidence; if any constraint is not satisfied by the on-target evidence, the scheduling analysis is not valid.
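    A minimal sketch of the final step, feeding constraints with on-target evidence: the task names, deadlines and trace format below are invented for illustration, and the real system expresses its constraints as timed automata rather than simple per-job deadline checks.

```python
# Toy constraint check: each instrumented job's measured response time
# must not exceed its declared deadline (names and numbers invented).
def violated_constraints(trace, deadlines):
    """trace: (task, release_time, finish_time) tuples collected at
    instrumentation points; returns the violating (task, response) pairs."""
    return [(task, finish - release)
            for task, release, finish in trace
            if finish - release > deadlines[task]]

trace = [("epd_acq", 0.0, 1.8),    # response time 1.8 ms: within deadline
         ("epd_acq", 5.0, 7.3),    # response time 2.3 ms: too slow
         ("telemetry", 0.0, 9.5)]  # response time 9.5 ms: within deadline
deadlines = {"epd_acq": 2.0, "telemetry": 10.0}
print(violated_constraints(trace, deadlines))  # flags the second epd_acq job
```

    If the returned list is non-empty, the on-target evidence contradicts the scheduling analysis, which is then declared invalid, mirroring the paper's pass/fail criterion.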

  9. Research ethics committee decision-making in relation to an efficient neonatal trial.

    Science.gov (United States)

    Gale, C; Hyde, M J; Modi, N

    2017-07-01

    Randomised controlled trials, a gold-standard approach to reducing uncertainties in clinical practice, are growing in cost and are often slow to recruit. We determined whether methodological approaches to facilitate large, efficient clinical trials were acceptable to UK research ethics committees (RECs). We developed a protocol, in collaboration with parents, for a comparative-effectiveness randomised controlled trial comparing two widely used blood transfusion practices in preterm infants. We incorporated four approaches to improve recruitment and efficiency: (i) a point-of-care design using electronic patient records for patient identification, randomisation and data acquisition; (ii) a short two-page information sheet; (iii) explicit mention of possible inclusion benefit; (iv) opt-out consent with enrolment as the default. With the support of the UK Health Research Authority, we submitted an identical protocol to 12 UK RECs. Setting: RECs in the UK. Main outcome: number of RECs granting favourable opinions. The use of electronic patient records was acceptable to all RECs; one REC raised concerns about the short parent information sheet, 10 about inclusion benefit and 9 about opt-out consent. Following responses to queries, nine RECs granted a favourable final opinion and three rejected the application because they considered the opt-out consent process invalid. A majority of RECs in this study consider the use of electronic patient record data, short information sheets, opt-out consent and mention of possible inclusion benefit to be acceptable in neonatal comparative-effectiveness research. We identified a need for guidance for RECs in relation to opt-out consent processes. These methods provide an opportunity to facilitate large randomised controlled trials. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  10. Clinical Trials

    Medline Plus

    Full Text Available ... Clinical Trials About Clinical Trials Clinical trials are research studies that explore whether a medical strategy, treatment, ... required to have an IRB. Office for Human Research Protections The U.S. Department of Health and Human ...

  11. Who is the research subject in cluster randomized trials in health research?

    Directory of Open Access Journals (Sweden)

    Brehaut Jamie C

    2011-07-01

    This article is part of a series of papers examining ethical issues in cluster randomized trials (CRTs) in health research. In the introductory paper in this series, we set out six areas of inquiry that must be addressed if the CRT is to be set on a firm ethical foundation. This paper addresses the first of the questions posed, namely: who is the research subject in a CRT in health research? The identification of human research subjects is logically prior to the application of protections as set out in research ethics and regulation. Aspects of CRT design, including the fact that in a single study the units of randomization, experimentation, and observation may differ, complicate the identification of human research subjects. But the proper identification of human research subjects is important if they are to be protected from harm and exploitation, and if research ethics committees are to review CRTs efficiently. We examine the research ethics literature and international regulations to identify the core features of human research subjects, and then unify these features under a single, comprehensive definition of human research subject. We define a human research subject as any person whose interests may be compromised as a result of interventions in a research study. Individuals are only human research subjects in CRTs if: (1) they are directly intervened upon by investigators; (2) they interact with investigators; (3) they are deliberately intervened upon via a manipulation of their environment that may compromise their interests; or (4) their identifiable private information is used to generate data. Individuals who are indirectly affected by CRT study interventions, including patients of healthcare providers participating in knowledge translation CRTs, are not human research subjects unless at least one of these conditions is met.

  12. Pediatric Clinical Trials Conducted in South Korea from 2006 to 2015: An Analysis of the South Korean Clinical Research Information Service, US ClinicalTrials.gov and European Clinical Trials Registries.

    Science.gov (United States)

    Choi, Sheung-Nyoung; Lee, Ji-Hyun; Song, In-Kyung; Kim, Eun-Hee; Kim, Jin-Tae; Kim, Hee-Soo

    2017-12-01

    The status of pediatric clinical trials performed in South Korea in the last decade, including clinical trials of drugs with unapproved indications for children, has not been previously examined. The aim was to provide information regarding the current state of pediatric clinical trials and create a basis for future trials performed in South Korea by reviewing three databases of clinical trial registrations. We searched for pediatric clinical studies conducted in South Korea between 2006 and 2015 and registered on the Clinical Research Information Service (CRIS), ClinicalTrials.gov, or the European Clinical Trials Registry (EuCTR). Additionally, we reviewed whether unapproved indications were involved in each trial by comparing the trials with a list of authorized trials provided by the Ministry of Food and Drug Safety (MFDS). The primary and secondary outcomes were to determine the change in the number of pediatric clinical trials with unapproved indications over time and to assess the status of pediatric clinical trials not authorized by the MFDS and the publication of articles after these clinical trials, respectively. We identified 342 clinical studies registered in the CRIS (n = 81), ClinicalTrials.gov (n = 225), and EuCTR (n = 36), of which 306 were reviewed after excluding duplicate registrations. Among them, 181 studies were interventional trials dealing with drugs and biological agents, of which 129 (71.3%) involved unapproved drugs. Of these 129 trials, 107 (82.9%) were authorized by the MFDS. Pediatric clinical trials in South Korea aiming to establish the safety and efficacy of drugs in children are increasing; however, non-MFDS-authorized studies remain an issue.

  13. Mathematical description for the measurement and verification of energy efficiency improvement

    International Nuclear Information System (INIS)

    Xia, Xiaohua; Zhang, Jiangfeng

    2013-01-01

    Highlights: • A mathematical model for the measurement and verification problem is established. • Criteria to choose among the four measurement and verification options are given. • An optimal measurement and verification plan is defined. • Calculus of variations and optimal control can be further applied. - Abstract: Insufficient energy supply is a problem faced by many countries, and energy efficiency improvement is identified as the quickest and most effective solution to this problem. Many energy efficiency projects are therefore initiated to reach various energy saving targets. These energy saving targets need to be measured and verified, and in many countries such measurement and verification (M and V) activity is guided by the International Performance Measurement and Verification Protocol (IPMVP). However, M and V is widely regarded as an inaccurate science: an engineering practice relying heavily on professional judgement. This paper presents a mathematical description of the energy efficiency M and V problem and thus casts into a scientific framework the basic M and V concepts, propositions, techniques and methodologies. For this purpose, a general description of energy system modeling is provided to facilitate the discussion, strict mathematical definitions for baseline and baseline adjustment are given, and M and V plan development is formulated as an M and V modeling problem. An optimal M and V plan is then obtained by solving a calculus of variations, or equivalently, an optimal control problem. This approach provides a fruitful source of research problems by which optimal M and V plans under various practical constraints can be determined. With the aid of linear control system models, this mathematical description also provides sufficient conditions for M and V practitioners to determine which one of the four M and V options in IPMVP should be used in a practical M and V project.
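    The baseline and baseline-adjustment concepts can be made concrete with a small sketch in the IPMVP spirit (all numbers invented): fit a weather-dependent baseline on pre-retrofit data, then report savings as the adjusted baseline minus measured post-retrofit use.

```python
# Sketch of an IPMVP-style avoided-energy calculation (illustrative
# numbers; real M&V plans add non-routine adjustments and error bounds).
def fit_baseline(hdd, energy):
    """Least-squares fit of energy = b0 + b1 * heating-degree-days
    on pre-retrofit data; returns (b0, b1)."""
    n = len(hdd)
    mx, my = sum(hdd) / n, sum(energy) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(hdd, energy))
          / sum((x - mx) ** 2 for x in hdd))
    return my - b1 * mx, b1

def avoided_energy(b0, b1, hdd_post, measured_post):
    """Savings = baseline adjusted to post-retrofit weather
    minus measured post-retrofit consumption."""
    return sum((b0 + b1 * h) - m for h, m in zip(hdd_post, measured_post))

b0, b1 = fit_baseline([100, 200, 300], [12.0, 22.0, 32.0])
savings = avoided_energy(b0, b1, [150, 250], [12.0, 22.0])
print(f"avoided energy: {savings:.1f}")
```

    The baseline adjustment is what makes the comparison fair: consumption is compared against what the baseline model predicts for the post-retrofit weather, not against raw pre-retrofit usage.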

  14. The relative importance of managerial competencies for predicting the perceived job performance of Broad-Based Black Economic Empowerment verification practitioners

    Directory of Open Access Journals (Sweden)

    Barbara M. Seate

    2016-04-01

    Orientation: There is a need for the growing Broad-Based Black Economic Empowerment (B-BBEE) verification industry to assess competencies and determine skills gaps for the management of verification practitioners' perceived job performance. Knowing which managerial competencies are important for different managerial functions is vital for developing and improving training and development programmes. Research purpose: The purpose of this study was to determine the managerial capabilities that are required of B-BBEE verification practitioners in order to improve their perceived job performance. Motivation for the study: The growing number of B-BBEE verification practitioners calls for more focused training and development. Generating such a training and development programme demands empirical research into the relative importance of managerial competencies. Research approach, design and method: A quantitative design using the survey approach was adopted. A questionnaire was administered to a stratified sample of 87 B-BBEE verification practitioners. Data were analysed using the Statistical Package for Social Sciences (version 22.0) and Smart Partial Least Squares software. Main findings: The results of the correlation analysis revealed that there were strong and positive associations between technical skills, interpersonal skills, compliance to standards and ethics, managerial skills and perceived job performance. Results of the regression analysis showed that managerial skills, compliance to standards and ethics, and interpersonal skills were statistically significant in predicting perceived job performance. However, technical skills were insignificant in predicting perceived job performance. Practical/managerial implications: The study has shown that the B-BBEE verification industry, insofar as the technical skills of the practitioners are concerned, does have suitably qualified staff with the requisite educational qualifications.

  15. A framework for nuclear agreement and verification

    International Nuclear Information System (INIS)

    Ali, A.

    1991-01-01

    This chapter assesses the prospects for a nuclear agreement between India and Pakistan. The chapter opens with a review of past and present political environments of the two countries. The discussion proceeds to describe the linkage of global arms control agreements, prospects for verification of a Comprehensive Test Ban Treaty, the role of nuclear power in any agreements, the intrusiveness of verification, and possible post-proliferation agreements. Various monitoring and verification technologies are described (mainly satellite oriented). The chapter concludes with an analysis of the likelihood of persuading India and Pakistan to agree to a nonproliferation arrangement

  16. Verification of Many-Qubit States

    Directory of Open Access Journals (Sweden)

    Yuki Takeuchi

    2018-06-01

    Verification is the task of checking whether a given quantum state is close to an ideal state or not. In this paper, we show that a variety of many-qubit quantum states can be verified with only sequential single-qubit measurements of Pauli operators. First, we introduce a protocol for verifying ground states of Hamiltonians. We next explain how to verify quantum states generated by a certain class of quantum circuits. We finally propose an adaptive test of stabilizers that enables the verification of all polynomial-time-generated hypergraph states, which include output states of the Bremner-Montanaro-Shepherd-type instantaneous quantum polynomial time (IQP) circuits. Importantly, we do not make any assumption that identically and independently distributed copies of the same states are given: our protocols work even if some highly complicated entanglement is created among copies in any artificial way. As applications, we consider the verification of the quantum computational supremacy demonstration with IQP models, and verifiable blind quantum computing.

  17. Formal Verification -26 ...

    Indian Academy of Sciences (India)

    by testing of the components and successful testing leads to the software being ... Formal verification is based on formal methods which are mathematically based ... scenario under which a similar error could occur. There are various other ...

  18. Verification of wet blasting decontamination technology

    International Nuclear Information System (INIS)

    Matsubara, Sachito; Murayama, Kazunari; Yoshida, Hirohisa; Igei, Shigemitsu; Izumida, Tatsuo

    2013-01-01

    Macoho Co., Ltd. participated in the projects 'Decontamination Verification Test FY 2011 by the Ministry of the Environment' and 'Decontamination Verification Test FY 2011 by the Cabinet Office', and we carried out verification tests of a wet blasting technology for decontamination of rubble and roads contaminated by the accident at the Fukushima Daiichi Nuclear Power Plant of the Tokyo Electric Power Company. In the verification tests, the wet blasting decontamination technology achieved a decontamination rate of 60-80% for concrete paving, interlocking and dense-graded asphalt pavement when applied to road decontamination. When applied to rubble decontamination, the decontamination rate was 50-60% for gravel and approximately 90% for concrete and wood. It was thought that Cs-134 and Cs-137 attached to the fine sludge scraped off from the decontamination object, and the sludge was found to be separated from the abrasives by wet cyclone classification: the activity concentration of the abrasives is 1/30 or less that of the sludge. This result shows that the abrasives can be reused without problems when the wet blasting decontamination technology is used. (author)

  19. Clinical Trials

    Medline Plus

    Full Text Available ... Health Topics / About Clinical Trials About Clinical Trials Clinical trials are research studies that explore whether a medical strategy, treatment, ... tool for advancing medical knowledge and patient care. Clinical research is done only if doctors don't know ...

  20. Involvement of consumers in studies run by the Medical Research Council Clinical Trials Unit: Results of a survey

    Directory of Open Access Journals (Sweden)

    Vale Claire L

    2012-01-01

    Full Text Available Abstract Background We aimed to establish levels of consumer involvement in randomised controlled trials (RCTs), meta-analyses and other studies carried out by the UK Medical Research Council (MRC) Clinical Trials Unit across the range of research programs, predominantly in cancer and HIV. Methods Staff responsible for studies that were included in a Unit Progress Report (MRC CTU, April 2009) were asked to complete a semi-structured questionnaire survey regarding consumer involvement. This was defined as active involvement of consumers as partners in the research process and not as subjects of that research. The electronic questionnaires combined open and closed questions, intended to capture quantitative and qualitative information on whether studies had involved consumers; types of activities undertaken; recruitment and support; advantages and disadvantages of involvement and its perceived impact on aspects of the research. Results Between October 2009 and April 2010, 138 completed questionnaires (86%) were returned. Studies had been conducted over a 20 year period from 1989, and around half were in cancer; 30% in HIV and 20% were in other disease areas including arthritis, tuberculosis and blood transfusion medicine. Forty-three studies (31%) had some consumer involvement, most commonly as members of trial management groups (TMG) [88%]. A number of positive impacts on both the research and the researcher were identified. Researchers generally felt involvement was worthwhile and some felt that consumer involvement had improved the credibility of the research. Benefits in design and quality, trial recruitment, dissemination and decision making were also perceived. Researchers felt they learned from consumer involvement, albeit that there were some barriers. Conclusions Whilst most researchers identified benefits of involving consumers, most of the studies included in the survey had no involvement. Information from this survey will inform the development

  1. Involvement of consumers in studies run by the Medical Research Council Clinical Trials Unit: results of a survey.

    Science.gov (United States)

    Vale, Claire L; Thompson, Lindsay C; Murphy, Claire; Forcat, Silvia; Hanley, Bec

    2012-01-13

    We aimed to establish levels of consumer involvement in randomised controlled trials (RCTs), meta-analyses and other studies carried out by the UK Medical Research Council (MRC) Clinical Trials Unit across the range of research programs, predominantly in cancer and HIV. Staff responsible for studies that were included in a Unit Progress Report (MRC CTU, April 2009) were asked to complete a semi-structured questionnaire survey regarding consumer involvement. This was defined as active involvement of consumers as partners in the research process and not as subjects of that research. The electronic questionnaires combined open and closed questions, intended to capture quantitative and qualitative information on whether studies had involved consumers; types of activities undertaken; recruitment and support; advantages and disadvantages of involvement and its perceived impact on aspects of the research. Between October 2009 and April 2010, 138 completed questionnaires (86%) were returned. Studies had been conducted over a 20 year period from 1989, and around half were in cancer; 30% in HIV and 20% were in other disease areas including arthritis, tuberculosis and blood transfusion medicine. Forty-three studies (31%) had some consumer involvement, most commonly as members of trial management groups (TMG) [88%]. A number of positive impacts on both the research and the researcher were identified. Researchers generally felt involvement was worthwhile and some felt that consumer involvement had improved the credibility of the research. Benefits in design and quality, trial recruitment, dissemination and decision making were also perceived. Researchers felt they learned from consumer involvement, albeit that there were some barriers. Whilst most researchers identified benefits of involving consumers, most of the studies included in the survey had no involvement. Information from this survey will inform the development of a unit policy on consumer involvement, to guide future

  2. Self-Verification as a Mediator of Mothers’ Self-Fulfilling Effects on Adolescents’ Educational Attainment

    Science.gov (United States)

    Scherr, Kyle C.; Madon, Stephanie; Guyll, Max; Willard, Jennifer; Spoth, Richard

    2013-01-01

    This research examined whether self-verification acts as a general mediational process of self-fulfilling prophecies. The authors tested this hypothesis by examining whether self-verification processes mediated self-fulfilling prophecy effects within a different context and with a different belief and a different outcome than has been used in prior research. Results of longitudinal data obtained from mothers and their adolescents (N = 332) indicated that mothers’ beliefs about their adolescents’ educational outcomes had a significant indirect effect on adolescents’ academic attainment through adolescents’ educational aspirations. This effect, observed over a six year span, provided evidence that mothers’ self-fulfilling effects occurred, in part, because mothers’ false beliefs influenced their adolescents’ own educational aspirations which adolescents then self-verified through their educational attainment. The theoretical and applied implications of these findings are discussed. PMID:21357755

  3. Self-verification as a mediator of mothers' self-fulfilling effects on adolescents' educational attainment.

    Science.gov (United States)

    Scherr, Kyle C; Madon, Stephanie; Guyll, Max; Willard, Jennifer; Spoth, Richard

    2011-05-01

    This research examined whether self-verification acts as a general mediational process of self-fulfilling prophecies. The authors tested this hypothesis by examining whether self-verification processes mediated self-fulfilling prophecy effects within a different context and with a different belief and a different outcome than has been used in prior research. Results of longitudinal data obtained from mothers and their adolescents (N=332) indicated that mothers' beliefs about their adolescents' educational outcomes had a significant indirect effect on adolescents' academic attainment through adolescents' educational aspirations. This effect, observed over a 6-year span, provided evidence that mothers' self-fulfilling effects occurred, in part, because mothers' false beliefs influenced their adolescents' own educational aspirations, which adolescents then self-verified through their educational attainment. The theoretical and applied implications of these findings are discussed.
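The mediation model in this record (mothers' beliefs influencing attainment through adolescents' aspirations) is commonly estimated as the product of two regression coefficients. A minimal sketch on simulated data; the effect sizes below are invented for illustration and are not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 332  # same sample size as the study; the data themselves are simulated
belief = rng.normal(size=n)                     # mothers' beliefs (predictor)
aspiration = 0.5 * belief + rng.normal(size=n)  # adolescents' aspirations (mediator)
attainment = 0.4 * aspiration + 0.1 * belief + rng.normal(size=n)  # outcome

def ols_slopes(y, columns):
    """Slopes from an OLS fit of y on the given predictor columns (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols_slopes(aspiration, [belief])[0]              # predictor -> mediator path
b = ols_slopes(attainment, [aspiration, belief])[0]  # mediator -> outcome, controlling predictor
indirect_effect = a * b                              # the mediated (indirect) effect
```

In practice the indirect effect's significance would be assessed with bootstrapped confidence intervals rather than the point estimate alone.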

  4. Citation of prior research has increased in introduction and discussion sections with time: A survey of clinical trials in physiotherapy.

    Science.gov (United States)

    Hoderlein, Xenia; Moseley, Anne M; Elkins, Mark R

    2017-08-01

    Many clinical trials are reported without reference to the existing relevant high-quality research. This study aimed to investigate the extent to which authors of reports of clinical trials of physiotherapy interventions try to use high-quality clinical research to (1) help justify the need for the trial in the introduction and (2) help interpret the trial's results in the discussion. Data were extracted from 221 clinical trials that were randomly selected from the Physiotherapy Evidence Database: 70 published in 2001 (10% sample) and 151 published in 2015 (10% sample). The Physiotherapy Evidence Database score (which rates methodological quality and completeness of reporting) for each trial was also downloaded. Overall 41% of trial reports cited a systematic review or the results of a search for other evidence in the introduction section: 20% for 2001 and 50% for 2015 (relative risk = 2.3, 95% confidence interval = 1.5-3.8). For the discussion section, only 1 of 221 trials integrated the results of the trial into an existing meta-analysis, but citation of a relevant systematic review did increase from 17% in 2001 to 34% in 2015. There was no relationship between citation of existing research and the total Physiotherapy Evidence Database score. Published reports of clinical trials of physiotherapy interventions increasingly cite a systematic review or the results of a search for other evidence in the introduction, but integration with existing research in the discussion section is very rare. To encourage the use of existing research, stronger recommendations to refer to existing systematic reviews (where available) could be incorporated into reporting checklists and journal editorial guidelines.
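The relative risk reported in the abstract compares citation proportions across the two samples. A sketch of the standard calculation with a log-normal 95% confidence interval; the counts below are illustrative approximations of the reported percentages, not the paper's exact data:

```python
import math

def relative_risk(events_a, n_a, events_b, n_b):
    """Relative risk of group A vs group B with a 95% CI (log-normal approximation)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Illustrative counts approximating the reported percentages
# (50% of 151 trials in 2015 vs 20% of 70 trials in 2001):
rr, lo, hi = relative_risk(75, 151, 14, 70)
```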

  5. K Basins Field Verification Program

    International Nuclear Information System (INIS)

    Booth, H.W.

    1994-01-01

    The Field Verification Program establishes a uniform and systematic process to ensure that technical information depicted on selected engineering drawings accurately reflects the actual existing physical configuration. This document defines the Field Verification Program necessary to perform the field walkdown and inspection process that identifies the physical configuration of the systems required to support the mission objectives of K Basins. This program is intended to provide an accurate accounting of the actual field configuration by documenting the as-found information on a controlled drawing

  6. Engineering drawing field verification program. Revision 3

    International Nuclear Information System (INIS)

    Ulk, P.F.

    1994-01-01

    Safe, efficient operation of waste tank farm facilities is dependent in part upon the availability of accurate, up-to-date plant drawings. Accurate plant drawings are also required in support of facility upgrades and future engineering remediation projects. This supporting document establishes the procedure for performing a visual field verification of engineering drawings, the degree of visual observation being performed, and the documentation of the results. A copy of the drawing attesting to the degree of visual observation will be paginated into the released Engineering Change Notice (ECN) documenting the field verification for future retrieval and reference. All waste tank farm essential and support drawings within the scope of this program will be converted from manual to computer aided drafting (CAD) drawings. A permanent reference to the field verification status will be placed along the right border of the CAD-converted drawing, referencing the revision level at which the visual verification was performed and documented

  7. Verification of Open Interactive Markov Chains

    OpenAIRE

    Brazdil, Tomas; Hermanns, Holger; Krcal, Jan; Kretinsky, Jan; Rehak, Vojtech

    2012-01-01

    Interactive Markov chains (IMC) are compositional behavioral models extending both labeled transition systems and continuous-time Markov chains. IMC pair modeling convenience - owed to compositionality properties - with effective verification algorithms and tools - owed to Markov properties. Thus far however, IMC verification did not consider compositionality properties, but considered closed systems. This paper discusses the evaluation of IMC in an open and thus compositional interpretation....

  8. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT: ANEST IWATA CORPORATION LPH400-LV HVLP SPRAY GUN

    Science.gov (United States)

    This Environmental Technology Verification report describes the characteristics of a paint spray gun. The research showed that the spray gun provided absolute and relative increases in transfer efficiency over the baseline and provided a reduction in the use of paint.

  9. A Review of Barriers to Minorities' Participation in Cancer Clinical Trials: Implications for Future Cancer Research.

    Science.gov (United States)

    Salman, Ali; Nguyen, Claire; Lee, Yi-Hui; Cooksey-James, Tawna

    2016-04-01

    To enhance nurses' awareness and competencies in practice and research by reporting the common barriers to participation of minorities in cancer clinical trials and discussing facilitators and useful strategies for recruitment. Several databases were searched for articles published in peer-reviewed journals. Some of the barriers to minorities' participation in clinical trials were identified within the cultural social context of cancer patients. The involvement of community networking was suggested as the most effective strategy for the recruitment of minorities in cancer clinical trials. Using culturally sensitive approaches to enhance ethnic minorities' participation is important for advancing cancer care and eliminating health disparities. Awareness of barriers and potential facilitators to the enrollment of ethnic minority cancer patients may contribute to enhancing nurses' competencies in recruiting ethnic minorities for nursing research, playing efficient roles in cancer clinical trial teams, and providing culturally competent quality care.

  10. Verification of Radiation Isocenter on Linac Beam 6 MV using Computed Radiography

    Science.gov (United States)

    Irsal, Muhammad; Hidayanto, Eko; Sutanto, Heri

    2017-06-01

    The radiation isocenter is an important part of quality assurance for the linear accelerator (Linac), because the isocenter is the main reference point for irradiation in radiotherapy and can shift as the gantry and collimator rotate. Radiation isocenter verification is generally performed using special film. In this research, radiation isocenter verification was conducted using computed radiography with digital image processing techniques. Images were acquired on a 6 MV Linac using the star shot method, in which a star-shaped beam pattern is produced by rotation of the collimator, gantry and couch. Each beam was then delineated to determine the centroid and the isocenter diameter. The verification performed on the collimator and the couch shows that the isocenter diameter is 0.632 mm for collimator rotation and 0.458 mm for the couch. According to AAPM Report 40, the Linac used in this study is still in good condition and fit for operation, because the radiation isocenter diameter is below 2 mm.

  11. Verification of Radiation Isocenter on Linac Beam 6 MV using Computed Radiography

    International Nuclear Information System (INIS)

    Irsal, Muhammad; Hidayanto, Eko; Sutanto, Heri

    2017-01-01

    The radiation isocenter is an important part of quality assurance for the linear accelerator (Linac), because the isocenter is the main reference point for irradiation in radiotherapy and can shift as the gantry and collimator rotate. Radiation isocenter verification is generally performed using special film. In this research, radiation isocenter verification was conducted using computed radiography with digital image processing techniques. Images were acquired on a 6 MV Linac using the star shot method, in which a star-shaped beam pattern is produced by rotation of the collimator, gantry and couch. Each beam was then delineated to determine the centroid and the isocenter diameter. The verification performed on the collimator and the couch shows that the isocenter diameter is 0.632 mm for collimator rotation and 0.458 mm for the couch. According to AAPM Report 40, the Linac used in this study is still in good condition and fit for operation, because the radiation isocenter diameter is below 2 mm. (paper)
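The star shot analysis described in these records reduces to a small geometry problem: each delineated beam is a line in the image plane, and the isocenter is the point closest to all beam axes. A minimal sketch (our own illustration, not the paper's software), using a least-squares point and taking the diameter as twice the largest residual distance:

```python
import numpy as np

def star_shot_isocenter(points, angles_deg):
    """Least-squares isocenter of 2-D beam central axes from a star shot.

    Beam axis i passes through ``points[i]`` (x, y in mm) at ``angles_deg[i]``.
    Returns the point minimizing summed squared distances to all axes and the
    isocenter diameter, taken here as twice the largest residual distance.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    lines = []
    for (px, py), ang in zip(points, angles_deg):
        t = np.deg2rad(ang)
        n = np.array([-np.sin(t), np.cos(t)])  # unit normal to the beam axis
        q = np.array([px, py])
        A += np.outer(n, n)                    # normal equations for the LS point
        b += n * (n @ q)
        lines.append((n, q))
    p = np.linalg.solve(A, b)
    diameter = 2 * max(abs(n @ (p - q)) for n, q in lines)
    return p, diameter
```

With three or more distinct beam angles the normal-equation matrix is invertible, so the least-squares point is unique.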

  12. The TLRR II – Providing Digital Infrastructure to Research Roman Republican Trials

    Directory of Open Access Journals (Sweden)

    Kirsten Jahn

    2016-12-01

    Full Text Available The project Trials in the Late Roman Republic II (TLRR II) aims at collecting, organizing, and analyzing information about Roman legal cases in an XML database. M. Alexander published the book “Trials in the Late Roman Republic, 149 BC to 50 BC” (TLRR I) in 1990, and initiated the current project that will make Roman republican trials easily accessible with modern technology. For each case a short description is provided, a clear distinction between assumptions and facts is made, and an updated bibliography can be found at the end of each entry. The open access database can serve both as a reference work and as a starting point for further research in Roman Republican history. It could be a connecting link within the developing digital infrastructure for that era.
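An XML case record of the kind described, with the fact/assumption distinction carried in markup, might look like the following sketch. The element and attribute names here are our own invention for illustration; the actual TLRR II schema is not specified in this record.

```python
import xml.etree.ElementTree as ET

# Hypothetical TLRR II-style record (invented schema).
record_xml = """
<trial id="TLRR-0001">
  <date certainty="fact">70 BC</date>
  <defendant certainty="fact">C. Verres</defendant>
  <charge certainty="assumption">res repetundae</charge>
  <bibliography>
    <item>Alexander, Trials in the Late Roman Republic (1990)</item>
  </bibliography>
</trial>
"""

root = ET.fromstring(record_xml)
# Separate established facts from editorial assumptions, as the database does.
facts = {e.tag: e.text for e in root if e.get("certainty") == "fact"}
assumptions = {e.tag: e.text for e in root if e.get("certainty") == "assumption"}
```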

  13. Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation

    Science.gov (United States)

    Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna

    2000-01-01

    This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.

  14. Collaborative translational research leading to multicenter clinical trials in Duchenne muscular dystrophy: the Cooperative International Neuromuscular Research Group (CINRG).

    Science.gov (United States)

    Escolar, Diana M; Henricson, Erik K; Pasquali, Livia; Gorni, Ksenija; Hoffman, Eric P

    2002-10-01

    Progress in the development of rationally based therapies for Duchenne muscular dystrophy has been accelerated by encouraging multidisciplinary, multi-institutional collaboration between basic science and clinical investigators in the Cooperative International Research Group. We combined existing research efforts in pathophysiology by a gene expression profiling laboratory with the efforts of animal facilities capable of conducting high-throughput drug screening and toxicity testing to identify safe and effective drug compounds that target different parts of the pathophysiologic cascade in a genome-wide drug discovery approach. Simultaneously, we developed a clinical trial coordinating center and an international network of collaborating physicians and clinics where those drugs could be tested in large-scale clinical trials. We hope that by bringing together investigators at these facilities and providing the infrastructure to support their research, we can rapidly move new bench discoveries through animal model screening and into therapeutic testing in humans in a safe, timely and cost-effective setting.

  15. Design verification for large reprocessing plants (Proposed procedures)

    International Nuclear Information System (INIS)

    Rolandi, G.

    1988-07-01

    In the 1990s, four large commercial reprocessing plants will progressively come into operation. If an effective and efficient safeguards system is to be applied to these large and complex plants, several important factors have to be considered. One of these factors, addressed in the present report, concerns plant design verification. Design verification provides an overall assurance on plant measurement data. To this end design verification, although limited to the safeguards aspects of the plant, must be a systematic activity, which starts during the design phase, continues during the construction phase and is particularly performed during the various steps of the plant's commissioning phase. The detailed procedures for design information verification on commercial reprocessing plants must be defined within the frame of the general provisions set forth in INFCIRC/153 for any type of safeguards related activity and specifically for design verification. The present report is intended as a preliminary contribution on a purely technical level, and focusses on the problems within the Agency. For the purpose of the present study the most complex case was assumed: i.e. a safeguards system based on conventional materials accountancy, accompanied both by special input and output verification and by some form of near-real-time accountancy involving in-process inventory taking, based on authenticated operator's measurement data. C/S measures are also foreseen, where necessary to supplement the accountancy data. A complete 'design verification' strategy comprises: informing the Agency of any changes in the plant system which are defined as 'safeguards relevant'; and reverification of the 'design information' by the Agency upon receiving notice from the Operator of any changes. 13 refs

  16. Material integrity verification radar

    International Nuclear Information System (INIS)

    Koppenjan, S.K.

    1999-01-01

    The International Atomic Energy Agency (IAEA) has the need for verification of 'as-built' spent fuel-dry storage containers and other concrete structures. The IAEA has tasked the Special Technologies Laboratory (STL) to fabricate, test, and deploy a stepped-frequency Material Integrity Verification Radar (MIVR) system to nondestructively verify the internal construction of these containers. The MIVR system is based on previously deployed high-frequency, ground penetrating radar (GPR) systems that have been developed by STL for the U.S. Department of Energy (DOE). Whereas GPR technology utilizes microwave radio frequency energy to create subsurface images, MIVR is a variation for which the medium is concrete instead of soil. The purpose is to nondestructively verify the placement of concrete-reinforcing materials, pipes, inner liners, and other attributes of the internal construction. The MIVR system underwent an initial field test on CANDU reactor spent fuel storage canisters at Atomic Energy of Canada Limited (AECL), Chalk River Laboratories, Ontario, Canada, in October 1995. A second field test at the Embalse Nuclear Power Plant in Embalse, Argentina, was completed in May 1996. The DOE GPR also was demonstrated at the site. Data collection and analysis were performed for the Argentine National Board of Nuclear Regulation (ENREN). IAEA and the Brazilian-Argentine Agency for the Control and Accounting of Nuclear Material (ABACC) personnel were present as observers during the test. Reinforcing materials were evident in the color, two-dimensional images produced by the MIVR system. A continuous pattern of reinforcing bars was evident and accurate estimates on the spacing, depth, and size were made. The potential uses for safeguard applications were jointly discussed. The MIVR system, as successfully demonstrated in the two field tests, can be used as a design verification tool for IAEA safeguards. A deployment of MIVR for Design Information Questionnaire (DIQ
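A stepped-frequency radar of this kind builds a range profile by sweeping discrete frequencies and inverse-Fourier-transforming the complex returns. The sketch below uses illustrative parameters, not those of the MIVR system, and recovers the range of a single simulated reflector (such as a reinforcing bar):

```python
import numpy as np

c = 3e8                       # propagation speed (vacuum value; slower in concrete)
f0, df, N = 1e9, 10e6, 128    # start frequency, frequency step, number of steps (illustrative)
R_true = 1.2                  # range of a single reflector, metres

freqs = f0 + df * np.arange(N)
echo = np.exp(-1j * 4 * np.pi * freqs * R_true / c)  # two-way phase at each frequency

profile = np.abs(np.fft.ifft(echo))   # range profile via inverse FFT
dr = c / (2 * N * df)                 # range-bin spacing (range resolution)
R_est = int(np.argmax(profile)) * dr  # peak bin -> estimated range
```

The estimate is quantized to the bin spacing `c/(2*N*df)`; a real system trades bandwidth (`N*df`) against resolution and unambiguous range.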

  17. Specification and Automated Verification of Real-Time Behaviour

    DEFF Research Database (Denmark)

    Kristensen, C.H.; Andersen, J.H.; Skou, A.

    1995-01-01

    In this paper we sketch a method for specification and automatic verification of real-time software properties.

  18. Specification and Automated Verification of Real-Time Behaviour

    DEFF Research Database (Denmark)

    Andersen, J.H.; Kristensen, C.H.; Skou, A.

    1996-01-01

    In this paper we sketch a method for specification and automatic verification of real-time software properties.

  19. Environmental Technology Verification: Biological Inactivation Efficiency by HVAC In-Duct Ultraviolet Light Systems--American Ultraviolet Corporation, DC24-6-120 [EPA600etv08005

    Science.gov (United States)

    The Air Pollution Control Technology Verification Center (APCT Center) is operated by RTI International (RTI), in cooperation with EPA's National Risk Management Research Laboratory. The APCT Center conducts verifications of technologies that clean air in ventilation systems, inc...

  20. Formal Verification of Continuous Systems

    DEFF Research Database (Denmark)

    Sloth, Christoffer

    2012-01-01

    and the verification procedures should be algorithmically synthesizable. Autonomous control plays an important role in many safety-critical systems. This implies that a malfunction in the control system can have catastrophic consequences, e.g., in space applications where a design flaw can result in large economic losses. Furthermore, a malfunction in the control system of a surgical robot may cause death of patients. The previous examples involve complex systems that are required to operate according to complex specifications. The systems cannot be formally verified by modern verification techniques, due...

  1. Biometric Technologies and Verification Systems

    CERN Document Server

    Vacca, John R

    2007-01-01

    Biometric Technologies and Verification Systems is organized into nine parts composed of 30 chapters, including an extensive glossary of biometric terms and acronyms. It discusses the current state-of-the-art in biometric verification/authentication, identification and system design principles. It also provides a step-by-step discussion of how biometrics works; how biometric data in human beings can be collected and analyzed in a number of ways; how biometrics are currently being used as a method of personal identification in which people are recognized by their own unique corporal or behavior

  2. Runtime Verification Through Forward Chaining

    Directory of Open Access Journals (Sweden)

    Alan Perotti

    2014-12-01

    Full Text Available In this paper we present a novel rule-based approach for Runtime Verification of FLTL properties over finite but expanding traces. Our system exploits Horn clauses in implication form and relies on a forward chaining-based monitoring algorithm. This approach avoids the branching structure and exponential complexity typical of tableaux-based formulations, creating monitors with a single state and a fixed number of rules. This allows for a fast and scalable tool for Runtime Verification: we present the technical details together with a working implementation.
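The single-state, fixed-rule monitor described above can be sketched for one concrete property. The rule syntax and property below are our own illustration, not the paper's system: each `request` forward-chains to a pending obligation, and a matching `grant` discharges it.

```python
def monitor(trace):
    """Forward-chaining monitor for 'every request is followed by a grant'.

    Returns True/False once the trace is closed by 'end', or None while the
    trace is still expanding (no verdict yet).
    """
    obligations = set()  # single-state monitor: the set of pending obligations
    for event in trace:
        if event == "request":
            obligations.add("grant")       # rule: request => obliged(grant)
        elif event in obligations:
            obligations.discard(event)     # rule: grant discharges the obligation
        elif event == "end":
            return not obligations         # verdict at the end of the finite trace
    return None
```

Because the monitor only updates a fact set, its per-event cost is constant, avoiding the exponential branching of tableaux-based constructions.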

  3. Current status of verification practices in clinical biochemistry in Spain.

    Science.gov (United States)

    Gómez-Rioja, Rubén; Alvarez, Virtudes; Ventura, Montserrat; Alsina, M Jesús; Barba, Núria; Cortés, Mariano; Llopis, María Antonia; Martínez, Cecilia; Ibarz, Mercè

    2013-09-01

    Verification uses logical algorithms to detect potential errors before laboratory results are released to the clinician. Even though verification is one of the main processes in all laboratories, there is a lack of standardization, mainly in the algorithms used and the criteria and verification limits applied. A survey of clinical laboratories in Spain was conducted in order to assess the verification process, particularly the use of autoverification. Questionnaires were sent to the laboratories involved in the External Quality Assurance Program organized by the Spanish Society of Clinical Biochemistry and Molecular Pathology. Seven common biochemical parameters were included (glucose, cholesterol, triglycerides, creatinine, potassium, calcium, and alanine aminotransferase). Completed questionnaires were received from 85 laboratories. Nearly all the laboratories reported using the following seven verification criteria: internal quality control, instrument warnings, sample deterioration, reference limits, clinical data, concordance between parameters, and verification of results. The use of all verification criteria varied according to the type of verification (automatic, technical, or medical). Verification limits for these parameters are similar to biological reference ranges. Delta check was used in 24% of laboratories. Most laboratories (64%) reported using autoverification systems. Autoverification use was related to laboratory size, ownership, and type of laboratory information system, but amount of use (percentage of tests autoverified) was not related to laboratory size. A total of 36% of Spanish laboratories do not use autoverification, despite the general implementation of laboratory information systems, most of which have autoverification capability. Criteria and rules for seven routine biochemical tests were obtained.
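An autoverification rule set of the kind surveyed combines verification limits with a delta check against the patient's previous result. The limits and delta threshold below are illustrative assumptions, not the survey's values; real laboratories tune them per analyte, instrument and population.

```python
# Illustrative verification limits (not the survey's values).
VERIFY_LIMITS = {
    "glucose":   (50, 250),   # mg/dL
    "potassium": (2.8, 6.0),  # mmol/L
}

def autoverify(analyte, value, previous=None, delta_limit=0.25):
    """Release a result automatically or flag it for manual (technical/medical) review."""
    lo, hi = VERIFY_LIMITS[analyte]
    if not (lo <= value <= hi):
        return "manual review: outside verification limits"
    # Delta check: flag a large relative change from the patient's previous result.
    if previous is not None and abs(value - previous) / previous > delta_limit:
        return "manual review: delta check failed"
    return "release"
```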

  4. SUPERFUND TREATABILITY CLEARINGHOUSE: FULL SCALE ROTARY KILN INCINERATOR FIELD TRIAL: PHASE I, VERIFICATION TRIAL BURN ON DIOXIN/HERBICIDE ORANGE CONTAMINATED SOIL

    Science.gov (United States)

    This treatability study reports the results of one of a series of field trials using various remedial action technologies that may be capable of restoring Herbicide Orange (HO)/dioxin contaminated sites. A full-scale field trial using a rotary kiln incinerator capable of pro...

  5. Production of plastic scintillation survey meter for clearance verification measurement

    International Nuclear Information System (INIS)

    Tachibana, Mitsuo; Shiraishi, Kunio; Ishigami, Tsutomu; Tomii, Hiroyuki

    2008-03-01

    In the Nuclear Science Research Institute, the decommissioning of various nuclear facilities is carried out according to the plan for meeting the midterm goal of the Japan Atomic Energy Agency (JAEA). An increase in clearance verification measurements of concrete on buildings and in radiation measurements for releasing controlled areas is expected along with the dismantlement of nuclear facilities in the future. Radiation measurements for releasing controlled areas have been carried out in small-scale nuclear facilities including the JPDR (Japan Power Demonstration Reactor). However, radiation measurement with an existing measuring device was difficult because of the effects of radiation from radioactive materials remaining in buried piping. On the other hand, the JAEA has no experience in performing clearance verification measurements. The generation of a large amount of clearance objects is expected along with the decommissioning of nuclear facilities in the future. The plastic scintillation survey meter (hereafter, 'PL measuring device') was produced to apply to clearance verification measurements and radiation measurements for releasing controlled areas. Basic characteristic tests and actual tests were carried out using the PL measuring device. As a result of these tests, it was found that the radioactivity evaluated with the PL measuring device was of accuracy equal to that of the existing measuring device. The PL measuring device offers the features of the existing measuring device with light weight and easy operability, and can also correct for gamma rays. The PL measuring device is effective for clearance verification measurements of concrete on buildings and radiation measurements for releasing controlled areas. (author)

  6. Guidance and Control Software Project Data - Volume 3: Verification Documents

    Science.gov (United States)

    Hayhurst, Kelly J. (Editor)

    2008-01-01

    The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the verification documents from the GCS project. Volume 3 contains four appendices: A. Software Verification Cases and Procedures for the Guidance and Control Software Project; B. Software Verification Results for the Pluto Implementation of the Guidance and Control Software; C. Review Records for the Pluto Implementation of the Guidance and Control Software; and D. Test Results Logs for the Pluto Implementation of the Guidance and Control Software.

  7. Complementary technologies for verification of excess plutonium

    International Nuclear Information System (INIS)

    Langner, D.G.; Nicholas, N.J.; Ensslin, N.; Fearey, B.L.; Mitchell, D.J.; Marlow, K.W.; Luke, S.J.; Gosnell, T.B.

    1998-01-01

    Three complementary measurement technologies have been identified as candidates for use in the verification of excess plutonium of weapons origin. These technologies, high-resolution gamma-ray spectroscopy, neutron multiplicity counting, and low-resolution gamma-ray spectroscopy, are mature, robust technologies. The high-resolution gamma-ray system, Pu-600, uses the 630--670 keV region of the emitted gamma-ray spectrum to determine the ratio of 240Pu to 239Pu. It is useful in verifying the presence of plutonium and the presence of weapons-grade plutonium. Neutron multiplicity counting is well suited for verifying that the plutonium is of a safeguardable quantity and is weapons-quality material, as opposed to residue or waste. In addition, multiplicity counting can independently verify the presence of plutonium by virtue of a measured neutron self-multiplication and can detect the presence of non-plutonium neutron sources. The low-resolution gamma-ray spectroscopic technique is a template method that can provide continuity of knowledge that an item that enters a verification regime remains under the regime. In the initial verification of an item, multiple regions of the measured low-resolution spectrum form a unique, gamma-radiation-based template for the item that can be used for comparison in subsequent verifications. In this paper the authors discuss these technologies as they relate to the different attributes that could be used in a verification regime.
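
    The template comparison described above can be sketched in a few lines. This is a hedged illustration only: the spectral region boundaries, the normalization to total counts, and the 5% tolerance are invented for the example and are not parameters of the Pu-600 or template systems described in the paper.

    ```python
    import numpy as np

    # Sketch of the low-resolution "template" idea: summed counts in a few
    # spectral regions, recorded at initial verification, form a fingerprint
    # that later measurements are checked against region by region.
    # Region bounds (channel ranges) and tolerance are assumptions.
    REGIONS = [(50, 150), (150, 300), (300, 450)]

    def template(spectrum):
        """Summed counts per region, normalized to their total."""
        sums = np.array([spectrum[a:b].sum() for a, b in REGIONS], dtype=float)
        return sums / sums.sum()

    def matches(reference, spectrum, tol=0.05):
        """True if every region fraction agrees with the reference within tol."""
        return bool(np.all(np.abs(template(spectrum) - reference) <= tol))

    rng = np.random.default_rng(0)
    item = rng.poisson(1000.0, size=512).astype(float)   # initial measurement
    ref = template(item)                                 # stored fingerprint

    revisit = item + rng.normal(0.0, 10.0, size=512)     # same item, noise only
    altered = item.copy()
    altered[300:450] *= 2.0                              # material has changed

    assert matches(ref, revisit)      # continuity of knowledge maintained
    assert not matches(ref, altered)  # template comparison flags the change
    ```

    Normalizing to total counts makes the comparison insensitive to overall count-rate drift, so only a change in the spectral *shape* triggers a mismatch.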

  8. A DVE Time Management Simulation and Verification Platform Based on Causality Consistency Middleware

    Science.gov (United States)

    Zhou, Hangjun; Zhang, Wei; Peng, Yuxing; Li, Sikun

    During the course of designing a time management algorithm for DVEs, researchers often become inefficient because they are distracted by implementing the trivial but fundamental details of simulation and verification. A platform that already realizes these details is therefore desirable; however, to our knowledge this has not been achieved in any published work. In this paper, we are the first to design and realize a DVE time management simulation and verification platform that provides exactly the same interfaces as those defined by the HLA Interface Specification. Moreover, our platform is based on a newly designed causality consistency middleware and can offer a comparison of three kinds of time management services: CO, RO and TSO. The experimental results show that the implementation of the platform incurs only a small overhead, and that its efficient performance allows researchers to focus solely on improving their algorithm designs.

  9. A Synthesized Framework for Formal Verification of Computing Systems

    Directory of Open Access Journals (Sweden)

    Nikola Bogunovic

    2003-12-01

    The design process of computing systems has gradually evolved to a level that encompasses formal verification techniques. However, the integration of formal verification techniques into a methodical design procedure carries many inherent miscomprehensions and problems. The paper explicates the discrepancy between the real system implementation and the abstracted model that is actually used in the formal verification procedure. Particular attention is paid to the seamless integration of all phases of the verification procedure, encompassing the definition of the specification language and the denotation and execution of a conformance relation between the abstracted model and its intended behavior. The concealed obstacles are exposed, computationally expensive steps identified and possible improvements proposed.

  10. Research design considerations for chronic pain prevention clinical trials: IMMPACT recommendations.

    Science.gov (United States)

    Gewandter, Jennifer S; Dworkin, Robert H; Turk, Dennis C; Farrar, John T; Fillingim, Roger B; Gilron, Ian; Markman, John D; Oaklander, Anne Louise; Polydefkis, Michael J; Raja, Srinivasa N; Robinson, James P; Woolf, Clifford J; Ziegler, Dan; Ashburn, Michael A; Burke, Laurie B; Cowan, Penney; George, Steven Z; Goli, Veeraindar; Graff, Ole X; Iyengar, Smriti; Jay, Gary W; Katz, Joel; Kehlet, Henrik; Kitt, Rachel A; Kopecky, Ernest A; Malamut, Richard; McDermott, Michael P; Palmer, Pamela; Rappaport, Bob A; Rauschkolb, Christine; Steigerwald, Ilona; Tobias, Jeffrey; Walco, Gary A

    2015-07-01

    Although certain risk factors can identify individuals who are most likely to develop chronic pain, few interventions to prevent chronic pain have been identified. To facilitate the identification of preventive interventions, an IMMPACT meeting was convened to discuss research design considerations for clinical trials investigating the prevention of chronic pain. We present general design considerations for prevention trials in populations that are at relatively high risk for developing chronic pain. Specific design considerations included subject identification, timing and duration of treatment, outcomes, timing of assessment, and adjusting for risk factors in the analyses. We provide a detailed examination of 4 models of chronic pain prevention (ie, chronic postsurgical pain, postherpetic neuralgia, chronic low back pain, and painful chemotherapy-induced peripheral neuropathy). The issues discussed can, in many instances, be extrapolated to other chronic pain conditions. These examples were selected because they are representative models of primary and secondary prevention, reflect persistent pain resulting from multiple insults (ie, surgery, viral infection, injury, and toxic or noxious element exposure), and are chronically painful conditions that are treated with a range of interventions. Improvements in the design of chronic pain prevention trials could improve assay sensitivity and thus accelerate the identification of efficacious interventions. Such interventions would have the potential to reduce the prevalence of chronic pain in the population. Additionally, standardization of outcomes in prevention clinical trials will facilitate meta-analyses and systematic reviews and improve detection of preventive strategies emerging from clinical trials.

  11. Independent verification in operations at nuclear power plants

    International Nuclear Information System (INIS)

    Donderi, D.C.; Smiley, A.; Ostry, D.J.; Moray, N.P.

    1995-09-01

    A critical review of approaches to independent verification in operations, as used in nuclear power plant quality assurance programs in other countries, was conducted for this study. This report identifies the uses of independent verification and provides an assessment of the effectiveness of the various approaches. The findings indicate that at Canadian nuclear power plants as much, if not more, independent verification is performed than at power plants in the other countries included in the study. Additional requirements in this area are not proposed for Canadian stations. (author)

  12. Determining the Accuracy of Crowdsourced Tweet Verification for Auroral Research

    Directory of Open Access Journals (Sweden)

    Nathan A. Case

    2016-12-01

    The Aurorasaurus project harnesses volunteer crowdsourcing to identify sightings of an aurora (the “northern/southern lights”) posted by citizen scientists on Twitter. Previous studies have demonstrated that aurora sightings can be mined from Twitter, with the caveat that there is a large background level of non-sighting tweets, especially during periods of low auroral activity. Aurorasaurus attempts to mitigate this, and thus increase the quality of its Twitter sighting data, by using volunteers to sift through a pre-filtered list of geolocated tweets to verify real-time aurora sightings. In this study, the current implementation of this crowdsourced verification system, including the process of geolocating tweets, is described and its accuracy (which, overall, is found to be 68.4%) is determined. The findings suggest that citizen science volunteers are able to accurately filter out unrelated, spam-like Twitter data but struggle when filtering out somewhat related, yet undesired, data. The citizen scientists particularly struggle with determining the real-time nature of the sightings, so care must be taken when relying on crowdsourced identification.
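
    As a minimal sketch of how an overall verification accuracy figure like the 68.4% above is obtained, volunteer classifications can be scored against expert ground truth. The ten labels below are invented for illustration; they are not Aurorasaurus data.

    ```python
    # Volunteer labels for candidate tweets vs. expert ground truth (invented).
    volunteer = ["aurora", "not", "aurora", "not", "aurora",
                 "not", "not", "aurora", "not", "not"]
    truth     = ["aurora", "not", "not", "not", "aurora",
                 "aurora", "not", "aurora", "not", "not"]

    # Accuracy = fraction of candidates the volunteers classified correctly.
    correct = sum(v == t for v, t in zip(volunteer, truth))
    accuracy = correct / len(truth)
    print(f"accuracy = {accuracy:.1%}")  # 8 of 10 correct -> 80.0%
    ```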

  13. Sequential, Multiple Assignment, Randomized Trial Designs in Immuno-oncology Research.

    Science.gov (United States)

    Kidwell, Kelley M; Postow, Michael A; Panageas, Katherine S

    2018-02-15

    Clinical trials investigating immune checkpoint inhibitors have led to the approval of anti-CTLA-4 (cytotoxic T-lymphocyte antigen-4), anti-PD-1 (programmed death-1), and anti-PD-L1 (PD-ligand 1) drugs by the FDA for numerous tumor types. In the treatment of metastatic melanoma, combinations of checkpoint inhibitors are more effective than single-agent inhibitors, but combination immunotherapy is associated with increased frequency and severity of toxicity. There are questions about the use of combination immunotherapy or single-agent anti-PD-1 as initial therapy and the number of doses of either approach required to sustain a response. In this article, we describe a novel use of sequential, multiple assignment, randomized trial (SMART) design to evaluate immune checkpoint inhibitors to find treatment regimens that adapt within an individual based on intermediate response and lead to the longest overall survival. We provide a hypothetical example SMART design for BRAF wild-type metastatic melanoma as a framework for investigating immunotherapy treatment regimens. We compare implementing a SMART design to implementing multiple traditional randomized clinical trials. We illustrate the benefits of a SMART over traditional trial designs and acknowledge the complexity of a SMART. SMART designs may be an optimal way to find treatment strategies that yield durable response, longer survival, and lower toxicity. Clin Cancer Res; 24(4); 730-6. ©2017 AACR.

  14. South African Research Ethics Committee Review of Standards of Prevention in HIV Vaccine Trial Protocols.

    Science.gov (United States)

    Essack, Zaynab; Wassenaar, Douglas R

    2018-04-01

    HIV prevention trials provide a prevention package to participants to help prevent HIV acquisition. As new prevention methods are proven effective, this raises ethical and scientific design complexities regarding the prevention package or standard of prevention. Given its high HIV incidence and prevalence, South Africa has become a hub for HIV prevention research. For this reason, it is critical to study the implementation of relevant ethical-legal frameworks for such research in South Africa. This qualitative study used in-depth interviews to explore the practices and perspectives of eight members of South African research ethics committees (RECs) who have reviewed protocols for HIV vaccine trials. Their practices and perspectives are compared with ethics guideline requirements for standards of prevention.

  15. Effectiveness in practice-based research: Looking for alternatives to the randomized controlled trial (RCT)

    NARCIS (Netherlands)

    Tavecchio, L.

    2015-01-01

    Over the last decade, the status of the randomized controlled trial (RCT), hallmark of evidence-based medicine (research), has been growing strongly in general practice, social work and public health. But this type of research is only practicable under strictly controlled and well-defined settings

  16. VERIFICATION OF GEAR DYNAMIC MODEL IN DIFFERENT OPERATING CONDITIONS

    Directory of Open Access Journals (Sweden)

    Grzegorz PERUŃ

    2014-09-01

    The article presents the results of verification of a dynamic model of a drive system with gear. Tests were carried out on the real object under different operating conditions, and simulation studies were carried out for the same assumed conditions. Comparison of the results obtained from these two series of tests helped determine the suitability of the model and verify the possibility of replacing experimental research with simulations using the dynamic model.

  17. Nuclear Data Verification and Standardization

    Energy Technology Data Exchange (ETDEWEB)

    Karam, Lisa R.; Arif, Muhammad; Thompson, Alan K.

    2011-10-01

    The objective of this interagency program is to provide accurate neutron interaction verification and standardization data for the U.S. Department of Energy Division of Nuclear Physics programs, which include astrophysics, radioactive beam studies, and heavy-ion reactions. The measurements made in this program are also useful to other programs that indirectly use the unique properties of the neutron for diagnostic and analytical purposes. These include homeland security, personnel health and safety, nuclear waste disposal, treaty verification, national defense, and nuclear based energy production. The work includes the verification of reference standard cross sections and related neutron data, employing the unique facilities and capabilities at NIST and other laboratories as required; leadership and participation in international intercomparisons and collaborations; and the preservation of standard reference deposits. An essential element of the program is critical evaluation of neutron interaction data standards, including international coordination. Testing of critical data for important applications is included. The program is jointly supported by the Department of Energy and the National Institute of Standards and Technology.

  18. Verification and quality control of routine hematology analyzers

    NARCIS (Netherlands)

    Vis, J Y; Huisman, A

    2016-01-01

    Verification of hematology analyzers (automated blood cell counters) is mandatory before new hematology analyzers may be used in routine clinical care. The verification process consists of several items, comprising among others precision, accuracy, comparability, carryover, background and

  19. Solid waste operations complex engineering verification program plan

    International Nuclear Information System (INIS)

    Bergeson, C.L.

    1994-01-01

    This plan supersedes, but does not replace, the previous Waste Receiving and Processing/Solid Waste Engineering Development Program Plan. In doing this, it does not repeat the basic definitions of the various types or classes of development activities nor provide the rigorous written description of each facility and assign the equipment to development classes. The methodology described in the previous document is still valid and was used to determine the types of verification efforts required. This Engineering Verification Program Plan will be updated on a yearly basis. This EVPP provides programmatic definition of all engineering verification activities for the following SWOC projects: (1) Project W-026 - Waste Receiving and Processing Facility Module 1; (2) Project W-100 - Waste Receiving and Processing Facility Module 2A; (3) Project W-112 - Phase V Storage Facility; and (4) Project W-113 - Solid Waste Retrieval. No engineering verification activities are defined for Project W-112 as no verification work was identified. The Acceptance Test Procedures/Operational Test Procedures will be part of each project's Title III operation test efforts. The ATPs/OTPs are not covered by this EVPP

  20. Exploring implementation practices in results-based financing: the case of the verification in Benin.

    Science.gov (United States)

    Antony, Matthieu; Bertone, Maria Paola; Barthes, Olivier

    2017-03-14

    Results-based financing (RBF) has been introduced in many countries across Africa, and a growing literature is building around the assessment of its impact. These studies are usually quantitative and often silent on the paths and processes through which results are achieved and on the wider health system effects of RBF. To address this gap, our study aims at exploring the implementation of an RBF pilot in Benin, focusing on the verification of results. The study is based on action research carried out by authors involved in the pilot as part of the agency supporting the RBF implementation in Benin. While our participant observation and operational collaboration with the project's stakeholders informed the study, the analysis is mostly based on quantitative and qualitative secondary data, collected throughout the project's implementation and documentation processes. Data include project documents, reports and budgets, RBF data on service outputs and on the outcome of the verification, daily activity timesheets of the technical assistants in the districts, as well as focus groups with Community-based Organizations and informal interviews with technical assistants and district medical officers. Our analysis focuses on the actual practices of quantitative, qualitative and community verification. Results show that the verification processes are complex, costly and time-consuming, and in practice they end up differing from what was designed originally. We explore the consequences of this for the operation of the scheme and for its potential to generate the envisaged change. We find, for example, that the time taken up by verification procedures limits the time available for data analysis and feedback to facility staff, thus limiting the potential to improve service delivery. Verification challenges also result in delays in bonus payment, which delink effort and reward. Additionally, the limited integration of the verification activities of district teams with their routine tasks

  1. 21 CFR 21.44 - Verification of identity.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Verification of identity. 21.44 Section 21.44 Food... Verification of identity. (a) An individual seeking access to records in a Privacy Act Record System may be... identity. The identification required shall be suitable considering the nature of the records sought. No...

  2. Participant recruitment and motivation for participation in optical technology for cervical cancer screening research trials.

    Science.gov (United States)

    Shuhatovich, Olga M; Sharman, Mathilde P; Mirabal, Yvette N; Earle, Nan R; Follen, Michele; Basen-Engquist, Karen

    2005-12-01

    In order to improve recruitment for cervical cancer screening trials, it is necessary to analyze the effectiveness of recruitment strategies used in current trials. A trial to test optical spectroscopy for the diagnosis of cervical neoplasia recruited 1000 women from the community; the trial evaluated the emerging technology against Pap smears and colposcopically directed biopsies for cervical dysplasia. We have examined women's reasons for participating as well as the effectiveness and efficiency of each recruitment strategy. Reasons for participation were identified and compared between trials. The recruitment method that resulted in the most contacts was newspaper reportorial coverage and advertising, followed by family and friends, then television news coverage. The most cost-effective method for finding eligible women who attend the research appointment is word of mouth from a family member or friend. Recommendations are given for maximizing the efficiency of recruitment for cervical cancer screening trials.

  3. Autonomic networking-on-chip bio-inspired specification, development, and verification

    CERN Document Server

    Cong-Vinh, Phan

    2011-01-01

    Despite the growing mainstream importance and unique advantages of autonomic networking-on-chip (ANoC) technology, Autonomic Networking-On-Chip: Bio-Inspired Specification, Development, and Verification is among the first books to evaluate research results on formalizing this emerging NoC paradigm, which was inspired by the human nervous system. The FIRST Book to Assess Research Results, Opportunities, & Trends in "BioChipNets". The third book in the Embedded Multi-Core Systems series from CRC Press, this is an advanced technical guide and reference composed of contributions from prominent researchers

  4. A methodology for the rigorous verification of plasma simulation codes

    Science.gov (United States)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: the verification, which is a mathematical issue targeted to assess that the physical model is correctly solved, and the validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on the verification, which in turn is composed of the code verification, targeted to assess that a physical model is correctly implemented in a simulation code, and the solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate the plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
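
    The two verification ingredients named above can be illustrated on a toy problem. This is a hedged sketch, not the GBS procedure: the manufactured solution u(x) = sin(x), the plain second-order central-difference operator, and the grid sizes are all chosen for the example.

    ```python
    import numpy as np

    # (1) Method of manufactured solutions: pick u(x) = sin(x), derive the
    #     exact source f = -u'' = sin(x), and check the discrete operator
    #     against it on successively refined grids.
    # (2) Richardson-style order estimate: the observed order of accuracy
    #     follows from the error ratio between two grid resolutions.

    def error_norm(n):
        """Max-norm error of the discrete -u'' against the manufactured source."""
        x = np.linspace(0.0, np.pi, n)
        h = x[1] - x[0]
        u = np.sin(x)
        lap = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2   # central-difference u''
        f_exact = np.sin(x[1:-1])                        # manufactured source
        return np.max(np.abs(-lap - f_exact))

    e1, e2 = error_norm(65), error_norm(129)   # halving the grid spacing
    observed_order = np.log2(e1 / e2)
    print(f"observed order = {observed_order:.2f}")  # ~2 for a 2nd-order scheme
    ```

    If a coding error degraded the discretization, the observed order would fall below the formal order of the scheme, which is exactly the signal code verification looks for.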

  5. Verification and Performance Analysis for Embedded Systems

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2009-01-01

    This talk provides a thorough tutorial of the UPPAAL tool suite for modeling, simulation, verification, optimal scheduling, synthesis, testing and performance analysis of embedded and real-time systems.

  6. Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation & Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Tsao, Jeffrey Y. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trucano, Timothy G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kleban, Stephen D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Naugle, Asmeret Bier [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Johnson, Curtis M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Mark A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Flanagan, Tatiana Paz [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vugrin, Eric D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gabert, Kasimir Georg [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lave, Matthew Samuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Wei [Northwestern Univ., Evanston, IL (United States); DeLaurentis, Daniel [Purdue Univ., West Lafayette, IN (United States); Hubler, Alfred [Univ. of Illinois, Urbana, IL (United States); Oberkampf, Bill [WLO Consulting, Austin, TX (United States)

    2016-08-01

    This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016 on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection between two important Sandia communities: the complex systems (CS) modeling community, and the verification, validation and uncertainty quantification (VVUQ) community. The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledge gained from complex systems models, knowledge that is often incomplete and interim, but will nonetheless be used, sometimes in real-time, by decision makers?

  7. Tolerance Verification of Micro and Nano Structures on Polycarbonate Substrates

    DEFF Research Database (Denmark)

    Gasparin, Stefania; Tosello, Guido; Hansen, Hans Nørgaard

    2010-01-01

    Micro and nano structures are an increasing challenge in terms of tolerance verification and process quality control: smaller dimensions lead to a smaller tolerance zone to be evaluated. This paper focuses on the verification of CD, DVD and HD-DVD nanoscale features. CD tolerance features are defi

  8. When referring physicians and researchers disagree on equipoise : the TOTAL trial experience

    NARCIS (Netherlands)

    Rodrigues, H. C. M. L.; Deprest, J.; v. d. Berg, P. P.

    Objective: In this article, we reflect on whether randomized controlled trials (RCTs) are adequate for the clinical evaluation of maternal-fetal surgery for congenital diaphragmatic hernia (CDH), focusing on the role of patients' preferences in the setting up of research protocols, on the requirement

  9. SU-F-T-268: A Feasibility Study of Independent Dose Verification for Vero4DRT

    International Nuclear Information System (INIS)

    Yamashita, M; Kokubo, M; Takahashi, R; Takayama, K; Tanabe, H; Sueoka, M; Okuuchi, N; Ishii, M; Iwamoto, Y; Tachibana, H

    2016-01-01

    Purpose: Vero4DRT (Mitsubishi Heavy Industries Ltd.) was released a few years ago. The treatment planning system (TPS) of Vero4DRT is dedicated, so measurement has been the only method of dose verification. There have been no reports of independent dose verification using a Clarkson-based algorithm for Vero4DRT. An independent dose verification software program for general-purpose linacs, using a modified Clarkson-based algorithm, was adapted for Vero4DRT. In this study, we evaluated the accuracy of the independent dose verification program and the feasibility of the secondary check for Vero4DRT. Methods: iPlan (Brainlab AG) was used as the TPS. Pencil Beam Convolution was used as the dose calculation algorithm for IMRT, and X-ray Voxel Monte Carlo was used for the others. Simple MU Analysis (SMU, Triangle Products, Japan) was used as the independent dose verification software program, in which CT-based dose calculation was performed using a modified Clarkson-based algorithm. In this study, 120 patients’ treatment plans were collected in our institute. The treatments were performed using conventional irradiation for lung and prostate, SBRT for lung, and step-and-shoot IMRT for prostate. Doses calculated by the TPS and the SMU were compared, and confidence limits (CLs, mean ± 2SD %) were compared to those from the general-purpose linac. Results: The CLs for conventional irradiation (lung, prostate), SBRT (lung) and IMRT (prostate) were 2.2 ± 3.5% (CL of the general-purpose linac: 2.4 ± 5.3%), 1.1 ± 1.7% (−0.3 ± 2.0%), 4.8 ± 3.7% (5.4 ± 5.3%) and −0.5 ± 2.5% (−0.1 ± 3.6%), respectively. The CLs for Vero4DRT are similar to those for the general-purpose linac. Conclusion: Independent dose verification for the new linac is clinically available as a secondary check, and we performed the check with a tolerance level similar to that of the general-purpose linac. This research is partially supported by Japan Agency for Medical Research and

  10. SU-F-T-268: A Feasibility Study of Independent Dose Verification for Vero4DRT

    Energy Technology Data Exchange (ETDEWEB)

    Yamashita, M; Kokubo, M [Kobe City Medical Center General Hospital, Kobe, Hyogo (Japan); Institute of Biomedical Research and Innovation, Kobe, Hyogo (Japan); Takahashi, R [Cancer Institute Hospital of Japanese Foundation for Cancer Research, Koto, Tokyo (Japan); Takayama, K [Institute of Biomedical Research and Innovation, Kobe, Hyogo (Japan); Kobe City Medical Center General Hospital, Kobe, Hyogo (Japan); Tanabe, H; Sueoka, M; Okuuchi, N [Institute of Biomedical Research and Innovation, Kobe, Hyogo (Japan); Ishii, M; Iwamoto, Y [Kobe City Medical Center General Hospital, Kobe, Hyogo (Japan); Tachibana, H [National Cancer Center, Kashiwa, Chiba (Japan)

    2016-06-15

    Purpose: Vero4DRT (Mitsubishi Heavy Industries Ltd.) was released a few years ago. The treatment planning system (TPS) of Vero4DRT is dedicated, so measurement has been the only method of dose verification. There have been no reports of independent dose verification using a Clarkson-based algorithm for Vero4DRT. An independent dose verification software program for general-purpose linacs, using a modified Clarkson-based algorithm, was adapted for Vero4DRT. In this study, we evaluated the accuracy of the independent dose verification program and the feasibility of the secondary check for Vero4DRT. Methods: iPlan (Brainlab AG) was used as the TPS. Pencil Beam Convolution was used as the dose calculation algorithm for IMRT, and X-ray Voxel Monte Carlo was used for the others. Simple MU Analysis (SMU, Triangle Products, Japan) was used as the independent dose verification software program, in which CT-based dose calculation was performed using a modified Clarkson-based algorithm. In this study, 120 patients’ treatment plans were collected in our institute. The treatments were performed using conventional irradiation for lung and prostate, SBRT for lung, and step-and-shoot IMRT for prostate. Doses calculated by the TPS and the SMU were compared, and confidence limits (CLs, mean ± 2SD %) were compared to those from the general-purpose linac. Results: The CLs for conventional irradiation (lung, prostate), SBRT (lung) and IMRT (prostate) were 2.2 ± 3.5% (CL of the general-purpose linac: 2.4 ± 5.3%), 1.1 ± 1.7% (−0.3 ± 2.0%), 4.8 ± 3.7% (5.4 ± 5.3%) and −0.5 ± 2.5% (−0.1 ± 3.6%), respectively. The CLs for Vero4DRT are similar to those for the general-purpose linac. Conclusion: Independent dose verification for the new linac is clinically available as a secondary check, and we performed the check with a tolerance level similar to that of the general-purpose linac. This research is partially supported by Japan Agency for Medical Research and
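
    The confidence-limit statistic used in this record (CL = mean ± 2SD of the per-plan percentage dose difference between the TPS and the independent calculation) can be sketched as follows. The dose values below are randomly generated stand-ins with an assumed ~1% systematic offset and ~2% spread, not data from the study.

    ```python
    import numpy as np

    # Hypothetical per-plan doses for 120 plans: TPS result vs. an
    # independent Clarkson-style recalculation (all values invented).
    rng = np.random.default_rng(1)
    tps_dose = rng.normal(200.0, 20.0, size=120)              # TPS dose per plan
    indep_dose = tps_dose * rng.normal(1.01, 0.02, size=120)  # independent check

    # Per-plan percentage difference, then CL = mean +/- 2SD.
    diff_pct = 100.0 * (indep_dose - tps_dose) / tps_dose
    mean, sd = diff_pct.mean(), diff_pct.std(ddof=1)
    print(f"CL = {mean:.1f} % +/- {2 * sd:.1f} %")
    ```

    A plan whose difference falls outside the CL band would be flagged for manual review, which is the practical role of the secondary check.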

  11. Standard Verification System (SVS)

    Data.gov (United States)

    Social Security Administration — SVS is a mainframe program that accesses the NUMIDENT to perform SSN verifications. This program is called by SSA Internal applications to verify SSNs. There is also...

  12. Packaged low-level waste verification system

    International Nuclear Information System (INIS)

    Tuite, K.T.; Winberg, M.; Flores, A.Y.; Killian, E.W.; McIsaac, C.V.

    1996-01-01

    Currently, states and low-level radioactive waste (LLW) disposal site operators have no method of independently verifying the radionuclide content of packaged LLW that arrives at disposal sites for disposal. At this time, disposal sites rely on LLW generator shipping manifests and accompanying records to ensure that LLW received meets the waste acceptance criteria. An independent verification system would provide a method of checking generator LLW characterization methods and help ensure that LLW disposed of at disposal facilities meets requirements. The Mobile Low-Level Waste Verification System (MLLWVS) provides the equipment, software, and methods to enable the independent verification of LLW shipping records and to ensure that disposal site waste acceptance criteria are being met. The MLLWVS was developed under a cost-share subcontract between WMG, Inc., and Lockheed Martin Idaho Technologies through the Department of Energy's National Low-Level Waste Management Program at the Idaho National Engineering Laboratory (INEL)

  13. Groundwater flow code verification ''benchmarking'' activity (COVE-2A): Analysis of participants' work

    International Nuclear Information System (INIS)

    Dykhuizen, R.C.; Barnard, R.W.

    1992-02-01

    The Nuclear Waste Repository Technology Department at Sandia National Laboratories (SNL) is investigating the suitability of Yucca Mountain as a potential site for underground burial of nuclear wastes. One element of the investigations is to assess the potential long-term effects of groundwater flow on the integrity of a potential repository. A number of computer codes are being used to model groundwater flow through geologic media in which the potential repository would be located. These codes compute numerical solutions for problems that are usually analytically intractable. Consequently, independent confirmation of the correctness of the solution is often not possible. Code verification is a process that permits the determination of the numerical accuracy of codes by comparing the results of several numerical solutions for the same problem. The international nuclear waste research community uses benchmarking for intercomparisons that partially satisfy the Nuclear Regulatory Commission (NRC) definition of code verification. This report presents the results from the COVE-2A (Code Verification) project, which is a subset of the COVE project
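As a generic illustration of what code verification means in practice (this is not one of the COVE-2A codes), a numerical solver can be checked against a problem with a known analytic answer, confirming that the error shrinks at the method's expected rate as the discretization is refined:

```python
# Generic code-verification sketch: compare a numerical result against a
# known analytic solution at two resolutions and estimate the observed
# order of convergence. The integrand and grid sizes are illustrative only.
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid-rule approximation of the integral of f on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

exact = 2.0  # analytic value of the integral of sin(x) on [0, pi]
err_coarse = abs(trapezoid(math.sin, 0.0, math.pi, 32) - exact)
err_fine = abs(trapezoid(math.sin, 0.0, math.pi, 64) - exact)
order = math.log(err_coarse / err_fine, 2)  # trapezoid rule should give ~2
print(f"observed order of convergence: {order:.2f}")
```

When no analytic solution exists, as for the groundwater flow problems described above, benchmarking several independent codes on the same problem plays the same role.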

  14. Spent Nuclear Fuel (SNF) Project Design Verification and Validation Process

    International Nuclear Information System (INIS)

    OLGUIN, L.J.

    2000-01-01

    This document provides a description of design verification and validation activities implemented by the Spent Nuclear Fuel (SNF) Project. During the execution of early design verification, a management assessment (Bergman, 1999) and external assessments on configuration management (Augustenburg, 1999) and testing (Loscoe, 2000) were conducted and identified potential uncertainties in the verification process. This led the SNF Chief Engineer to implement corrective actions to improve process and design products. This included Design Verification Reports (DVRs) for each subproject, validation assessments for testing, and verification of the safety function of systems and components identified in the Safety Equipment List to ensure that the design outputs were compliant with the SNF Technical Requirements. Although some activities are still in progress, the results of the DVR and associated validation assessments indicate that Project requirements for design verification are being effectively implemented. These results have been documented in subproject-specific technical documents (Table 2). Identified punch-list items are being dispositioned by the Project. As these remaining items are closed, the technical reports (Table 2) will be revised and reissued to document the results of this work

  15. A web-based clinical trial management system for a sham-controlled multicenter clinical trial in depression.

    Science.gov (United States)

    Durkalski, Valerie; Wenle Zhao; Dillon, Catherine; Kim, Jaemyung

    2010-04-01

    Clinical trial investigators and sponsors invest vast amounts of resources and energy into conducting trials and often face daily challenges with data management, project management, and data quality control. Rather than waiting months for study progress reports, investigators need the ability to use real-time data for the coordination and management of study activities across all study team members including site investigators, oversight committees, data and safety monitoring boards, and medical safety monitors. Web-based data management systems are beginning to meet this need, but what distinguishes one system from another are user needs/requirements and cost. The objective of this article is to illustrate the development and implementation of a web-based data and project management system for a multicenter clinical trial designed to test the superiority of repetitive transcranial magnetic stimulation versus sham for the treatment of patients with major depression. The authors discuss the reasons for not using a commercially available system for this study and describe the approach to developing their own web-based system for the OPT-TMS study. Timelines, effort, system architecture, and lessons learned are shared with the hope that this information will direct clinical trial researchers and software developers towards more efficient, user-friendly systems. The developers use a combination of generic and custom application code to allow for the flexibility to adapt the system to the needs of the study. Features of the system include: central participant registration and randomization; secure data entry at the site; participant progress/study calendar; safety data reporting; device accounting; monitor verification; and user-configurable generic reports and built-in customized reports. Hard coding was more time-efficient to address project-specific issues compared with the effort of creating a generic code application. As a consequence of this strategy, the required maintenance of the system is
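One of the features listed above, central participant randomization, is commonly implemented with permuted blocks. The sketch below is a hypothetical illustration of that technique; the function name, block size, and arm labels are illustrative and not taken from the OPT-TMS system.

```python
# Hypothetical permuted-block randomization sketch for a two-arm
# (active vs. sham) trial. Not the OPT-TMS implementation.
import random

def permuted_block_schedule(n_participants, block_size=4,
                            arms=("active", "sham"), seed=1234):
    """Generate a 1:1 allocation list in randomly permuted blocks."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)  # fixed seed makes the schedule reproducible
    schedule = []
    while len(schedule) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # permute within the block to hide the pattern
        schedule.extend(block)
    return schedule[:n_participants]

alloc = permuted_block_schedule(8)
print(alloc)  # every full block of 4 contains exactly 2 of each arm
```

Blocking keeps arm sizes balanced throughout accrual, which matters when enrollment may stop early at any site.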

  16. Bias associated with delayed verification in test accuracy studies: accuracy of tests for endometrial hyperplasia may be much higher than we think!

    Directory of Open Access Journals (Sweden)

    Coomarasamy Aravinthan

    2004-05-01

    Full Text Available Abstract Background To empirically evaluate bias in estimation of accuracy associated with delay in verification of diagnosis among studies evaluating tests for predicting endometrial hyperplasia. Methods Systematic reviews of all published research on accuracy of miniature endometrial biopsy and endometrial ultrasonography for diagnosing endometrial hyperplasia identified 27 test accuracy studies (2,982 subjects). Of these, 16 had immediate histological verification of diagnosis while 11 had verification delayed > 24 hrs after testing. The effect of delay in verification of diagnosis on estimates of accuracy was evaluated using meta-regression with diagnostic odds ratio (dOR) as the accuracy measure. This analysis was adjusted for study quality and type of test (miniature endometrial biopsy or endometrial ultrasound). Results Compared to studies with immediate verification of diagnosis (dOR 67.2, 95% CI 21.7–208.8), those with delayed verification (dOR 16.2, 95% CI 8.6–30.5) underestimated the diagnostic accuracy by 74% (95% CI 7%–99%; P value = 0.048). Conclusion Among studies of miniature endometrial biopsy and endometrial ultrasound, diagnostic accuracy is considerably underestimated if there is a delay in histological verification of diagnosis.
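The accuracy measure used in the meta-regression above, the diagnostic odds ratio (dOR), is computed from a standard 2x2 test-accuracy table. The worked example below uses invented counts, not data from the review.

```python
# Worked example of the diagnostic odds ratio (dOR) from a 2x2 table.
# Counts are invented for illustration.

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """dOR = (TP/FN) / (FP/TN): the odds of a positive test among the
    diseased divided by the odds of a positive test among the healthy."""
    return (tp * tn) / (fp * fn)

# Invented counts: 90 diseased (80 test positive), 110 healthy (10 test positive)
dor = diagnostic_odds_ratio(tp=80, fp=10, fn=10, tn=100)
print(dor)  # 80.0
```

A lower dOR, as observed in the delayed-verification studies, means the test appears to discriminate less well between diseased and healthy subjects.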

  17. Clinical Trials

    Medline Plus

    Full Text Available ... or device is safe and effective for humans. What Are Clinical Trials? Clinical trials are research ...

  18. As-Built Verification Plan Spent Nuclear Fuel Canister Storage Building MCO Handling Machine

    International Nuclear Information System (INIS)

    SWENSON, C.E.

    2000-01-01

    This as-built verification plan outlines the methodology and responsibilities that will be implemented during the as-built field verification activity for the Canister Storage Building (CSB) MCO Handling Machine (MHM). This as-built verification plan covers the electrical portion of the construction performed by Power City under contract to Mowat. The as-built verifications will be performed in accordance with Administrative Procedure AP 6-012-00, Spent Nuclear Fuel Project As-Built Verification Plan Development Process, revision 1. The results of the verification walkdown will be documented in a verification walkdown completion package, approved by the Design Authority (DA), and maintained in the CSB project files

  19. Research Article ( New England Journal of Medicine ) A trial of a 7 ...

    African Journals Online (AJOL)

    Research Article (New England Journal of Medicine) A trial of a 7-valent pneumococcal conjugate vaccine in HIV-infected adults. Neil French, Stephen B. Gordon, Thandie Mwalukomo, Sarah A. White, Gershom Mwafulirwa, Herbert Longwe, Martin Mwaiponya, Eduard E. Zijlstra, Malcolm E. Molyneux, Charles F. Gilks ...

  20. 37 CFR 262.7 - Verification of royalty payments.

    Science.gov (United States)

    2010-07-01

    ... Designated Agent have agreed as to proper verification methods. (b) Frequency of verification. A Copyright Owner or a Performer may conduct a single audit of the Designated Agent upon reasonable notice and... COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES RATES AND TERMS FOR CERTAIN ELIGIBLE...

  1. 40 CFR 1065.675 - CLD quench verification calculations.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false CLD quench verification calculations... POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.675 CLD quench verification calculations. Perform CLD quench-check calculations as follows: (a) Perform a CLD analyzer quench...

  2. Formal Verification of Digital Protection Logic and Automatic Testing Software

    Energy Technology Data Exchange (ETDEWEB)

    Cha, S. D.; Ha, J. S.; Seo, J. S. [KAIST, Daejeon (Korea, Republic of)

    2008-06-15

    - Technical aspect: It is intended that digital I and C software be safe and reliable, and the project results help such software acquire a license. The software verification techniques resulting from this project can be used for digital I and C in nuclear power plants (NPPs) in the future. This research presents many meaningful results of verification on digital protection logic and suggests an I and C software testing strategy. These results apply to the verification of nuclear fusion devices, accelerators, nuclear waste management systems and nuclear medical devices that require dependable software and highly reliable controllers. Moreover, they can be used for military, medical or aerospace-related software. - Economic and industrial aspect: Since the safety of digital I and C software is highly important, it is essential for the software to be verified, but verification and license acquisition for digital I and C software are costly. This project benefits the domestic economy by using the verification and testing techniques introduced here instead of foreign techniques. The operation rate of NPPs will rise when NPP safety-critical software is verified with an intelligent V and V tool. It is expected that such software will substitute for safety-critical software that wholly depends on foreign suppliers. Consequently, the results of this project have high commercial value, and recognition of this software development work can spread to industrial circles. - Social and cultural aspect: People expect nuclear power generation to contribute to relieving environmental problems because it does not emit more harmful air pollution than other forms of power generation. To give our society more trust in and expectation of nuclear power generation, we should convince people that NPPs are highly safe systems. From that point of view, we can present highly reliable I and C systems, proven by intelligent V and V techniques, as evidence

  3. Pharmacy Students’ Knowledge and Attitude toward Registration Trials and Clinical Research: A Survey in a Japanese University Hospital

    Directory of Open Access Journals (Sweden)

    Natsuko Ise

    2017-12-01

    Full Text Available Clinical research plays a fundamental role in establishing new treatments. Clinical research coordinators are considered essential in clinical research, and medical professionals such as pharmacists often take on this role. Pharmacy students can be considered future candidates for this task. We used questionnaires to survey the knowledge of and attitudes toward registration trials and clinical research of pharmacy students at Tokushima University Hospital. All pharmacy students (103) to whom questionnaires were sent responded. Almost all respondents were aware of registration trials and clinical research. More than 90% were aware of the existence of clinical research coordinators, and about half (48.6%) understood their role. Regarding clinical research terminology, most respondents were aware of informed consent and related issues, but fewer than 20% were aware of more practical terms. In total, 29.1% and 40.8% of the respondents were willing to carry out and to coordinate research, respectively. These findings suggest that pharmacy students have basic knowledge of clinical research and that many students are willing to carry out and coordinate clinical research. More practical exposure to clinical research may help to strengthen their future contribution. Further studies may help to determine how to provide education on registration trials and clinical research to pharmacy students.

  4. Analysis and Transformation Tools for Constrained Horn Clause Verification

    DEFF Research Database (Denmark)

    Kafle, Bishoksan; Gallagher, John Patrick

    2014-01-01

    Several techniques and tools have been developed for verification of properties expressed as Horn clauses with constraints over a background theory (CHC). Current CHC verification tools implement intricate algorithms and are often limited to certain subclasses of CHC problems. Our aim in this work is to investigate the use of a combination of off-the-shelf techniques from the literature in analysis and transformation of Constraint Logic Programs (CLPs) to solve challenging CHC verification problems. We find that many problems can be solved using a combination of tools based on well-known techniques from abstract interpretation, semantics-preserving transformations, program specialisation and query-answer transformations. This gives insights into the design of automatic, more general CHC verification tools based on a library of components.
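The propositional core of CHC solving can be sketched with a toy bottom-up least-model computation over ground, constraint-free Horn clauses: a property holds if its "unsafe" atom is not derivable. Real CHC tools handle constraints over background theories and are far more sophisticated; the clauses below are invented for illustration.

```python
# Toy bottom-up solver for ground (constraint-free) Horn clauses, sketching
# the least-model computation underlying CHC verification. Illustrative only.

def least_model(clauses):
    """clauses: list of (body, head) pairs, body a set of atoms, head an atom.
    Returns the least set of atoms closed under all the clauses."""
    model = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

clauses = [
    (set(), "init"),                # fact: the initial state is reachable
    ({"init"}, "step1"),            # init  -> step1
    ({"step1"}, "step2"),           # step1 -> step2
    ({"step2", "flag"}, "unsafe"),  # unsafe needs step2 AND flag
]
model = least_model(clauses)
print("unsafe" in model)  # False: 'flag' is never derivable, so the system is safe
```

Query-answer transformation, mentioned above, reshapes clauses so that such bottom-up evaluation simulates a goal-directed search.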

  5. DIMENSIONAL VERIFICATION AND QUALITY CONTROL OF IMPLANTS PRODUCED BY ADDITIVE MANUFACTURING

    Directory of Open Access Journals (Sweden)

    Teodor Toth

    2015-07-01

    Full Text Available Purpose: Development of computer technology and alternative manufacturing methods in the form of additive manufacturing leads to the manufacture of products with complex shapes. In the field of medicine they include, inter alia, custom-made implants manufactured for a particular patient, such as cranial implants, maxillofacial implants, etc. With regard to the fact that such implants are inserted into a patient’s body, it is necessary to perform verification, including shape and dimensional verification. The article deals with the application of industrial computed tomography within the process of inspection and verification of selected custom-made implant types. Methodology/Approach: The Department of Biomedical Engineering and Measurement performs the verification of medicinal products manufactured by additive manufacturing technologies from the Ti-6Al-4V (Grade 5) titanium alloy, using the coordinate measuring machine Carl Zeiss Contura G2 and the industrial computed tomography machine Carl Zeiss Metrotom 1500. This equipment fulfils the requirements for the identification and evaluation of dimensions of both the external and the internal structures. Findings: The article presents the possibilities of computed tomography utilisation in the inspection of individual implants manufactured using additive manufacturing technologies. The results indicate that, with the adjustment of appropriate input parameters (alignment), this technology is appropriate for the analysis of shape deviations when compared with the CAD model. Research Limitation/Implication: With increasing distance of the measured object from the X-ray source, the machine's resolution decreases. Decreased resolution has a minor impact on the measured dimensions (relatively high tolerances), but a significant impact on the evaluation of porosity and inclusions. Originality/Value of paper: Currently, the verification of a manufactured implant can be

  6. Inconsistencies in quality of life data collection in clinical trials: a potential source of bias? Interviews with research nurses and trialists.

    Science.gov (United States)

    Kyte, Derek; Ives, Jonathan; Draper, Heather; Keeley, Thomas; Calvert, Melanie

    2013-01-01

    Patient-reported outcomes (PROs), such as health-related quality of life (HRQL), are increasingly used to evaluate treatment effectiveness in clinical trials, are valued by patients, and may inform important decisions in the clinical setting. It is of concern, therefore, that preliminary evidence, gained from group discussions at UK-wide Medical Research Council (MRC) quality of life training days, suggests there are inconsistent standards of HRQL data collection in trials and appropriate training and education is often lacking. Our objective was to investigate these reports, to determine whether they represented isolated experiences or were indicative of a potentially wider problem. We undertook a qualitative study, conducting 26 semi-structured interviews with research nurses, data managers, trial coordinators and research facilitators involved in the collection and entry of HRQL data in clinical trials, across one primary care NHS trust, two secondary care NHS trusts and two clinical trials units in the UK. We used conventional content analysis to analyze and interpret our data. Our study participants reported (1) inconsistent standards in HRQL measurement, both between, and within, trials, which appeared to risk the introduction of bias; (2) difficulties in dealing with HRQL data that raised concern for the well-being of the trial participant, which in some instances led to the delivery of non-protocol-driven co-interventions; (3) a frequent lack of HRQL protocol content and appropriate training and education of trial staff; and (4) that HRQL data collection could be associated with emotional and/or ethical burden. Our findings suggest there are inconsistencies in the standards of HRQL data collection in some trials resulting from a general lack of HRQL-specific protocol content, training and education. These inconsistencies could lead to biased HRQL trial results. Future research should aim to develop HRQL guidelines and training programmes aimed at supporting

  7. A Cluster-Randomized Trial of Restorative Practices: An Illustration to Spur High-Quality Research and Evaluation

    Science.gov (United States)

    Acosta, Joie D.; Chinman, Matthew; Ebener, Patricia; Phillips, Andrea; Xenakis, Lea; Malone, Patrick S.

    2016-01-01

    Restorative practices in schools lack rigorous evaluation studies. As an example of rigorous school-based research, this article describes the first randomized control trial of restorative practices to date, the Study of Restorative Practices. It is a 5-year, cluster-randomized controlled trial (RCT) of the Restorative Practices Intervention (RPI)…

  8. Clinical Trials

    Medline Plus

    Full Text Available ... take part in a clinical trial. When researchers think that a trial's potential risks are greater than ... care costs for clinical trials. If you're thinking about taking part in a clinical trial, find ...

  9. Methods of Software Verification

    Directory of Open Access Journals (Sweden)

    R. E. Gurin

    2015-01-01

    Full Text Available This article is devoted to the problem of software (SW) verification. Methods of software verification are designed to check the software for compliance with the stated requirements, such as correctness, system security, adaptability to small changes in the environment, portability and compatibility. These methods vary both in how they operate and in how they achieve their results. The article describes the static and dynamic methods of software verification and pays particular attention to the method of symbolic execution. In its review of static analysis, the deductive method and model checking methods are discussed and described. The pros and cons of each particular method are emphasized, and a classification of test techniques for each method is considered. In this paper we present and analyze the characteristics and mechanisms of the static analysis of dependencies, as well as their variants, which can reduce the number of false positives in situations where the current state of the program combines two or more states obtained either on different execution paths or when working with multiple object values. Dependencies connect various types of software objects: single variables, the elements of composite variables (structure fields, array elements), the sizes of heap areas, the lengths of strings, and the numbers of initialized array elements in code verified using static methods. The article pays attention to the identification of dependencies within the framework of abstract interpretation, and gives an overview and analysis of inference tools. Methods of dynamic analysis, such as testing, monitoring and profiling, are presented and analyzed, together with some kinds of tools that can be applied to software when using dynamic analysis methods.
    Based on this work, a conclusion is drawn that describes the most relevant problems of the analysis techniques, methods for their solution and
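A minimal static check in the spirit of the static methods surveyed above can be sketched in a few lines: walk a Python AST in source order and flag names that are read before any assignment. This is an illustrative toy (straight-line code only, builtins excluded), not one of the tools the article reviews.

```python
# Toy static analysis: flag names read before being assigned in
# straight-line module code. Ignores control flow, attributes, and
# augmented assignment; purely illustrative.
import ast
import builtins

def used_before_assignment(source):
    """Return names loaded before being stored, for straight-line code."""
    assigned, flagged = set(), []
    for stmt in ast.parse(source).body:
        if isinstance(stmt, ast.Assign):
            nodes = ast.walk(stmt.value)  # right-hand side is evaluated first
        else:
            nodes = ast.walk(stmt)
        for node in nodes:
            if (isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)
                    and node.id not in assigned
                    and not hasattr(builtins, node.id)):
                flagged.append(node.id)
        if isinstance(stmt, ast.Assign):  # ...then the targets become defined
            for target in stmt.targets:
                if isinstance(target, ast.Name):
                    assigned.add(target.id)
    return flagged

print(used_before_assignment("x = 1\ny = x + z\nprint(y)"))  # ['z']
```

Scaling such a check to branching code is exactly where the merged program states and dependency tracking discussed in the article become necessary.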

  10. Report on the achievement of verification for FY 1997-1999. Verification survey of a capacitor system for power leveling of the photovoltaic power generation; 1997 nendo kara 1999 nendo taiyoko hatsuden shutsuryoku heijunkayo capacitor system no jissho chosa seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    The purpose of this verification survey is to confirm the performance and reliability with which a new large-capacity capacitor works for power leveling and as partial power peak-shift backup in a photovoltaic power system. The results for FY 1997 are as follows: (1) Verification using jelly-roll type cells. (2) Trial manufacture of stacking cells. In (1), jelly-roll type cells were manufactured using an electrode composed of aluminum foil and an activated carbon layer, with an organic electrolyte. The obtained cell capacitance was 6,000F on average and the energy density 5.4-5.6Wh/L. A constant-power load experiment on a capacitor bank constructed with four 6,000F cells connected in series was carried out to confirm a discharge energy of 25Wh between 12V and 3V. In (2), stacking rectangular-type cells were manufactured on a trial basis with electrodes of two types: press-type and sheet-type. The cell capacitance was approximately 3,500F for the sheet-electrode cell, and the energy density 7.2Wh/L. To improve cell performance, the relation between the pore distribution of the activated carbon and the double-layer capacity was studied, and a material with higher capacity per volume than the conventional one was found. Studies were also made of the trial fabrication of charge/discharge circuits, the associated experiments, and the voltage balance. (NEDO)
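The bank figures above can be cross-checked with the standard capacitor energy formula E = C(V1² − V2²)/2. This back-of-the-envelope sketch is an independent illustration, not part of the original report.

```python
# Ideal energy released by a series bank of four 6,000 F cells
# discharged from 12 V down to 3 V. Independent cross-check, not
# a calculation from the original report.

C_CELL = 6000.0            # farads per cell
N_SERIES = 4
V_HIGH, V_LOW = 12.0, 3.0  # volts

c_bank = C_CELL / N_SERIES                        # series capacitance: 1,500 F
delta_j = 0.5 * c_bank * (V_HIGH**2 - V_LOW**2)   # E = C(V1^2 - V2^2)/2
delta_wh = delta_j / 3600.0

print(f"{delta_wh:.1f} Wh")  # ~28.1 Wh ideal
```

The ideal figure of about 28 Wh is consistent with the roughly 25 Wh confirmed under a constant-power load, once converter and internal-resistance losses are accounted for.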

  11. Verification and Validation in Systems Engineering

    CERN Document Server

    Debbabi, Mourad; Jarraya, Yosr; Soeanu, Andrei; Alawneh, Luay

    2010-01-01

    "Verification and validation" represents an important process used for the quality assessment of engineered systems and their compliance with the requirements established at the beginning of or during the development cycle. Debbabi and his coauthors investigate methodologies and techniques that can be employed for the automatic verification and validation of systems engineering design models expressed in standardized modeling languages. Their presentation includes a bird's eye view of the most prominent modeling languages for software and systems engineering, namely the Unified Model

  12. Confidence-increasing elements in user instructions: Seniors' reactions to verification steps and personal stories

    NARCIS (Netherlands)

    Loorbach, N.R.; Karreman, Joyce; Steehouder, M.F.

    2013-01-01

    Purpose: Research shows that confidence-increasing elements in user instructions positively influence senior users' task performance and motivation. We added verification steps and personal stories to user instructions for a cell phone, to find out how seniors (between 60 and 70 years) perceive

  13. 37 CFR 260.6 - Verification of royalty payments.

    Science.gov (United States)

    2010-07-01

    ... verification of the payment of royalty fees to those parties entitled to receive such fees, according to terms... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Verification of royalty... COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES RATES AND TERMS FOR PREEXISTING SUBSCRIPTION...

  14. Verification and validation of the PLTEMP/ANL code for thermal hydraulic analysis of experimental and test reactors

    International Nuclear Information System (INIS)

    Kalimullah, M.; Olson, A.O.; Feldman, E.E.; Hanan, N.; Dionne, B.

    2012-01-01

    The document compiles in a single volume several verification and validation works done for the PLTEMP/ANL code during the years of its development and improvement. Some works that are available in the open literature are simply referenced at the outset, and are not included in the document. PLTEMP has been used in conversion safety analysis reports of several US and foreign research reactors that have been licensed and converted. A list of such reactors is given. Each chapter of the document deals with the verification or validation of a specific model. The model verification is usually done by comparing the code with hand calculation, Microsoft spreadsheet calculation, or Mathematica calculation. The model validation is done by comparing the code with experimental data or a more validated code like the RELAP5 code.

  15. Verification and Validation of the PLTEMP/ANL Code for Thermal-Hydraulic Analysis of Experimental and Test Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kalimullah, M. [Argonne National Lab. (ANL), Argonne, IL (United States); Olson, Arne P. [Argonne National Lab. (ANL), Argonne, IL (United States); Feldman, E. E. [Argonne National Lab. (ANL), Argonne, IL (United States); Hanan, N. [Argonne National Lab. (ANL), Argonne, IL (United States); Dionne, B. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-04-07

    The document compiles in a single volume several verification and validation works done for the PLTEMP/ANL code during the years of its development and improvement. Some works that are available in the open literature are simply referenced at the outset, and are not included in the document. PLTEMP has been used in conversion safety analysis reports of several US and foreign research reactors that have been licensed and converted. A list of such reactors is given. Each chapter of the document deals with the verification or validation of a specific model. The model verification is usually done by comparing the code with hand calculation, Microsoft spreadsheet calculation, or Mathematica calculation. The model validation is done by comparing the code with experimental data or a more validated code like the RELAP5 code.

  16. A Roadmap for the Implementation of Continued Process Verification.

    Science.gov (United States)

    Boyer, Marcus; Gampfer, Joerg; Zamamiri, Abdel; Payne, Robin

    2016-01-01

    In 2014, the members of the BioPhorum Operations Group (BPOG) produced a 100-page continued process verification case study, entitled "Continued Process Verification: An Industry Position Paper with Example Protocol". This case study captures the thought processes involved in creating a continued process verification plan for a new product in response to the U.S. Food and Drug Administration's guidance on the subject introduced in 2011. In so doing, it provided the specific example of a plan developed for a new monoclonal antibody product based on the "A MAb Case Study" that preceded it in 2009. This document provides a roadmap that draws on the content of the continued process verification case study to provide a step-by-step guide in a more accessible form, with reference to a process map of the product life cycle. It could be used as a basis for continued process verification implementation in a number of different scenarios: for a single product and process; for a single site; to assist in the sharing of data monitoring responsibilities among sites; or to assist in establishing data monitoring agreements between a customer company and a contract manufacturing organization. The U.S. Food and Drug Administration issued guidance on the management of manufacturing processes designed to improve quality and control of drug products. This involved increased focus on regular monitoring of manufacturing processes, reporting of the results, and the taking of opportunities to improve. The guidance and the practice associated with it are known as continued process verification. This paper summarizes good practice in responding to continued process verification guidance, gathered from subject matter experts in the biopharmaceutical industry. © PDA, Inc. 2016.
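The regular monitoring at the heart of continued process verification is often implemented with control charts: limits are derived from historical batch data and new batches falling outside them are flagged for investigation. The sketch below is an illustration of that general technique, not content from the BPOG case study; all numbers are invented.

```python
# Illustrative control-chart sketch for routine process monitoring.
# 3-sigma limits from historical batches; new batches outside the limits
# are flagged. All values are invented.

def control_limits(history):
    """Mean +/- 3 standard deviations of a historical batch attribute."""
    n = len(history)
    mean = sum(history) / n
    sd = (sum((x - mean) ** 2 for x in history) / (n - 1)) ** 0.5
    return mean - 3 * sd, mean + 3 * sd

history = [98.2, 99.1, 98.7, 99.4, 98.9, 99.0, 98.5, 99.2]  # e.g. % purity
lo, hi = control_limits(history)
flagged = [x for x in [98.8, 99.3, 97.1] if not (lo <= x <= hi)]
print(flagged)  # [97.1]
```

In practice, a continued process verification plan also specifies which attributes to chart, how often limits are recalculated, and how flagged batches are escalated.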

  17. Land surface Verification Toolkit (LVT)

    Science.gov (United States)

    Kumar, Sujay V.

    2017-01-01

    LVT is a framework developed to provide an automated, consolidated environment for systematic land surface model evaluation. It includes support for a range of in-situ, remote-sensing and other model and reanalysis products, and supports the analysis of outputs from various LIS subsystems, including LIS-DA, LIS-OPT and LIS-UE. Note: The Land Information System Verification Toolkit (LVT) is a NASA software tool designed to enable the evaluation, analysis and comparison of outputs generated by the Land Information System (LIS). The LVT software is released under the terms and conditions of the NASA Open Source Agreement (NOSA) Version 1.1 or later.
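The kind of metric computation a toolkit like LVT automates can be sketched very simply: pair model output with observations and compute summary error statistics. The example below is a generic illustration (invented values, not LVT code); LVT itself supports many more metrics and data formats.

```python
# Generic model-evaluation sketch: bias and RMSE of model output against
# in-situ observations. Values are invented for illustration.

def bias_and_rmse(model, obs):
    """Mean error (bias) and root-mean-square error of paired samples."""
    n = len(model)
    errors = [m - o for m, o in zip(model, obs)]
    bias = sum(errors) / n
    rmse = (sum(e * e for e in errors) / n) ** 0.5
    return bias, rmse

model = [0.21, 0.25, 0.30, 0.28]  # e.g. modeled soil moisture (m3/m3)
obs = [0.20, 0.27, 0.28, 0.27]    # in-situ observations
b, r = bias_and_rmse(model, obs)
print(f"bias={b:.3f}, rmse={r:.3f}")
```

The value of a consolidated framework is doing this consistently across many variables, stations, and time periods rather than one hand-written script at a time.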

  18. On Backward-Style Anonymity Verification

    Science.gov (United States)

    Kawabe, Yoshinobu; Mano, Ken; Sakurada, Hideki; Tsukada, Yasuyuki

    Many Internet services and protocols should guarantee anonymity; for example, an electronic voting system should guarantee to prevent the disclosure of who voted for which candidate. To prove trace anonymity, which is an extension of the formulation of anonymity by Schneider and Sidiropoulos, this paper presents an inductive method based on backward anonymous simulations. We show that the existence of an image-finite backward anonymous simulation implies trace anonymity. We also demonstrate the anonymity verification of an e-voting protocol (the FOO protocol) with our backward anonymous simulation technique. When proving the trace anonymity, this paper employs a computer-assisted verification tool based on a theorem prover.

  19. Formal Development and Verification of Railway Control Systems

    DEFF Research Database (Denmark)

    Vu Hong, Linh; Haxthausen, Anne Elisabeth; Peleska, Jan

    done applying conventional methods where requirements and designs are described using natural language, diagrams and pseudo code, and the verification of requirements has been done by code inspection and non-exhaustive testing. These techniques are not sufficient, leading to errors and an ineffective...... for Strategic Research. The work is affiliated with a number of partners: DTU Compute, DTU Transport, DTU Management, DTU Fotonik, Bremen University, Banedanmark, Trafikstyrelsen, DSB, and DSB S-tog. More information about the RobustRails project is available at http://www.dtu.dk/subsites/robustrails/English.aspx...

  20. Consortium for Verification Technology Fellowship Report.

    Energy Technology Data Exchange (ETDEWEB)

    Sadler, Lorraine E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-06-01

    As one recipient of the Consortium for Verification Technology (CVT) Fellowship, I spent eight days as a visiting scientist at the University of Michigan, Department of Nuclear Engineering and Radiological Sciences (NERS). During this time, I participated in multiple department and research group meetings and presentations, met with individual faculty and students, toured multiple laboratories, and taught one-half of a one-unit class on Risk Analysis in Nuclear Arms Control (six 1.5-hour lectures). The following report describes some of the interactions that I had during my time there, as well as a brief discussion of the impact of this fellowship on members of the consortium and on my laboratory's technical knowledge and network.

  1. HIV vaccine research--South Africa's ethical-legal framework and its ability to promote the welfare of trial participants.

    Science.gov (United States)

    Strode, Ann; Slack, Catherine; Mushariwa, Muriel

    2005-08-01

    An effective ethical-legal framework for the conduct of research is critical. We describe five essential components of such a system, review the extent to which these components have been realised in South Africa, present brief implications for the ethical conduct of clinical trials of HIV vaccines in South Africa and make recommendations. The components of an effective ethical-legal system that we propose are the existence of scientific ethical and policy-making structures that regulate research; research ethics committees (RECs) that ethically review research; national ethical guidelines and standards; laws protecting research participants; and mechanisms to enforce and monitor legal rights and ethical standards. We conclude that the ethical-legal framework has, for the most part, the necessary institutions, and certain necessary guidelines but does not have many of the laws needed to protect and promote the rights of persons participating in research, including HIV vaccine trials. Recommendations made include advocacy measures to finalise and implement legislation, development of regulations, analysis and comparison of ethical guidelines, and the development of measures to monitor ethical-legal rights at trial sites.

  2. A formal design verification and validation on the human factors of a computerized information system in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Yong Hee; Park, Jae Chang; Cheon, Se Woo; Jung, Kwang Tae; Baek, Seung Min; Han, Seung; Park, Hee Suk; Son, Ki Chang; Kim, Jung Man; Jung Yung Woo

    1999-11-01

    This report describes a technical transfer under the title of ''A formal design verification and validation on the human factors of a computerized information system in nuclear power plants''. Human factors requirements for the information system designs are extracted from various regulatory and industrial standards and guidelines, and interpreted into more specific procedures and checklists for verifying the satisfaction of those requirements. A formalized implementation plan is established for human factors verification and validation of a computerized information system in nuclear power plants. Additionally, a computer support system, named DIMS-Web (Design Issue Management System), is developed based upon a web internet environment so as to enhance the implementation of the human factors activities. DIMS-Web has three main functions: supporting requirements review, tracking design issues, and managing issue screening evaluation. DIMS-Web shows its benefits in practice through a trial application to the design review of CFMS for YGN nuclear units 5 and 6. (author)

  4. Two-Level Verification of Data Integrity for Data Storage in Cloud Computing

    Science.gov (United States)

    Xu, Guangwei; Chen, Chunlin; Wang, Hongya; Zang, Zhuping; Pang, Mugen; Jiang, Ping

    Data storage in cloud computing can save capital expenditure and relieve the burden of storage management for users. Since loss or corruption of stored files may occur, many researchers focus on verification of data integrity. However, massive numbers of users often bring large numbers of verification tasks for the auditor. Moreover, users also need to pay an extra fee for these verification tasks beyond the storage fee. Therefore, we propose a two-level verification of data integrity to alleviate these problems. The key idea is to routinely verify data integrity by users and to arbitrate challenges between the user and the cloud provider by the auditor according to the MACs and ϕ values. Extensive performance simulations show that the proposed scheme obviously decreases the auditor's verification tasks and the ratio of wrong arbitration.
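
    The first-level user check described above can be sketched with per-block MACs (a minimal illustration only; the paper's actual scheme also involves ϕ values and an arbitrating auditor, which are not modeled here):

    ```python
    import hashlib
    import hmac
    import os

    def make_tags(blocks, key):
        """Per-block MAC tags computed before upload and kept by the user."""
        return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

    def user_verify(blocks, tags, key):
        """First-level check: the user routinely re-verifies the stored blocks."""
        return [hmac.compare_digest(hmac.new(key, b, hashlib.sha256).digest(), t)
                for b, t in zip(blocks, tags)]

    key = os.urandom(32)
    blocks = [b"block-0", b"block-1", b"block-2"]
    tags = make_tags(blocks, key)
    blocks[1] = b"corrupted"  # simulate corruption at the cloud provider
    print(user_verify(blocks, tags, key))  # [True, False, True]
    ```

    A failed block comparison would then be escalated to the auditor for arbitration between user and provider.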

  5. 78 FR 27882 - VA Veteran-Owned Small Business (VOSB) Verification Guidelines

    Science.gov (United States)

    2013-05-13

    AGENCY: Department of Veterans Affairs. ACTION: Advanced notice of proposed rulemaking... regulations governing the Department of Veterans Affairs (VA) Veteran-Owned Small Business (VOSB) Verification... Verification Self-Assessment Tool that walks the veteran through the regulation and how it applies to the...

  6. Verification and validation for waste disposal models

    International Nuclear Information System (INIS)

    1987-07-01

    A set of evaluation criteria has been developed to assess the suitability of current verification and validation techniques for waste disposal methods. A survey of current practices and techniques was undertaken and evaluated using these criteria, with the items most relevant to waste disposal models being identified. Recommendations regarding the most suitable verification and validation practices for nuclear waste disposal modelling software have been made.

  7. Property-driven functional verification technique for high-speed vision system-on-chip processor

    Science.gov (United States)

    Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian

    2017-04-01

    The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. This vision chip verification complexity is also related to the fact that in most vision chip design cycles, extensive efforts are focused on how to optimize chip metrics such as performance, power, and area. Design functional verification is not explicitly considered at an earlier stage, at which the most sound decisions are made. In this paper, we propose a semi-automatic property-driven verification technique. The implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel processing vision chips. Our experimental results show that the proposed technique can effectively reduce the verification effort by up to 20% for a complex vision chip design while reducing the simulation and debugging overheads.

  8. Design verification methodology for a solenoid valve for industrial applications

    International Nuclear Information System (INIS)

    Park, Chang Dae; Lim, Byung Ju; Chun, Kyung Yul

    2015-01-01

    Solenoid operated valves (SOVs) are widely used in many applications due to their fast dynamic responses, cost effectiveness, and less contamination-sensitive characteristics. In this paper, we provide a convenient method of design verification of SOVs to design engineers who depend on their experience and experiments during the design and development process of SOVs. First, we summarize a detailed procedure for designing SOVs for industrial applications. All of the design constraints are defined in the first step of the design, and then the detailed design procedure is presented based on design experience as well as various physical and electromagnetic relationships. Secondly, we suggest a method for verifying this design using theoretical relationships, which enables optimal design of an SOV from the point of view of the safety factor of the design attraction force. Lastly, experimental performance tests using several prototypes manufactured based on this design method show that the suggested design verification methodology is appropriate for designing new models of solenoids. We believe that this verification process is novel and useful for saving time and expense during development of SOVs, because verification tests with manufactured specimens may be partly substituted by this verification methodology.
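
    The safety-factor check on attraction force can be illustrated with the textbook gapped-core approximation (an assumption on my part, not the paper's procedure; all parameter values are invented):

    ```python
    from math import pi

    MU0 = 4e-7 * pi  # permeability of free space [H/m]

    def attraction_force(n_turns, current_a, gap_m, pole_area_m2):
        """Idealized plunger attraction force F = mu0 * (N*I)^2 * A / (2 * g^2),
        neglecting core reluctance, fringing and saturation."""
        return MU0 * (n_turns * current_a) ** 2 * pole_area_m2 / (2.0 * gap_m ** 2)

    def safety_factor(available_n, required_n):
        """Design safety factor on attraction force."""
        return available_n / required_n

    f = attraction_force(n_turns=1200, current_a=0.5, gap_m=1.0e-3, pole_area_m2=1.0e-4)
    print(round(f, 2), round(safety_factor(f, required_n=10.0), 2))  # 22.62 2.26
    ```

    A design verification of this kind compares the computed safety factor against a minimum acceptable value before prototypes are built.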

  9. Verification test report on a solar heating and hot water system

    Science.gov (United States)

    1978-01-01

    Information is provided on the development, qualification and acceptance verification of commercial solar heating and hot water systems and components. The verification includes the performances, the efficiencies and the various methods used, such as similarity, analysis, inspection, test, etc., that are applicable to satisfying the verification requirements.

  10. ENVIRONMENTAL TECHNOLOGY PROTOCOL VERIFICATION REPORT, EMISSIONS OF VOCS AND ALDEHYDES FROM COMMERCIAL FURNITURE (WITH APPENDICES)

    Science.gov (United States)

    As part of a U.S. Environmental Protection Agency Environmental Technology Verification program, the Research Triangle Institute (RTI) developed a test protocol for measuring volatile organic compounds and aldehydes in a large chamber. RTI convened stakeholders for the commercial...

  11. Enhancing the informed consent process for critical care research: strategies from a thromboprophylaxis trial.

    Science.gov (United States)

    Smith, Orla M; McDonald, Ellen; Zytaruk, Nicole; Foster, Denise; Matte, Andrea; Clarke, France; Fleury, Suzie; Krause, Katie; McArdle, Tracey; Skrobik, Yoanna; Cook, Deborah J

    2013-12-01

    Critically ill patients lack capacity for decisions about research participation. Consent to enrol these patients in studies is typically obtained from substitute decision-makers. To present strategies that may optimise the process of obtaining informed consent from substitute decision-makers for participation of critically ill patients in trials. We use examples from a randomised trial of heparin thromboprophylaxis in the intensive care unit (PROTECT, clinicaltrials.gov NCT00182143). 3764 patients were randomised, with an informed consent rate of 82%; 90% of consents were obtained from substitute decision-makers. North American PROTECT research coordinators attended three meetings to discuss enrolment: (1) Trial start-up (January 2006); (2) Near trial closure (January 2010); and (3) Post-publication (April 2011). Data were derived from slide presentations, field notes from break-out groups and plenary discussions, then analysed inductively. We derived three phases for the informed consent process: (1) Preparation for the Consent Encounter; (2) The Consent Encounter; and (3) Follow-up to the Consent Encounter. Specific strategies emerged for each phase: Phase 1 (four strategies); Phase 2 (six strategies); and Phase 3 (three strategies). We identified 13 strategies that may improve the process of obtaining informed consent from substitute decision-makers and be generalisable to other settings and studies. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Temporal Specification and Verification of Real-Time Systems.

    Science.gov (United States)

    1991-08-30

    of concrete real-time systems can be modeled adequately. Specification: We present two conservative extensions of temporal logic that allow for the...logic. We present both model-checking algorithms for the automatic verification of finite-state real-time systems and proof methods for the deductive verification of real-time systems.

  13. A verification regime for the spatial discretization of the SN transport equations

    Energy Technology Data Exchange (ETDEWEB)

    Schunert, S.; Azmy, Y. [North Carolina State Univ., Dept. of Nuclear Engineering, 2500 Stinson Drive, Raleigh, NC 27695 (United States)

    2012-07-01

    The order-of-accuracy test in conjunction with the method of manufactured solutions is the current state of the art in computer code verification. In this work we investigate the application of a verification procedure including the order-of-accuracy test on a generic SN transport solver that implements the AHOTN spatial discretization. Different types of semantic errors, e.g. removal of a line of code or changing a single character, are introduced randomly into the previously verified SN code, and the proposed verification procedure is used to identify the coding mistakes (if possible) and classify them. Itemized by error type, we record the stage of the verification procedure at which the error is detected and report the frequency with which the errors are correctly identified at various stages of the verification. Errors that remain undetected by the verification procedure are further scrutinized to determine why the introduced coding mistake eluded the verification procedure. The result of this work is that the verification procedure based on an order-of-accuracy test finds almost all detectable coding mistakes, but rarely (1.44% of the time), under certain circumstances, it can fail. (authors)
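
    The order-of-accuracy test at the heart of this procedure reduces to comparing errors against the manufactured solution on successively refined meshes; a minimal sketch (not the authors' code):

    ```python
    import math

    def observed_order(err_coarse, err_fine, refinement=2.0):
        """Observed order of accuracy from errors on two meshes whose spacing
        differs by the refinement ratio r: p = log(e_coarse / e_fine) / log(r)."""
        return math.log(err_coarse / err_fine) / math.log(refinement)

    # A correctly implemented second-order scheme should roughly quarter its
    # error when the mesh spacing is halved.
    print(observed_order(1.0e-2, 2.5e-3))  # 2.0
    ```

    A coding mistake typically shows up as an observed order well below the theoretical order of the discretization.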

  14. TH-AB-201-01: A Feasibility Study of Independent Dose Verification for CyberKnife

    International Nuclear Information System (INIS)

    Sato, A; Noda, T; Keduka, Y; Kawajiri, T; Itano, M; Yamazaki, T; Tachibana, H

    2016-01-01

    Purpose: CyberKnife irradiation is composed of tiny, multiple, intensity-modulated beams compared to conventional linacs. Few publications on independent dose calculation verification for CyberKnife have been reported. In this study, we evaluated the feasibility of independent dose verification for CyberKnife treatment as a secondary check. Methods: The following were measured: test plans using some static and single beams, and clinical plans in a phantom and using patients' CT. 75 patient plans were collected from several treatment sites: brain, lung, liver and bone. In the test plans and the phantom plans, a pinpoint ion-chamber measurement was performed to assess dose deviation for a treatment planning system (TPS) and an independent verification program, Simple MU Analysis (SMU). In the clinical plans, dose deviation between the SMU and the TPS was assessed. Results: In the test plans, the dose deviations were 3.3±4.5% and 4.1±4.4% for the TPS and the SMU, respectively. In the phantom measurements for the clinical plans, the dose deviations were −0.2±3.6% for the TPS and −2.3±4.8% for the SMU. In the clinical plans using the patients' CT, the dose deviations were −3.0±2.1% (Mean±1SD). The systematic difference was partially derived from the inverse square law and penumbra calculation. Conclusion: The independent dose calculation for CyberKnife shows −3.0±4.2% (Mean±2SD), and in our study the confidence limit was achieved within the 5% tolerance level from AAPM Task Group 114 for non-IMRT treatment. Thus, it may be feasible to use independent dose calculation verification for CyberKnife treatment as the secondary check. This research is partially supported by the Japan Agency for Medical Research and Development (AMED)
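
    An independent point-dose check of this kind can be sketched in the classic hand-calculation style, MU × output factor × TMR × inverse-square correction (all values below are hypothetical; this is not the SMU algorithm or CyberKnife beam data):

    ```python
    def point_dose_cgy(mu, output_factor, tmr, ref_dist_cm, point_dist_cm):
        """Generic monitor-unit point-dose estimate with inverse-square correction."""
        inverse_square = (ref_dist_cm / point_dist_cm) ** 2
        return mu * output_factor * tmr * inverse_square

    def percent_deviation(independent_cgy, tps_cgy):
        """Deviation of the independent estimate from the TPS dose, in percent."""
        return 100.0 * (independent_cgy - tps_cgy) / tps_cgy

    d = point_dose_cgy(mu=100.0, output_factor=0.98, tmr=0.75,
                       ref_dist_cm=80.0, point_dist_cm=85.0)
    print(round(d, 2), round(percent_deviation(d, tps_cgy=67.0), 2))
    ```

    A secondary check flags the plan for investigation when the deviation exceeds the clinic's action level (e.g. the 5% tolerance discussed above).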

  15. SU-F-T-267: A Clarkson-Based Independent Dose Verification for the Helical Tomotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Nagata, H [Shonan Kamakura General Hospital, Kamakura, Kanagawa, (Japan); Juntendo University, Hongo, Tokyo (Japan); Hongo, H [Shonan Kamakura General Hospital, Kamakura, Kanagawa, (Japan); Tsukuba University, Tsukuba, Ibaraki (Japan); Kawai, D [Kanagawa Cancer Center, Yokohama, Kanagawa (Japan); Takahashi, R [Cancer Institute Hospital of Japanese Foundation for Cancer Research, Koto, Tokyo (Japan); Hashimoto, H [Shonan Fujisawa Tokushukai Hospital, Fujisawa, Kanagawa (Japan); Tachibana, H [National Cancer Center, Kashiwa, Chiba (Japan)

    2016-06-15

    Purpose: There have been few reports of independent dose verification for Tomotherapy. We evaluated the accuracy and the effectiveness of an independent dose verification system for the Tomotherapy. Methods: Simple MU Analysis (SMU, Triangle Product, Ishikawa, Japan) was used as the independent verification system, and the system implemented a Clarkson-based dose calculation algorithm using a CT image dataset. For dose calculation in the SMU, the Tomotherapy machine-specific dosimetric parameters (TMR, Scp, OAR and MLC transmission factor) were registered as the machine beam data. Dose calculation was performed after the Tomotherapy sinogram from DICOM-RT plan information was converted to the information for MU and MLC location at more segmented control points. The performance of the SMU was assessed by a point dose measurement in non-IMRT and IMRT plans (simple target and mock prostate plans). Subsequently, 30 patients' treatment plans for prostate were compared. Results: From the comparison, dose differences between the SMU and the measurement were within 3% for all cases in non-IMRT plans. In the IMRT plan for the simple target, the differences (Average±1SD) were −0.70±1.10% (SMU vs. TPS), −0.40±0.10% (measurement vs. TPS) and −1.20±1.00% (measurement vs. SMU), respectively. For the mock prostate, the differences were −0.40±0.60% (SMU vs. TPS), −0.50±0.90% (measurement vs. TPS) and −0.90±0.60% (measurement vs. SMU), respectively. For patients' plans, the difference was −0.50±2.10% (SMU vs. TPS). Conclusion: A Clarkson-based independent dose verification for the Tomotherapy can be clinically available as a secondary check with a tolerance level similar to that of AAPM Task Group 114. This research is partially supported by the Japan Agency for Medical Research and Development (AMED)

  16. FY 1983 report on the results of the verification test on the methanol conversion for oil-fired power plant. Part 1. Verification test on the environmental safety; 1983 nendo sekiyu karyoku hatsudensho metanoru tenkan tou jissho shiken seika hokokusho. Kankyo anzensei jissho shiken (Sono 1)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1984-03-01

    As to the verification test on the environmental safety in the use of methanol as power generation fuel, the following were summed up: review of the verification test and the interim evaluation, state of implementation of the FY 1983 verification test, study/evaluation of the results of the FY 1983 test, survey of research trends, plan of the FY 1984 verification test, record of the committee, etc. Concerning the interim evaluation, high evaluation was obtained as described below: testing facilities were constructed as planned at first to make the implementation of various tests possible; tests were smoothly conducted, and among the acute test using monkeys, the test on mock flue gas using monkeys/rats, the test on mutagenicity and the test on the effect on aquatic animals, the tests using oryzias latipes and abalone on fatal concentration, avoidance behavior and chronic effect were finished by the end of FY 1983 almost as planned; the long-term inhalation test using monkeys and rats/mice has been smoothly in progress. In the survey of research trends, the report outlined literature on the methanol metabolism of monkeys and on changes in methanol concentration in blood/urine in cases of accidental methanol ingestion. (NEDO)

  17. Clinical Trials

    Medline Plus

    Full Text Available ... trials are research studies that explore whether a medical strategy, treatment, or device is safe and effective ...

  18. Clinical Trials

    Medline Plus

    Full Text Available ... best data available for health care decisionmaking. The purpose of clinical trials is research, so the studies ... Thus, research in humans is needed. For safety purposes, clinical trials start with small groups of patients ...

  19. Clinical Trials

    Medline Plus

    Full Text Available ... decisionmaking. The purpose of clinical trials is research, so the studies follow strict scientific standards. These standards ... otherwise. ...

  20. Methods of Verification, Accountability and Control of Special Nuclear Material

    International Nuclear Information System (INIS)

    Stewart, J.E.

    1999-01-01

    This session demonstrates nondestructive assay (NDA) measurement, surveillance and analysis technology required to protect, control and account (MPC and A) for special nuclear materials (SNM) in sealed containers. These measurements, observations and analyses comprise state-of-the-art, strengthened SNM safeguards systems. Staff specialists, actively involved in research, development, training and implementation worldwide, will present six NDA verification systems and two software tools for integration and analysis of facility MPC and A data.

  1. The Cervix Cancer Research Network (CCRN): Increasing access to cancer clinical trials in low- and middle-income countries

    Directory of Open Access Journals (Sweden)

    Gita Suneja

    2015-02-01

    Full Text Available Introduction: The burden of cervical cancer is large and growing in developing countries, due in large part to limited access to screening services and lack of human papillomavirus (HPV) vaccination. In spite of modern advances in diagnostic and therapeutic modalities, outcomes from cervical cancer have not markedly improved in recent years. Novel clinical trials are urgently needed to improve outcomes from cervical cancer worldwide. Methods: The Cervix Cancer Research Network (CCRN), a subsidiary of the Gynecologic Cancer InterGroup (GCIG), is a multi-national, multi-institutional consortium of physicians and scientists focused on improving cervical cancer outcomes worldwide by making cancer clinical trials available in low-, middle-, and high-income countries. Standard operating procedures for participation in CCRN include a pre-qualifying questionnaire to evaluate clinical activities and research infrastructure, followed by a site visit. Once a site is approved, it may choose to participate in one of four currently accruing clinical trials. Results: To date, 13 different CCRN site visits have been performed. Of these 13 sites visited, 10 have been approved as CCRN sites, including Tata Memorial Hospital, India; Bangalore, India; Trivandrum, India; Ramathibodi, Thailand; Siriraj, Thailand; Pramongkutklao, Thailand; Ho Chi Minh, Vietnam; the Blokhin Russian Cancer Research Center; the Hertzen Moscow Cancer Research Institute; and the Russian Scientific Center of Roentgenoradiology. The four currently accruing clinical trials are TACO, OUTBACK, INTERLACE, and SHAPE. Discussion: The CCRN has successfully enrolled 10 sites in developing countries to participate in four randomized clinical trials. The primary objectives are to provide novel therapeutics to regions with the greatest need and to improve the validity and generalizability of clinical trial results by enrolling a diverse sample of patients.

  2. Environmental technology verification methods

    CSIR Research Space (South Africa)

    Szewczuk, S

    2016-03-01

    Full Text Available Environmental Technology Verification (ETV) is a tool that has been developed in the United States of America, Europe and many other countries around the world to help innovative environmental technologies reach the market. Claims about...

  3. Verification and Optimization of a PLC Control Schedule

    NARCIS (Netherlands)

    Brinksma, Hendrik; Mader, Angelika H.; Havelund, K.; Penix, J.; Visser, W.

    We report on the use of the SPIN model checker for both the verification of a process control program and the derivation of optimal control schedules. This work was carried out as part of a case study for the EC VHS project (Verification of Hybrid Systems), in which the program for a Programmable

  4. Clinical Trials

    Medline Plus

    Full Text Available ... treatment, or device is safe and effective for humans. What Are Clinical Trials? Clinical trials are research ... are required to have an IRB. Office for Human Research Protections The U.S. Department of Health and ...

  5. Space Weather Models and Their Validation and Verification at the CCMC

    Science.gov (United States)

    Hesse, Michael

    2010-01-01

    The Community Coordinated Modeling Center (CCMC) is a US multi-agency activity with a dual mission. With equal emphasis, CCMC strives to provide science support to the international space research community through the execution of advanced space plasma simulations, and it endeavors to support the space weather needs of the US and partners. Space weather support involves a broad spectrum, from designing robust forecasting systems and transitioning them to forecasters, to providing space weather updates and forecasts to NASA's robotic mission operators. All of these activities have to rely on validation and verification of models and their products, so users and forecasters have the means to assign confidence levels to the space weather information. In this presentation, we provide an overview of space weather models resident at CCMC, as well as of validation and verification activities undertaken at CCMC or through the use of CCMC services.

  6. 340 and 310 drawing field verification

    International Nuclear Information System (INIS)

    Langdon, J.

    1996-01-01

    The purpose of the drawing field verification work plan is to provide reliable drawings for the 310 Treated Effluent Disposal Facility (TEDF) and 340 Waste Handling Facility (340 Facility). The initial scope of this work plan is to provide field verified and updated versions of all the 340 Facility essential drawings. This plan can also be used for field verification of any other drawings that the facility management directs to be so updated. Any drawings revised by this work plan will be issued in an AutoCAD format

  7. Verification of Scientific Simulations via Hypothesis-Driven Comparative and Quantitative Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James P [ORNL; Heitmann, Katrin [ORNL; Petersen, Mark R [ORNL; Woodring, Jonathan [Los Alamos National Laboratory (LANL); Williams, Sean [Los Alamos National Laboratory (LANL); Fasel, Patricia [Los Alamos National Laboratory (LANL); Ahrens, Christine [Los Alamos National Laboratory (LANL); Hsu, Chung-Hsing [ORNL; Geveci, Berk [ORNL

    2010-11-01

    This article presents a visualization-assisted process that verifies scientific-simulation codes. Code verification is necessary because scientists require accurate predictions to interpret data confidently. This verification process integrates iterative hypothesis verification with comparative, feature, and quantitative visualization. Following this process can help identify differences in cosmological and oceanographic simulations.

  8. Should we embed randomized controlled trials within action research: arguing from a case study of telemonitoring

    Directory of Open Access Journals (Sweden)

    Karen Day

    2016-06-01

    Full Text Available Abstract Background Action research (AR) and randomized controlled trials (RCTs) are usually considered to be theoretically and practically incompatible. However, we argue that their respective strengths and weaknesses can be complementary. We illustrate our argument from a recent study assessing the effect of telemonitoring on health-related quality of life, self-care, hospital use, costs and the experiences of patients, informal carers and health care professionals in two urban hospital services and one remote rural primary care service in New Zealand. Methods Data came from authors' observations and field notes of discussions with three groups: the healthcare providers and healthcare consumers who participated in the research, and a group of 17 researchers and collaborators. The consumers had heart failure (Site A, urban), airways disease (Site B, urban), and diabetes (Site C, rural). The research ran from 2008 (project inception) until 2012 (project close-off). Researchers came from a wide range of disciplines. Both RCT and AR methods were recognised from early in the process but often worked in parallel rather than together. In retrospect, we have mapped our observed research processes to the AR cycle characteristics (creation of communicative space, democracy and participation, iterative learning and improvement, emergence, and accommodation of different ways of knowing). Results We describe the context, conduct and outcomes of the telemonitoring trial, framing the overall process in the language of AR. Although not fully articulated at the time, AR processes made the RCT sensitive to important context, e.g. clinical processes. They resulted in substantive changes to the design and conduct of the RCT, and to interpretation and uptake of findings, e.g. a simpler technology procurement process emerged.
Creating a communicative space enabled co-design between the researcher group and collaborators from the provider participant group, and a stronger

  9. Verification steps and personal stories in an instruction manual for seniors: Effects on confidence, motivation, and usability

    NARCIS (Netherlands)

    Loorbach, N.R.; Karreman, Joyce; Steehouder, M.F.

    2013-01-01

    Research problem: The purpose of this study was to investigate the effects of two types of motivational elements—verification steps and personal stories—in an instruction manual for a cell phone targeted at senior users (between 60 and 70 years). Research questions: What are the effects of adding

  10. FY 2000 report on the research cooperation project - Research cooperation in developmental support for oil producing countries. Development of the new field of usage of Orinoco oil for fuel of gas turbine combined power generation; 2000 nendo san'yukoku kaihatsu shien kenkyu kyoryoku jigyo seika hokokusho. Gasu tabin fukugo hatsuden nenryo muke Orinoko oil no shin yoto kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-09-01

    For the purpose of spreading the use of Orinoco crude oil, whose exports have been sluggish, and of improving economic efficiency in Venezuela, research cooperation was carried out on a project to reduce power costs and environmental loads in Japan by producing advanced gas-turbine fuel oil from Orinoco oil and exporting it to Japan. The project conducted technical verification that gas turbine fuel oil (GTF) can be produced from Orinoco oil, and economic verification based on those results. The technical verification confirmed that from Orinoco crude oil, which is heavy, high in sulfur and high in heavy metal concentration, a refined oil satisfying the following properties of the advanced gas turbine fuel oil could be trial-produced using a distilling unit, SDA unit, desulfurizer and de-metalling unit: vanadium concentration 0.5 wtppm or below; sodium + potassium concentration 1.0 wtppm or below; viscosity 20 cSt or below at 135 degrees C. Further, the economic verification gave the favourable result that the price was lower than the LNG price and the domestic prices of A heavy oil and C heavy oil. (NEDO)
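The acceptance limits quoted in this record reduce to a simple property check. A minimal sketch in Python (the property names and the sample assay values are illustrative assumptions, not data from the report):

```python
# Acceptance limits for the advanced gas-turbine fuel (GTF) as quoted above.
GTF_LIMITS = {
    "vanadium_wtppm": 0.5,       # vanadium concentration: 0.5 wtppm or below
    "na_plus_k_wtppm": 1.0,      # sodium + potassium: 1.0 wtppm or below
    "viscosity_cst_135c": 20.0,  # viscosity at 135 degrees C: 20 cSt or below
}

def check_gtf_spec(sample: dict) -> list:
    """Return the properties that exceed the GTF limits (missing values fail)."""
    return [prop for prop, limit in GTF_LIMITS.items()
            if sample.get(prop, float("inf")) > limit]

# Hypothetical assay of a trial-produced refined oil.
sample = {"vanadium_wtppm": 0.3, "na_plus_k_wtppm": 0.8, "viscosity_cst_135c": 18.5}
print(check_gtf_spec(sample))  # -> [] (all limits met)
```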

  11. Formal verification of Simulink/Stateflow diagrams a deductive approach

    CERN Document Server

    Zhan, Naijun; Zhao, Hengjun

    2017-01-01

    This book presents a state-of-the-art technique for formal verification of continuous-time Simulink/Stateflow diagrams, featuring an expressive hybrid system modelling language, a powerful specification logic and deduction-based verification approach, and some impressive, realistic case studies. Readers will learn the HCSP/HHL-based deductive method and the use of corresponding tools for formal verification of Simulink/Stateflow diagrams. They will also gain some basic ideas about fundamental elements of formal methods such as formal syntax and semantics, and especially the common techniques applied in formal modelling and verification of hybrid systems. By investigating the successful case studies, readers will realize how to apply the pure theory and techniques to real applications, and hopefully will be inspired to start to use the proposed approach, or even develop their own formal methods in their future work.

  12. Ontology Matching with Semantic Verification.

    Science.gov (United States)

    Jean-Mary, Yves R; Shironoshita, E Patrick; Kabuka, Mansur R

    2009-09-01

    ASMOV (Automated Semantic Matching of Ontologies with Verification) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet, and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies.
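ASMOV's matchers are far richer than a snippet can show, but the flavour of a lexical similarity measure followed by a greedy alignment step can be illustrated with a toy Jaccard matcher (the function names, labels and threshold are hypothetical; this is not the ASMOV algorithm itself):

```python
def token_jaccard(label_a: str, label_b: str) -> float:
    """Toy lexical matcher: Jaccard overlap of lower-cased label tokens."""
    a, b = set(label_a.lower().split()), set(label_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def best_alignment(concepts_a, concepts_b, threshold=0.5):
    """Greedy one-to-one alignment keeping pairs above a similarity threshold."""
    scored = sorted(((token_jaccard(a, b), a, b)
                     for a in concepts_a for b in concepts_b), reverse=True)
    used_a, used_b, pairs = set(), set(), []
    for s, a, b in scored:
        if s >= threshold and a not in used_a and b not in used_b:
            pairs.append((a, b, round(s, 2)))
            used_a.add(a)
            used_b.add(b)
    return pairs

print(best_alignment(["heart disease", "blood test"],
                     ["disease of heart", "laboratory blood test"]))
```

A real system would iterate this with structural and extensional evidence and then verify the alignment for semantic inconsistencies, as the record describes.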

  13. Interpolant tree automata and their application in Horn clause verification

    DEFF Research Database (Denmark)

    Kafle, Bishoksan; Gallagher, John Patrick

    2016-01-01

    This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. Evaluation on Horn clause verification problems indicates that the combination of interpolant tree automata with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.

  14. Synergies across verification regimes: Nuclear safeguards and chemical weapons convention compliance

    International Nuclear Information System (INIS)

    Kadner, Steven P.; Turpen, Elizabeth

    2001-01-01

    In the implementation of all arms control agreements, accurate verification is essential. In setting a course for verifying compliance with a given treaty - whether the NPT or the CWC - one must make a technical comparison of existing information-gathering capabilities against the constraints in an agreement. Then it must be decided whether this level of verifiability is good enough. Generally, the policy standard of 'effective verification' includes the ability to detect significant violations, with high confidence, in sufficient time to respond effectively with policy adjustments or other responses, as needed. It is at this juncture where verification approaches have traditionally diverged. Nuclear safeguards requirements have taken one path while chemical verification methods have pursued another. However, recent technological advances have brought a number of changes affecting verification, and lately their pace has been accelerating. First, all verification regimes have more and better information as a result of new kinds of sensors, imagery, and other technologies. Second, the verification provisions in agreements have also advanced, to include on-site inspections, portal monitoring, data exchanges, and a variety of transparency, confidence-building, and other cooperative measures. Together, these developments translate into a technological overlap between institutional verification measures such as the NPT's safeguards requirements, implemented by the IAEA, and the CWC's verification provisions, implemented by the OPCW. Hence, a priority of international treaty-implementing organizations is exploring the development of a synergistic and coordinated approach to WMD policy making that takes into account existing inter-linkages between nuclear, chemical, and biological weapons issues. Specific areas of coordination include harmonizing information systems and information exchanges, and the shared application of scientific mechanisms, as well as collaboration on technological developments.

  15. Heavy water physical verification in power plants

    International Nuclear Information System (INIS)

    Morsy, S.; Schuricht, V.; Beetle, T.; Szabo, E.

    1986-01-01

    This paper is a report on the Agency experience in verifying heavy water inventories in power plants. The safeguards objectives and goals for such activities are defined in the paper. The heavy water is stratified according to the flow within the power plant, including upgraders. A safeguards scheme based on a combination of records auditing, comparing records and reports, and physical verification has been developed. This scheme has elevated the status of heavy water safeguards to a level comparable to nuclear material safeguards in bulk facilities. It leads to attribute and variable verification of the heavy water inventory in the different system components and in the store. The verification methods include volume and weight determination, sampling and analysis, non-destructive assay (NDA), and criticality check. The analysis of the different measurement methods and their limits of accuracy are discussed in the paper

  16. Project report: Experimental planning and verification of working fluids (WP 5)

    DEFF Research Database (Denmark)

    Babi, Deenesh Kavi

    Computer-aided molecular design (CAMD) helps to reduce the number of experiments needed for the selection/design of optimal working fluids. In reducing the number of experiments, solutions obtained by trial and error are replaced by solutions based on mixture-process properties. In generating optimal working fluid candidates, a database is required that can be searched in order to differentiate and determine whether the generated candidates are existing or novel. The next step upon selection of the candidates is performing experiments to test and verify the generated working fluids. If performed properly, the experimental step is solely verification. Experiments can be performed virtually (in order to further reduce the number of required experiments) and/or physically. Therefore, the objective of this work was the development of a database of existing working...

  17. Impact of individual clinical outcomes on trial participants' perspectives on enrollment in emergency research without consent.

    Science.gov (United States)

    Whitesides, Louisa W; Baren, Jill M; Biros, Michelle H; Fleischman, Ross J; Govindarajan, Prasanthi R; Jones, Elizabeth B; Pancioli, Arthur M; Pentz, Rebecca D; Scicluna, Victoria M; Wright, David W; Dickert, Neal W

    2017-04-01

    Evidence suggests that patients are generally accepting of their enrollment in trials for emergency care conducted under exception from informed consent. It is unknown whether individuals with more severe initial injuries or worse clinical outcomes have different perspectives. Determining whether these differences exist may help to structure post-enrollment interactions. Primary clinical data from the Progesterone for the Treatment of Traumatic Brain Injury trial were matched to interview data from the Patients' Experiences in Emergency Research-Progesterone for the Treatment of Traumatic Brain Injury study. Answers to three key questions from Patients' Experiences in Emergency Research-Progesterone for the Treatment of Traumatic Brain Injury study were analyzed in the context of enrolled patients' initial injury severity (initial Glasgow Coma Scale and Injury Severity Score) and principal clinical outcomes (Extended Glasgow Outcome Scale and Extended Glasgow Outcome Scale relative to initial injury severity). The three key questions from Patients' Experiences in Emergency Research-Progesterone for the Treatment of Traumatic Brain Injury study addressed participants' general attitude toward inclusion in the Progesterone for the Treatment of Traumatic Brain Injury trial (general trial inclusion), their specific attitude toward being included in Progesterone for the Treatment of Traumatic Brain Injury trial under the exception from informed consent (personal exception from informed consent enrollment), and their attitude toward the use of exception from informed consent in the Progesterone for the Treatment of Traumatic Brain Injury trial in general (general exception from informed consent enrollment). Qualitative analysis of interview transcripts was performed to provide contextualization and to determine the extent to which respondents framed their attitudes in terms of clinical experience. 
Clinical data from Progesterone for the Treatment of Traumatic Brain Injury

  18. Influences on recruitment to randomised controlled trials in mental health settings in England: a national cross-sectional survey of researchers working for the Mental Health Research Network.

    Science.gov (United States)

    Borschmann, Rohan; Patterson, Sue; Poovendran, Dilkushi; Wilson, Danielle; Weaver, Tim

    2014-02-17

    Recruitment to trials is complex and often protracted; selection bias may compromise generalisability. In the mental health field (as elsewhere), diverse factors have been described as hindering researcher access to potential participants, and various strategies have been proposed to overcome barriers. However, the extent to which the various influences identified in the literature are operational across mental health settings in England has not been systematically examined. A cross-sectional, online survey of clinical studies officers employed by the Mental Health Research Network in England to recruit to trials from National Health Service mental health services. The bespoke questionnaire invited participants to report exposure to specified influences on recruitment, the perceived impact of these on access to potential participants, and to describe additional positive or negative influences on recruitment. Analysis employed descriptive statistics, the framework approach and triangulation of data. Questionnaires were returned by 98 (58%) of 170 clinical studies officers, who reported diverse experience. The data demonstrated a disjunction between policy and practice. While the particulars of trial design and various marketing and communication strategies could influence recruitment, the consensus was that the culture of NHS mental health services is not conducive to research. Since financial rewards for recruitment were paid to Trusts, and feedback about studies seldom reached frontline services, clinicians were described as distanced from research. Facing continual service change and demanding clinical workloads, clinicians generally did not prioritise recruitment activities. Incentives to trial participants had variable impact on access, but recruitment could be enhanced by the engagement of senior investigators and by integrating referral with routine practice. Comprehensive, robust feasibility studies and reciprocity between researchers and clinicians were considered crucial to

  19. Logic verification system for power plant sequence diagrams

    International Nuclear Information System (INIS)

    Fukuda, Mitsuko; Yamada, Naoyuki; Teshima, Toshiaki; Kan, Ken-ichi; Utsunomiya, Mitsugu.

    1994-01-01

    A logic verification system for sequence diagrams of power plants has been developed. The system's main function is to verify correctness of the logic realized by sequence diagrams for power plant control systems. The verification is based on a symbolic comparison of the logic of the sequence diagrams with the logic of the corresponding IBDs (Interlock Block Diagrams), in combination with reference to design knowledge. The developed system points out the sub-circuit which is responsible for any existing mismatches between the IBD logic and the logic realized by the sequence diagrams. Applications to the verification of actual sequence diagrams of power plants confirmed that the developed system is practical and effective. (author)
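The symbolic comparison described in this record - checking that two representations realize the same logic and pointing out where they disagree - can be sketched, for small input sets, as an exhaustive truth-table check (the example interlock logic and names are hypothetical, not from the system):

```python
from itertools import product

def find_mismatches(logic_a, logic_b, inputs):
    """Compare two boolean functions over all input assignments and
    return the assignments on which they disagree."""
    mismatches = []
    for values in product([False, True], repeat=len(inputs)):
        env = dict(zip(inputs, values))
        if logic_a(env) != logic_b(env):
            mismatches.append(env)
    return mismatches

# Hypothetical IBD logic vs. the logic realized by a sequence diagram:
ibd      = lambda e: e["start"] and not e["trip"]
sequence = lambda e: e["start"]           # trip interlock missing

print(find_mismatches(ibd, sequence, ["start", "trip"]))
# -> [{'start': True, 'trip': True}]
```

A production tool would compare symbolically (e.g. with BDDs) rather than enumerate, but the mismatch report is the same idea.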

  20. Formal verification of complex properties on PLC programs

    CERN Document Server

    Darvas, D; Voros, A; Bartha, T; Blanco Vinuela, E; Gonzalez Suarez, V M

    2014-01-01

    Formal verification has become a recommended practice in the safety-critical application areas. However, due to the complexity of practical control and safety systems, the state space explosion often prevents the use of formal analysis. In this paper we extend our former verification methodology with effective property preserving reduction techniques. For this purpose we developed general rule-based reductions and a customized version of the Cone of Influence (COI) reduction. Using these methods, the verification of complex requirements formalised with temporal logics (e.g. CTL, LTL) can be orders of magnitude faster. We use the NuSMV model checker on a real-life PLC program from CERN to demonstrate the performance of our reduction techniques.
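The Cone of Influence (COI) reduction mentioned in this record keeps only the variables that can affect the property under verification. A minimal sketch as backward reachability over a variable-dependency graph (the example model is hypothetical, not the CERN PLC program):

```python
def cone_of_influence(deps, property_vars):
    """deps maps each variable to the variables its next-state value reads.
    Returns the set of variables that can influence the property."""
    cone, frontier = set(), set(property_vars)
    while frontier:
        v = frontier.pop()
        if v not in cone:
            cone.add(v)
            frontier |= set(deps.get(v, ()))
    return cone

# Hypothetical PLC model: 'alarm' depends on 'sensor' and 'mode';
# 'lamp' depends on 'button' but cannot affect the property.
deps = {"alarm": ["sensor", "mode"], "lamp": ["button"], "mode": ["switch"]}
print(sorted(cone_of_influence(deps, ["alarm"])))
# -> ['alarm', 'mode', 'sensor', 'switch']
```

Variables outside the cone ('lamp', 'button' here) can be dropped from the model before model checking, shrinking the state space without changing the verdict.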

  1. Implementation and verification of global optimization benchmark problems

    Science.gov (United States)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, using a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literary sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
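The single-description idea in this record - one expression yielding both point values and interval bounds - can be sketched with a tiny expression tree and a natural interval extension (the API is illustrative, not the authors' C++ library; gradient generation is omitted):

```python
class Interval:
    """Closed interval [lo, hi] supporting + and * (enough for polynomials)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        products = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Interval(min(products), max(products))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def evaluate(expr, env, lift=lambda c: c):
    """Evaluate one expression tree over numbers or Intervals.
    expr is ('var', name), ('const', c), or ('+'|'*', left, right)."""
    op = expr[0]
    if op == "var":
        return env[expr[1]]
    if op == "const":
        return lift(expr[1])
    a = evaluate(expr[1], env, lift)
    b = evaluate(expr[2], env, lift)
    return a + b if op == "+" else a * b

# f(x) = x*x + 3, described once, evaluated two ways:
f = ("+", ("*", ("var", "x"), ("var", "x")), ("const", 3))
print(evaluate(f, {"x": 2.0}))  # point value: 7.0
print(evaluate(f, {"x": Interval(-1, 2)}, lift=lambda c: Interval(c, c)))
# naive interval extension: [1, 7] (the dependency problem makes this
# wider than the true range [3, 7])
```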

  2. Application verification research of cloud computing technology in the field of real time aerospace experiment

    Science.gov (United States)

    Wan, Junwei; Chen, Hongyan; Zhao, Jing

    2017-08-01

    According to the real-time, reliability and safety requirements of aerospace experiments, a single-center cloud computing technology application verification platform was constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments was tested and verified. Based on analysis of the test results, a preliminary conclusion is reached: a cloud computing platform can be applied to compute-intensive aerospace experiment workloads; for I/O-intensive workloads, traditional physical machines are recommended.

  3. Towards automatic verification of ladder logic programs

    OpenAIRE

    Zoubek , Bohumir; Roussel , Jean-Marc; Kwiatkowska , Martha

    2003-01-01

    Control system programs are usually validated by testing prior to their deployment. Unfortunately, testing is not exhaustive and therefore it is possible that a program which passed all the required tests still contains errors. In this paper we apply techniques of automatic verification to a control program written in ladder logic. A model is constructed mechanically from the ladder logic program and subjected to automatic verification against requirements that include...

  4. Inventory verification measurements using neutron multiplicity counting

    International Nuclear Information System (INIS)

    Ensslin, N.; Foster, L.A.; Harker, W.C.; Krick, M.S.; Langner, D.G.

    1998-01-01

    This paper describes a series of neutron multiplicity measurements of large plutonium samples at the Los Alamos Plutonium Facility. The measurements were corrected for bias caused by neutron energy spectrum shifts and nonuniform multiplication, and are compared with calorimetry/isotopics. The results show that multiplicity counting can increase measurement throughput and yield good verification results for some inventory categories. The authors provide recommendations on the future application of the technique to inventory verification

  5. A booklet on participants' rights to improve consent for clinical research: a randomized trial.

    Directory of Open Access Journals (Sweden)

    Jocelyne R Benatar

    Full Text Available OBJECTIVE: Information on the rights of subjects in clinical trials has become increasingly complex and difficult to understand. This study evaluates whether a simple booklet which is relevant to all research studies improves the understanding of rights needed for subjects to provide informed consent. METHODS: 21 currently used informed consent forms (ICFs) from international clinical trials were separated into information related to the specific research study, and general information on participants' rights. A booklet designed to provide information on participants' rights, which used simple language, was developed to replace this information in current ICFs. Readability of each component of the ICFs and the booklet was then assessed using the Flesch-Kincaid Reading ease score (FK). To further evaluate the booklet, 282 hospital inpatients were randomised to one of three ways to present research information: a standard ICF, the booklet combined with a short ICF, or the booklet combined with a simplified ICF. Comprehension of information related to the research proposal and to participants' rights was assessed by questionnaire. RESULTS: Information related to participants' rights contributed an average of 44% of the words in standard ICFs, and was harder to read than information describing the clinical trial (FK 25 versus (vs.) 41 respectively, p = 0.0003). The booklet reduced the number of words and improved FK from 25 to 42. The simplified ICF had a slightly higher FK score than the standard ICF (50 vs. 42). Comprehension assessed in inpatients was better for the booklet and short ICF (62%, 95% confidence interval (CI) 56 to 67, correct) or simplified ICF (62%, CI 58 to 68, correct) compared to 52% (CI 47 to 57) correct for the standard ICF, p = 0.009. This was due to better understanding of questions on rights (62% vs. 49% correct, p = 0.0008). Comprehension of study related information was similar for the simplified and standard ICF (60% vs. 64

  6. A booklet on participants' rights to improve consent for clinical research: a randomized trial.

    Science.gov (United States)

    Benatar, Jocelyne R; Mortimer, John; Stretton, Matthew; Stewart, Ralph A H

    2012-01-01

    Information on the rights of subjects in clinical trials has become increasingly complex and difficult to understand. This study evaluates whether a simple booklet which is relevant to all research studies improves the understanding of rights needed for subjects to provide informed consent. 21 currently used informed consent forms (ICFs) from international clinical trials were separated into information related to the specific research study, and general information on participants' rights. A booklet designed to provide information on participants' rights, which used simple language, was developed to replace this information in current ICFs. Readability of each component of the ICFs and the booklet was then assessed using the Flesch-Kincaid Reading ease score (FK). To further evaluate the booklet, 282 hospital inpatients were randomised to one of three ways to present research information: a standard ICF, the booklet combined with a short ICF, or the booklet combined with a simplified ICF. Comprehension of information related to the research proposal and to participants' rights was assessed by questionnaire. Information related to participants' rights contributed an average of 44% of the words in standard ICFs, and was harder to read than information describing the clinical trial (FK 25 versus (vs.) 41 respectively, p = 0.0003). The booklet reduced the number of words and improved FK from 25 to 42. The simplified ICF had a slightly higher FK score than the standard ICF (50 vs. 42). Comprehension assessed in inpatients was better for the booklet and short ICF (62%, 95% confidence interval (CI) 56 to 67, correct) or simplified ICF (62%, CI 58 to 68, correct) compared to 52% (CI 47 to 57) correct for the standard ICF, p = 0.009. This was due to better understanding of questions on rights (62% vs. 49% correct, p = 0.0008). Comprehension of study related information was similar for the simplified and standard ICF (60% vs. 64% correct, p = 0.68). A booklet
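Both versions of this record assess readability with the Flesch-Kincaid Reading Ease score, which can be computed directly. A sketch in Python (the syllable counter is a crude vowel-group heuristic, an assumption of this sketch, and the text is assumed non-empty):

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words). Higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

# A short, simple sentence pair scores high (easy to read):
print(round(flesch_reading_ease("You may stop at any time. Your care will not change."), 1))
# -> 85.9
```

Short sentences and few syllables per word push the score up, which is exactly why the simplified booklet scored FK 42 against 25 for the rights sections of standard consent forms.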

  7. The participation of minors in preventive HIV research trials in South Africa: legal and human rights considerations.

    Science.gov (United States)

    van Wyk, Christa

    2003-01-01

    The constitutional prohibition of experimentation/research without the individual subject's (own) consent is investigated. A distinction is drawn between therapeutic and non-therapeutic research. A minor of 14 is competent to consent independently to medical treatment (which would include therapeutic research), but not to non-therapeutic research. A minor must be at least 18 years to be able to do so. Proxy consent can be secured for the participation of minors under 18 in non-therapeutic research only if they assent, if their participation in the research is indispensable and the research carries no more than negligible risk. Since the risks inherent in HIV preventive vaccine trials may carry more than negligible risk, these trials may not be carried out on children under 18. The limitation of rights and the consideration of foreign and international law in the interpretation of the South African Bill of Rights are investigated.

  8. Return of individual research results and incidental findings in the clinical trials cooperative group setting.

    Science.gov (United States)

    Ferriere, Michael; Van Ness, Brian

    2012-04-01

    The National Cancer Institute (NCI)-funded cooperative group cancer clinical trial system develops experimental therapies and often collects samples from patients for correlative research. The cooperative group bank (CGB) system maintains biobanks with a current policy not to return research results to individuals. An online survey was created, and 10 directors of CGBs completed the surveys asking about understanding and attitudes in changing policies to consider return of incidental findings (IFs) and individual research results (IRRs) of health significance. The potential impact of the 10 consensus recommendations of Wolf et al. presented in this issue are examined. Reidentification of samples is often not problematic; however, changes to the current banking and clinical trial systems would require significant effort to fulfill an obligation of recontact of subjects. Additional resources, as well as a national advisory board would be required to standardize implementation.

  9. The UK clinical research network - has it been a success for dermatology clinical trials?

    OpenAIRE

    Charlesworth Lisa; Perdue Jo; Foster Katharine; Koller Karin; Thomas Kim S; Chalmers Joanne R

    2011-01-01

    Abstract Background Following the successful introduction of five topic-specific research networks in the UK, the Comprehensive Local Research Network (CLRN) was established in 2008 in order to provide a blanket level of support across the whole country regardless of the clinical discipline. The role of the CLRN was to facilitate recruitment into clinical trials, and to encourage greater engagement in research throughout the National Health Service (NHS). Methods This report evaluates the imp...

  10. Verification of analysis methods for predicting the behaviour of seismically isolated nuclear structures. Final report of a co-ordinated research project 1996-1999

    International Nuclear Information System (INIS)

    2002-06-01

    This report is a summary of the work performed under a co-ordinated research project (CRP) entitled Verification of Analysis Methods for Predicting the Behaviour of Seismically isolated Nuclear Structures. The project was organized by the IAEA on the recommendation of the IAEA's Technical Working Group on Fast Reactors (TWGFR) and carried out from 1996 to 1999. One of the primary requirements for nuclear power plants and facilities is to ensure safety and the absence of damage under strong external dynamic loading from, for example, earthquakes. The designs of liquid metal cooled fast reactors (LMFRs) include systems which operate at low pressure and include components which are thin-walled and flexible. These systems and components could be considerably affected by earthquakes in seismic zones. Therefore, the IAEA through its advanced reactor technology development programme supports the activities of Member States to apply seismic isolation technology to LMFRs. The application of this technology to LMFRs and other nuclear plants and related facilities would offer the advantage that standard designs may be safely used in areas with a seismic risk. The technology may also provide a means of seismically upgrading nuclear facilities. Design analyses applied to such critical structures need to be firmly established, and the CRP provided a valuable tool in assessing their reliability. Ten organizations from India, Italy, Japan, the Republic of Korea, the Russian Federation, the United Kingdom, the United States of America and the European Commission co-operated in this CRP. This report documents the CRP activities, provides the main results and recommendations and includes the work carried out by the research groups at the participating institutes within the CRP on verification of their analysis methods for predicting the behaviour of seismically isolated nuclear structures

  11. The MODUS Approach to Formal Verification

    Directory of Open Access Journals (Sweden)

    Brewka Lukasz

    2014-03-01

    Full Text Available Background: Software reliability is of great importance for the development of embedded systems, which are often used in applications that have requirements for safety. Since the life cycle of embedded products is becoming shorter, productivity and quality are simultaneously required in the process of providing competitive products. Objectives: In relation to this, the MODUS (Method and supporting toolset advancing embedded systems quality) project aims to provide small and medium-sized businesses with ways to improve their position in the embedded market through a pragmatic and viable solution. Methods/Approach: This paper describes the MODUS project, with a focus on the technical methodologies that can assist formal verification and formal model checking. Results: Based on automated analysis of the characteristics of the system, and by controlling the choice among existing open-source model verification engines, the toolset automates model verification, producing inputs to be fed into these engines. Conclusions: The MODUS approach is aligned with present market needs; familiarity with tools, ease of use and compatibility/interoperability remain among the most important criteria when selecting the development environment for a project.

  12. Symposium on international safeguards: Verification and nuclear material security. Book of extended synopses

    International Nuclear Information System (INIS)

    2001-01-01

    The symposium covered the topics related to international safeguards, verification and nuclear materials security, namely: verification and nuclear material security; the NPT regime: progress and promises; the Additional Protocol as an important tool for the strengthening of the safeguards system; the nuclear threat and the nuclear threat initiative. Eighteen sessions dealt with the following subjects: the evolution of IAEA safeguards (including strengthened safeguards, present and future challenges; verification of correctness and completeness of initial declarations; implementation of the Additional Protocol, progress and experience; security of material; nuclear disarmament and ongoing monitoring and verification in Iraq; evolution of IAEA verification in relation to nuclear disarmament); integrated safeguards; physical protection and illicit trafficking; destructive analysis for safeguards; the additional protocol; innovative safeguards approaches; IAEA verification and nuclear disarmament; environmental sampling; safeguards experience; safeguards equipment; panel discussion on development of state systems of accountancy and control; information analysis in the strengthened safeguard system; satellite imagery and remote monitoring; emerging IAEA safeguards issues; verification technology for nuclear disarmament; the IAEA and the future of nuclear verification and security

  13. Symposium on international safeguards: Verification and nuclear material security. Book of extended synopses

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    The symposium covered the topics related to international safeguards, verification and nuclear materials security, namely: verification and nuclear material security; the NPT regime: progress and promises; the Additional Protocol as an important tool for the strengthening of the safeguards system; the nuclear threat and the nuclear threat initiative. Eighteen sessions dealt with the following subjects: the evolution of IAEA safeguards (including strengthened safeguards, present and future challenges; verification of correctness and completeness of initial declarations; implementation of the Additional Protocol, progress and experience; security of material; nuclear disarmament and ongoing monitoring and verification in Iraq; evolution of IAEA verification in relation to nuclear disarmament); integrated safeguards; physical protection and illicit trafficking; destructive analysis for safeguards; the additional protocol; innovative safeguards approaches; IAEA verification and nuclear disarmament; environmental sampling; safeguards experience; safeguards equipment; panel discussion on development of state systems of accountancy and control; information analysis in the strengthened safeguard system; satellite imagery and remote monitoring; emerging IAEA safeguards issues; verification technology for nuclear disarmament; the IAEA and the future of nuclear verification and security.

  14. Verification Games: Crowd-Sourced Formal Verification

    Science.gov (United States)

    2016-03-01

    ...additional paintbrushes. Additionally, in Paradox, human players are never given small optimization problems (for example, toggling the values of 50...). The games were developed by the Center for Game Science: Pipe Jam, Traffic Jam, Flow Jam, and Paradox. Verification tools and games were integrated to verify...

  15. Comparison of lumiracoxib with naproxen and ibuprofen in the Therapeutic Arthritis Research and Gastrointestinal Event Trial (TARGET), cardiovascular outcomes: randomised controlled trial.

    NARCIS (Netherlands)

    Farkouh, M.E.; Kirshner, H.; Harrington, R.A.; Ruland, S.; Verheugt, F.W.A.; Schnitzer, T.J.; Burmester, G.R.; Mysler, E.; Hochberg, M.C.; Doherty, M.; Ehrsam, E.; Gitton, X.; Krammer, G.; Mellein, B.; Gimona, A.; Matchaba, P.; Hawkey, C.J.; Chesebro, J.H.

    2004-01-01

    BACKGROUND: The potential for cyclo-oxygenase 2 (COX2)-selective inhibitors to increase the risk for myocardial infarction is controversial. The Therapeutic Arthritis Research and Gastrointestinal Event Trial (TARGET) aimed to assess gastrointestinal and cardiovascular safety of the COX2 inhibitor

  16. Towards a More Competitive Italy in Clinical Research: The Survey of Attitudes towards Trial Sites in Europe (The SAT-EU Study™)

    Directory of Open Access Journals (Sweden)

    Marta Gehring

    2014-11-01

    Full Text Available Background: Italy is Europe's third largest pharmaceutical market, yet it ranks only ninth in the number of NIH-registered clinical trials per capita. The aim of our study was to explore stakeholders' perception of Italy as a place to undertake clinical trials, and to estimate the potential economic impact of selected reforms in terms of incremental trial activity. Methods: The Survey of Attitudes towards Trials in Europe (SAT-EU Study) was an anonymous, web-based survey that systematically assessed factors impacting clinical trial site selection in Europe. Estimates of the Italian economic impact were developed in collaboration with AICRO (Association of Italian Contract Research Organisations). Results: Responses were obtained from 485 professionals in 34 countries (15% residing in Italy), representing over 100 institutions and spanning BioPharma, Clinical Research Organizations (CROs), and Academic Clinical Trial Units (CTUs). Italy ranked tenth of twelve in the accessibility and transparency of the information required to run clinical trials, and last with respect to the predictability and speed of Ethics Committees. The cost of running clinical trials was not considered critical, whereas the fragmented and slow approval process was. A streamlined, centralized trial authorization would translate into an estimated 1.1 billion euros of incremental trial investment over three years. Conclusions: Clinical trial professionals consider Italy's governance of clinical research suboptimal, among the worst in Europe, and indicate that much could be done to make Italy more attractive for clinical trial investments. The study also provides evidence of stakeholders' willingness to invest in trials, and of the economic consequences, provided effective reforms are put in place.

  17. Verification of product design using regulation knowledge base and Web services

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ik June [KAERI, Daejeon (Korea, Republic of); Lee, Jae Chul; Mun, Du Hwan [Kyungpook National University, Daegu (Korea, Republic of); Kim, Byung Chul [Dong-A University, Busan (Korea, Republic of); Hwang, Jin Sang [PartDB Co., Ltd., Daejeon (Korea, Republic of); Lim, Chae Ho [Korea Institute of Industrial Technology, Incheon (Korea, Republic of)

    2015-11-15

    Since product regulations contain important rules or codes that manufacturers must follow, automatic verification of a product design against the regulations related to that product is necessary. To this end, this study presents a new method for verifying product designs using a regulation knowledge base and Web services. The regulation knowledge base, consisting of a product ontology and rules, was built with a hybrid technique combining ontology and programming languages. A Web service for design verification was developed to ensure flexible extension of the knowledge base. By virtue of these two technical features, design verification can be provided for various products while changes to the system architecture are minimized.
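The rule-checking idea can be sketched as follows. The product attributes and regulation rules below are hypothetical, and the real system encodes its rules in an ontology and serves the check over a Web service rather than as Python lambdas:

```python
# Hypothetical sketch of rule-based design verification: a product design is
# a set of attribute/value pairs, and each regulation is a predicate over it.
# All names and limits here are illustrative, not taken from the paper.

def check_design(product, rules):
    """Return the names of all regulation rules the product design violates."""
    return [name for name, rule in rules.items() if not rule(product)]

# Example regulation rules for a hypothetical pressure vessel.
rules = {
    "max_operating_pressure": lambda p: p["pressure_MPa"] <= 10.0,
    "min_wall_thickness":     lambda p: p["wall_mm"] >= 5.0,
    "approved_material":      lambda p: p["material"] in {"SA-516", "SA-387"},
}

product = {"pressure_MPa": 12.5, "wall_mm": 6.0, "material": "SA-516"}
print(check_design(product, rules))  # -> ['max_operating_pressure']
```

Keeping the rules in a data structure separate from the checking code mirrors the paper's design goal: the knowledge base can be extended without touching the verification service.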

  18. Verification of product design using regulation knowledge base and Web services

    International Nuclear Information System (INIS)

    Kim, Ik June; Lee, Jae Chul; Mun, Du Hwan; Kim, Byung Chul; Hwang, Jin Sang; Lim, Chae Ho

    2015-01-01

    Since product regulations contain important rules or codes that manufacturers must follow, automatic verification of a product design against the regulations related to that product is necessary. To this end, this study presents a new method for verifying product designs using a regulation knowledge base and Web services. The regulation knowledge base, consisting of a product ontology and rules, was built with a hybrid technique combining ontology and programming languages. A Web service for design verification was developed to ensure flexible extension of the knowledge base. By virtue of these two technical features, design verification can be provided for various products while changes to the system architecture are minimized.

  19. Compromises produced by the dialectic between self-verification and self-enhancement.

    Science.gov (United States)

    Morling, B; Epstein, S

    1997-12-01

    Three studies of people's reactions to evaluative feedback demonstrated that the dialectic between self-enhancement and self-verification results in compromises between these 2 motives, as hypothesized in cognitive-experiential self-theory. The demonstration was facilitated by 2 procedural improvements: enhancement and verification were established by calibrating evaluative feedback against self-appraisals, and degree of enhancement and of verification were varied along a continuum, rather than categorically. There was also support for the hypotheses that processing in an intuitive-experiential mode favors enhancement and processing in an analytical-rational mode favors verification in the kinds of situations investigated.

  20. Calibration and verification of surface contamination meters --- Procedures and techniques

    International Nuclear Information System (INIS)

    Schuler, C.; Butterweck, G.; Wernli, C.; Bochud, F.; Valley, J.-F.

    2007-03-01

    A standardised measurement procedure for surface contamination meters (SCM) is presented. The procedure aims at rendering surface contamination measurements simply and safely interpretable. Essential to the approach is the introduction and common use of the radionuclide-specific quantity 'guideline value', specified in the Swiss Radiation Protection Ordinance, as the unit for the measurement of surface activity. The corresponding radionuclide-specific 'guideline value count rate' can be summarized as a verification reference value for a group of radionuclides ('basis guideline value count rate'). The concept can be generalized to SCM of the same type, or to SCM of different types using the same principle of detection. An SCM multi-source calibration technique is applied for the determination of the instrument efficiency. Four different electron radiation energy regions, four different photon radiation energy regions, and an alpha radiation energy region are represented by a set of calibration sources built according to ISO standard 8769-2. A guideline value count rate, representing the activity per unit area of a surface contamination of one guideline value, can be calculated for any radionuclide using the instrument efficiency, radionuclide decay data, contamination source efficiency, guideline value averaging area (100 cm²), and the radionuclide-specific guideline value. In this way, instrument responses for the evaluation of surface contaminations are obtained for radionuclides without available calibration sources, as well as for short-lived radionuclides, for which the continuous replacement of certified calibration sources can lead to unreasonable costs. SCM verification is based on the surface emission rates of reference sources with an active area of 100 cm². The verification for a given list of radionuclides is based on the radionuclide-specific quantity guideline value count rate.
Guideline value count rates for groups of radionuclides can be represented within the maximum
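The count-rate calculation described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formula, and every numerical value is invented:

```python
# Hedged sketch of a guideline value count rate R_GV for a surface
# contamination meter: assume the expected count rate is instrument
# efficiency x contamination source (emission) efficiency x total activity
# on the 100 cm^2 averaging area. All numbers below are illustrative.

def guideline_value_count_rate(instr_eff, source_eff, guideline_Bq_per_cm2,
                               area_cm2=100.0):
    """Counts per second expected from a contamination of one guideline value."""
    activity_Bq = guideline_Bq_per_cm2 * area_cm2  # total activity on the area
    return instr_eff * source_eff * activity_Bq

# Illustrative values for a hypothetical beta emitter:
r_gv = guideline_value_count_rate(instr_eff=0.25, source_eff=0.5,
                                  guideline_Bq_per_cm2=3.0)
print(round(r_gv, 1))  # -> 37.5 counts per second
```

An instrument reading can then be expressed in multiples of the guideline value by dividing the measured count rate by `r_gv`, which is what makes the readings "simply and safely interpretable".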

  1. Clinical Trials

    Medline Plus

    Full Text Available ... Trials About Clinical Trials Clinical trials are research studies that explore whether a medical strategy, treatment, or ... and Clinical Studies Web page. Children and Clinical Studies Learn more about Children and Clinical Studies Importance ...

  2. Clinical Trials

    Medline Plus

    Full Text Available ... protocol affect the trial's results. Comparison Groups In most clinical trials, researchers use comparison groups. This means ... study before you agree to take part. Randomization Most clinical trials that have comparison groups use randomization. ...

  3. Integrated knowledge base tool for acquisition and verification of NPP alarm systems

    International Nuclear Information System (INIS)

    Park, Joo Hyun; Seong, Poong Hyun

    1998-01-01

    Knowledge acquisition and knowledge base verification are important activities in developing knowledge-based systems such as alarm processing systems. In this work, we developed an integrated tool for knowledge acquisition and verification of NPP alarm processing systems using the G2 tool. The tool integrates the document analysis method and the ECPN matrix analysis method for knowledge acquisition and knowledge verification, respectively. It enables knowledge engineers to carry out their tasks, from knowledge acquisition through knowledge verification, consistently.

  4. [The advance in the research and therapeutic trials of amyotrophic lateral sclerosis].

    Science.gov (United States)

    Moriwaka, F; Tashiro, K

    2000-12-01

    Research concerning the pathogenesis of amyotrophic lateral sclerosis (ALS) has made steady progress in the last 10 years, including the discovery of SOD mutations in familial ALS. Riluzole, which inhibits excitatory amino acid release, is the only drug that has demonstrated neuroprotective activity in randomised, double-blind, placebo-controlled clinical trials in patients with ALS, although many other clinical therapeutic trials for ALS patients have been carried out. We discuss the clinical trials currently under way, especially SR57746A (1-[2-(naphth-2-yl)ethy]-4-(3-trifluoromethyl phenyl)-1,2,5,6-tetrahydro-pyridine, hydrochloride), a non-peptide compound that has been shown to exhibit a wide range of neurotrophic effects both in vitro and in vivo, including its phase II study in Japan and two phase III studies ongoing in the United States, Canada, and Europe. We also introduce the clinical guideline for the practice and care of ALS patients proposed by the American Academy of Neurology, in the expectation of establishing a clinical guideline applicable to Japanese cases.

  5. The Challenge for Arms Control Verification in the Post-New START World

    Energy Technology Data Exchange (ETDEWEB)

    Wuest, C R

    2012-05-24

    considered as the baseline case and are contrasted with possible alternative verification protocols that could be effective in a post-New START era of significant reductions in U.S. and other countries' nuclear stockpiles. Of particular concern is the possibility of deception and breakout when declared and observed numbers of weapons are below the level considered to pose an existential threat to the U.S. In a regime of very low stockpile numbers, 'traditional' verification protocols as currently embodied in the New START treaty might prove less than adequate. I introduce and discuss a number of issues that need to be considered in future verification protocols, many of which do not have immediate solutions and so require further study. I also discuss alternatives and enhancements to traditional verification protocols, for example confidence-building measures such as burden sharing against the common threat of weapons of mass destruction (WMD) terrorism, and joint research and development.

  6. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.
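The code-to-reference comparison used in the report can be illustrated in miniature: a finite-difference solver for steady 1-D slab conduction is checked against the analytic solution. This stands in for the FRAPCON-vs-ABAQUS comparison; the solver, geometry, and material numbers below are all invented for illustration:

```python
# Illustrative thermal-model verification sketch (not FRAPCON itself):
# solve -k T'' = q on [0, L] with symmetry T'(0) = 0 and T(L) = T_surf,
# then compare the center temperature to the analytic T_surf + q L^2 / (2k).
import numpy as np

def fd_center_temperature(k, q, L, T_surf, n=51):
    """Finite-difference center temperature of a heat-generating slab."""
    dx = L / (n - 1)
    A = np.zeros((n, n))
    b = np.full(n, -q * dx**2 / k)
    A[0, 0], A[0, 1] = -2.0, 2.0          # symmetry boundary via ghost node
    for i in range(1, n - 1):             # interior: T[i-1] - 2T[i] + T[i+1]
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    A[-1, -1] = 1.0                       # Dirichlet surface temperature
    b[-1] = T_surf
    return np.linalg.solve(A, b)[0]

# Invented fuel-like numbers: k = 3 W/m-K, q = 3e8 W/m^3, L = 5 mm.
T_num = fd_center_temperature(k=3.0, q=3e8, L=0.005, T_surf=700.0)
T_ana = 700.0 + 3e8 * 0.005**2 / (2 * 3.0)
print(round(T_num, 3), round(T_ana, 3))   # -> 1950.0 1950.0
```

The central-difference scheme is exact for the quadratic analytic profile, so the two values agree to round-off; in a real verification the comparison is repeated across the code's full input space.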

  7. Verification and Validation of RADTRAN 5.5.

    Energy Technology Data Exchange (ETDEWEB)

    Osborn, Douglas; Weiner, Ruth F.; Mills, George Scott; Hamp, Steve C.

    2005-02-01

    This document contains a description of the verification and validation process used for the RADTRAN 5.5 code. The verification and validation process ensured that the proper calculational models and mathematical and numerical methods were used in the RADTRAN 5.5 code for risk and consequence assessments. The differences between RADTRAN 5 and RADTRAN 5.5 are the addition of tables, an expanded isotope library, and an additional User-Defined meteorological option for accident dispersion.

  8. Bias associated with delayed verification in test accuracy studies: accuracy of tests for endometrial hyperplasia may be much higher than we think!

    OpenAIRE

    Clark, T Justin; ter Riet, Gerben; Coomarasamy, Aravinthan; Khan, Khalid S

    2004-01-01

    Abstract Background To empirically evaluate bias in estimation of accuracy associated with delay in verification of diagnosis among studies evaluating tests for predicting endometrial hyperplasia. Methods Systematic reviews of all published research on the accuracy of miniature endometrial biopsy and endometrial ultrasonography for diagnosing endometrial hyperplasia identified 27 test accuracy studies (2,982 subjects). Of these, 16 had immediate histological verification of diagnosis while 11 ha...

  9. Proceedings of the 7th International Workshop on Verification of Infinite-State Systems (INFINITY'05)

    DEFF Research Database (Denmark)

    2005-01-01

    The aim of the workshop is to provide a forum for researchers interested in the development of mathematical techniques for the analysis and verification of systems with infinitely many states. Topics: Techniques for modeling and analysis of infinite-state systems; Equivalence-checking and model-...

  10. Implementation and verification of global optimization benchmark problems

    Directory of Open Access Journals (Sweden)

    Posypkin Mikhail

    2017-12-01

    Full Text Available The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. From a single description, the library automates the generation of the value of a function and its gradient at a given point, and of interval estimates of the function and its gradient on a given box. Based on this functionality, we developed a collection of tests for automatic verification of the proposed benchmarks. The verification showed that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
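The interval-estimate machinery described can be sketched in a few lines. The actual library is C++ and far more general; the `Interval` class below supports only addition and multiplication, and the benchmark and claimed minimum are invented:

```python
# Minimal interval-arithmetic sketch: bound a function over a box and check
# that a claimed global minimum is consistent with the interval enclosure.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

def f(x):                        # toy benchmark f(x) = x^2 + x
    return x * x + x

box = Interval(-2.0, 1.0)
enclosure = f(box)               # guaranteed to contain all values of f on box
claimed_min = -0.25              # true minimum, attained at x = -0.5
print(enclosure.lo <= claimed_min <= enclosure.hi)  # -> True
```

A claimed minimum falling outside the enclosure would flag a mistake in the benchmark description, which is exactly the kind of error the paper reports finding in the literature.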

  11. Integrated Java Bytecode Verification

    DEFF Research Database (Denmark)

    Gal, Andreas; Probst, Christian; Franz, Michael

    2005-01-01

    Existing Java verifiers perform an iterative data-flow analysis to discover the unambiguous type of values stored on the stack or in registers. Our novel verification algorithm uses abstract interpretation to obtain definition/use information for each register and stack location in the program...
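The kind of type tracking such a verifier performs can be illustrated with a toy checker for a straight-line instruction sequence. The opcodes below are illustrative, not real JVM bytecode, and real verification must additionally merge abstract states at control-flow joins:

```python
# Toy sketch of bytecode verification by abstract interpretation: execute the
# instructions over abstract types ("int", "ref") instead of concrete values,
# rejecting any instruction applied to operands of the wrong type.

def verify(code):
    """Return the abstract operand stack after `code`, or raise TypeError."""
    stack = []
    for op, *args in code:
        if op == "push_int":
            stack.append("int")
        elif op == "push_ref":
            stack.append("ref")
        elif op == "iadd":                       # integer add: needs two ints
            if stack[-2:] != ["int", "int"]:
                raise TypeError("iadd on non-int operands")
            stack[-2:] = ["int"]
        elif op == "getfield":                   # field read: needs a reference
            if not stack or stack[-1] != "ref":
                raise TypeError("getfield on non-reference")
            stack[-1] = "int"                    # assume an int-typed field
    return stack

print(verify([("push_int",), ("push_int",), ("iadd",)]))  # -> ['int']
```

Running `verify([("push_ref",), ("push_int",), ("iadd",)])` raises `TypeError`, which is the verifier rejecting an ill-typed program before it ever runs.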

  12. Cluster randomized trial in the general practice research database: 2. Secondary prevention after first stroke (eCRT study): study protocol for a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Dregan Alex

    2012-10-01

    Full Text Available Abstract Background: The purpose of this research is to develop and evaluate methods for conducting pragmatic cluster randomized trials in a primary care electronic database. The proposal describes one application, in a less frequent chronic condition of public health importance: secondary prevention of stroke. A related protocol in antibiotic prescribing was reported previously. Methods/Design: The study aims to implement a cluster randomized trial (CRT) using the electronic patient records of the General Practice Research Database (GPRD) as a sampling frame and data source. The specific objective of the trial is to evaluate the effectiveness of a computer-delivered intervention at enhancing the delivery of stroke secondary prevention in primary care. GPRD family practices will be allocated to the intervention or to usual care. The intervention promotes the use of electronic prompts to support adherence to the recommendations of the UK Intercollegiate Stroke Working Party and NICE guidelines for the secondary prevention of stroke in primary care. The primary outcome measure will be the difference in systolic blood pressure between the intervention and control trial arms at 12-month follow-up. Secondary outcomes will be differences in serum cholesterol and in the prescribing of antihypertensive drugs, statins, and antiplatelet therapy. The intervention will continue for 12 months. Information on the utilization of the decision-support tools will also be analyzed. Discussion: The CRT will investigate the effectiveness of using a computer-delivered intervention to reduce the risk of stroke recurrence following a first stroke event. The study will provide methodological guidance on the implementation of CRTs in electronic databases in primary care. Trial registration: Current Controlled Trials ISRCTN35701810
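The cluster-level analysis implied by the primary outcome can be sketched as follows. This is a toy illustration with invented practices and blood pressures; the trial itself involves far more practices and would use adjusted, model-based estimates:

```python
# Sketch of a cluster-level analysis for the primary outcome: summarize
# systolic BP by practice (the randomization unit), then compare the mean of
# cluster means between trial arms. All data below are invented.
from statistics import mean

# practice_id -> (trial arm, [patient systolic BPs in mmHg])
practices = {
    "p1": ("intervention", [138, 142, 135]),
    "p2": ("intervention", [140, 136]),
    "p3": ("control",      [148, 151, 144]),
    "p4": ("control",      [146, 143]),
}

def arm_mean(arm):
    """Mean of the per-practice mean BPs for one trial arm."""
    cluster_means = [mean(bps) for a, bps in practices.values() if a == arm]
    return mean(cluster_means)

diff = arm_mean("intervention") - arm_mean("control")
print(round(diff, 2))  # -> -7.92 (negative = lower BP in the intervention arm)
```

Averaging within practices first respects the clustering: patients in the same practice are not independent, so analyzing them as if they were would overstate the precision of the arm difference.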

  13. Cost-effectiveness analysis alongside clinical trials II-An ISPOR Good Research Practices Task Force report.

    Science.gov (United States)

    Ramsey, Scott D; Willke, Richard J; Glick, Henry; Reed, Shelby D; Augustovski, Federico; Jonsson, Bengt; Briggs, Andrew; Sullivan, Sean D

    2015-03-01

    Clinical trials evaluating medicines, medical devices, and procedures now commonly assess the economic value of these interventions. The growing number of prospective clinical/economic trials reflects both widespread interest in economic information for new technologies and the regulatory and reimbursement requirements of many countries that now consider evidence of economic value along with clinical efficacy. As decision makers increasingly demand evidence of economic value for health care interventions, conducting high-quality economic analyses alongside clinical studies is desirable because they broaden the scope of information available on a particular intervention, and can efficiently provide timely information with high internal and, when designed and analyzed properly, reasonable external validity. In 2005, ISPOR published the Good Research Practices for Cost-Effectiveness Analysis Alongside Clinical Trials: The ISPOR RCT-CEA Task Force report. ISPOR initiated an update of the report in 2014 to include the methodological developments over the last 9 years. This report provides updated recommendations reflecting advances in several areas related to trial design, selecting data elements, database design and management, analysis, and reporting of results. Task force members note that trials should be designed to evaluate effectiveness (rather than efficacy) when possible, should include clinical outcome measures, and should obtain health resource use and health state utilities directly from study subjects. Collection of economic data should be fully integrated into the study. An incremental analysis should be conducted with an intention-to-treat approach, complemented by relevant subgroup analyses. Uncertainty should be characterized. Articles should adhere to established standards for reporting results of cost-effectiveness analyses. Economic studies alongside trials are complementary to other evaluations (e.g., modeling studies) as information for decision
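At its simplest, the incremental analysis the report recommends reduces to an incremental cost-effectiveness ratio (ICER). The sketch below uses invented per-patient means; a real trial-based analysis would also characterize uncertainty, e.g. by bootstrapping:

```python
# Hedged sketch of an incremental (intention-to-treat) cost-effectiveness
# comparison: mean cost and mean effect per arm, and the ICER. Illustrative
# numbers only (costs in euros, effects in quality-adjusted life years).

def icer(cost_new, eff_new, cost_old, eff_old):
    """Incremental cost per unit of incremental effect (e.g. per QALY)."""
    return (cost_new - cost_old) / (eff_new - eff_old)

result = icer(cost_new=12000, eff_new=1.9, cost_old=9000, eff_old=1.7)
print(round(result, 2))  # -> 15000.0 euros per QALY gained
```

A decision maker would compare this ratio against a willingness-to-pay threshold; the subgroup and sensitivity analyses the report calls for repeat the same calculation under varied assumptions.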

  14. Radiochemical verification and validation in the environmental data collection process

    International Nuclear Information System (INIS)

    Rosano-Reece, D.; Bottrell, D.; Bath, R.J.

    1994-01-01

    A credible and cost effective environmental data collection process should produce analytical data which meets regulatory and program specific requirements. Analytical data, which support the sampling and analysis activities at hazardous waste sites, undergo verification and independent validation before the data are submitted to regulators. Understanding the difference between verification and validation and their respective roles in the sampling and analysis process is critical to the effectiveness of a program. Verification is deciding whether the measurement data obtained are what was requested. The verification process determines whether all the requirements were met. Validation is more complicated than verification. It attempts to assess the impacts on data use, especially when requirements are not met. Validation becomes part of the decision-making process. Radiochemical data consists of a sample result with an associated error. Therefore, radiochemical validation is different and more quantitative than is currently possible for the validation of hazardous chemical data. Radiochemical data include both results and uncertainty that can be statistically compared to identify significance of differences in a more technically defensible manner. Radiochemical validation makes decisions about analyte identification, detection, and uncertainty for a batch of data. The process focuses on the variability of the data in the context of the decision to be made. The objectives of this paper are to present radiochemical verification and validation for environmental data and to distinguish the differences between the two operations
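The quantitative comparison that radiochemical validation permits can be sketched as a significance check on two results with stated uncertainties. The numbers and the `k = 1.96` coverage factor below are illustrative assumptions:

```python
# Sketch of comparing two radiochemical results, each reported as a value
# with a 1-sigma uncertainty: flag the difference as significant if it
# exceeds k combined standard deviations. Illustrative values only.
import math

def differ_significantly(a, sigma_a, b, sigma_b, k=1.96):
    """True if |a - b| exceeds k times the combined 1-sigma uncertainty."""
    return abs(a - b) > k * math.sqrt(sigma_a**2 + sigma_b**2)

# Two measurements of the same sample (Bq/kg):
print(differ_significantly(105.0, 4.0, 98.0, 3.0))  # -> False (consistent)
print(differ_significantly(120.0, 4.0, 98.0, 3.0))  # -> True
```

This is the sense in which radiochemical validation is "more quantitative" than hazardous-chemical validation: because each result carries an uncertainty, agreement and disagreement can be tested statistically rather than judged qualitatively.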

  15. Automatic verification of a lip-synchronisation protocol using Uppaal

    NARCIS (Netherlands)

    Bowman, H.; Faconti, G.; Katoen, J.-P.; Latella, D.; Massink, M.

    1998-01-01

    We present the formal specification and verification of a lip-synchronisation protocol using the real-time model checker Uppaal. A number of specifications of this protocol can be found in the literature, but this is the first automatic verification. We take a published specification of the

  16. Numident Online Verification Utility (NOVU)

    Data.gov (United States)

    Social Security Administration — NOVU is a mainframe application that accesses the NUMIDENT to perform real-time SSN verifications. This program is called by other SSA online programs that serve as...

  17. Systematic study of source mask optimization and verification flows

    Science.gov (United States)

    Ben, Yu; Latypov, Azat; Chua, Gek Soon; Zou, Yi

    2012-06-01

    Source mask optimization (SMO) has emerged as a powerful resolution enhancement technique (RET) for advanced technology nodes. However, there is a plethora of flows and verification metrics in the field, confounding end users of the technique. A systematic study of the different flows, and of their possible unification, is missing. This contribution is intended to reveal the pros and cons of different SMO approaches and verification metrics, to understand their commonalities and differences, and to provide a generic guideline for RET selection via SMO. The paper discusses three types of variation that commonly arise in SMO, namely pattern preparation and selection, the availability of a relevant OPC recipe for a freeform source, and the metrics used in source verification. Several pattern selection algorithms are compared and the advantages of systematic pattern selection algorithms are discussed. In the absence of a full resist model for SMO, an alternative SMO flow without a full resist model is reviewed. A preferred verification flow with the quality metrics DOF and MEEF is examined.

  18. Static and Dynamic Verification of Critical Software for Space Applications

    Science.gov (United States)

    Moreira, F.; Maia, R.; Costa, D.; Duro, N.; Rodríguez-Dapena, P.; Hjortnaes, K.

    Space technology is no longer used only for highly specialised research activities or for sophisticated manned space missions. Modern society relies more and more on space technology and applications for everyday activities. Worldwide telecommunications, Earth observation, navigation, and remote sensing are only a few examples of space applications on which we rely daily. The European-driven global navigation system Galileo and its associated applications, e.g. air traffic management and vessel and car navigation, will significantly expand the already stringent safety requirements for space-based applications. Apart from their usefulness and practical applications, every single piece of onboard software deployed into space represents an enormous investment. With long-lifetime operation, and being extremely difficult to maintain and upgrade, at least compared with "mainstream" software development, the importance of ensuring its correctness before deployment is immense. Verification and validation techniques and technologies have a key role in ensuring that onboard software is correct and error free, or at least free from errors that can potentially lead to catastrophic failures. Many RAMS techniques, including both static criticality analysis and dynamic verification techniques, have been used as a means to verify and validate critical software and to ensure its correctness. Traditionally, however, these have been applied in isolation. One of the main reasons is the immaturity of this field with respect to its application to the growing number of software products within space systems. This paper presents an innovative way of combining static and dynamic techniques, exploiting their synergy and complementarity for software fault removal. The proposed methodology is based on the combination of software FMEA and FTA with fault-injection techniques. The case study described herein is implemented with support from two tools: the SoftCare tool for the SFMEA and SFTA

  19. Cellular Therapies Clinical Research Roadmap: lessons learned on how to move a cellular therapy into a clinical trial.

    Science.gov (United States)

    Ouseph, Stacy; Tappitake, Darah; Armant, Myriam; Wesselschmidt, Robin; Derecho, Ivy; Draxler, Rebecca; Wood, Deborah; Centanni, John M

    2015-04-01

    A clinical research roadmap has been developed as a resource for researchers to identify critical areas and potential pitfalls when transitioning a cellular therapy product from the research laboratory, by means of an Investigational New Drug (IND) application, into early-phase clinical trials. The roadmap describes four key areas: basic and preclinical research, resource development, translational research and Good Manufacturing Practice (GMP), and IND assembly and submission. Basic and preclinical research identifies a new therapeutic concept and demonstrates its potential value with the use of a model of the relevant disease. During resource development, the appropriate specialists and the required expertise to bring the product into the clinic are identified (e.g., researchers, regulatory specialists, GMP manufacturing staff, clinicians and clinical trials staff), as are the funds required to achieve this goal (or a plan to procure them). In the next phase, the plan to translate the research product into a clinical-grade therapeutic is developed. Finally, regulatory approval to start the trial must be obtained; in the United States, this is done by filing an IND application with the Food and Drug Administration. The National Heart, Lung and Blood Institute-funded Production Assistance for Cellular Therapies program has facilitated the transition of a variety of cellular therapy products from the laboratory into Phase 1/2 trials. The five Production Assistance for Cellular Therapies facilities have assisted investigators by performing translational studies and GMP manufacturing to ensure that cellular products met release specifications and were manufactured safely, reproducibly, and at the appropriate scale. The roadmap resulting from this experience is the focus of this article. Copyright © 2015 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  20. Organics Verification Study for Sinclair and Dyes Inlets, Washington

    Energy Technology Data Exchange (ETDEWEB)

    Kohn, Nancy P.; Brandenberger, Jill M.; Niewolny, Laurie A.; Johnston, Robert K.

    2006-09-28

    Sinclair and Dyes Inlets near Bremerton, Washington, are on the State of Washington 1998 303(d) list of impaired waters because of fecal coliform contamination in marine water, metals in sediment and fish tissue, and organics in sediment and fish tissue. Because significant cleanup and source control activities have been conducted in the inlets since the data supporting the 1998 303(d) listings were collected, two verification studies were performed to address the 303(d) segments that were listed for metal and organic contaminants in marine sediment. The Metals Verification Study (MVS) was conducted in 2003; the final report, Metals Verification Study for Sinclair and Dyes Inlets, Washington, was published in March 2004 (Kohn et al. 2004). This report describes the Organics Verification Study that was conducted in 2005. The study approach was similar to the MVS in that many surface sediment samples were screened for the major classes of organic contaminants, and then the screening results and other available data were used to select a subset of samples for quantitative chemical analysis. Because the MVS was designed to obtain representative data on concentrations of contaminants in surface sediment throughout Sinclair Inlet, Dyes Inlet, Port Orchard Passage, and Rich Passage, aliquots of the 160 MVS sediment samples were used in the analysis for the Organics Verification Study. However, unlike metals screening methods, organics screening methods are not specific to individual organic compounds, and are not available for some target organics. Therefore, only the quantitative analytical results were used in the organics verification evaluation. The results of the Organics Verification Study showed that sediment quality outside of Sinclair Inlet is unlikely to be impaired because of organic contaminants. Similar to the results for metals, in Sinclair Inlet, the distribution of residual organic contaminants is generally limited to nearshore areas already within the

  1. The role of the United Nations in the field of verification

    International Nuclear Information System (INIS)

    1991-01-01

    By resolution 43/81 B of 7 December 1988, the General Assembly requested the Secretary General to undertake, with the assistance of a group of qualified governmental experts, an in-depth study of the role of the United Nations in the field of verification. In August 1990, the Secretary-General transmitted to the General Assembly the unanimously approved report of the experts. The report is structured in six chapters and contains a bibliographic appendix on technical aspects of verification. The Introduction provides a brief historical background on the development of the question of verification in the United Nations context, culminating with the adoption by the General Assembly of resolution 43/81 B, which requested the study. Chapters II and III address the definition and functions of verification and the various approaches, methods, procedures and techniques used in the process of verification. Chapters IV and V examine the existing activities of the United Nations in the field of verification, possibilities for improvements in those activities as well as possible additional activities, while addressing the organizational, technical, legal, operational and financial implications of each of the possibilities discussed. Chapter VI presents the conclusions and recommendations of the Group

  2. Integrating Fingerprint Verification into the Smart Card-Based Healthcare Information System

    Directory of Open Access Journals (Sweden)

    Jin-Won Park

    2009-01-01

As VLSI technology has been improved, a smart card employing 32-bit processors has been released, and more personal information such as medical, financial data can be stored in the card. Thus, it becomes important to protect personal information stored in the card. Verification of the card holder's identity using a fingerprint has advantages over the present practices of Personal Identification Numbers (PINs) and passwords. However, the computational workload of fingerprint verification is much heavier than that of the typical PIN-based solution. In this paper, we consider three strategies to implement fingerprint verification in a smart card environment and how to distribute the modules of fingerprint verification between the smart card and the card reader. We first evaluate the number of instructions of each step of a typical fingerprint verification algorithm, and estimate the execution time of several cryptographic algorithms to guarantee the security/privacy of the fingerprint data transmitted in the smart card with the client-server environment. Based on the evaluation results, we analyze each scenario with respect to the security level and the real-time execution requirements in order to implement fingerprint verification in the smart card with the client-server environment.

  3. Integrating Fingerprint Verification into the Smart Card-Based Healthcare Information System

    Science.gov (United States)

    Moon, Daesung; Chung, Yongwha; Pan, Sung Bum; Park, Jin-Won

    2009-12-01

    As VLSI technology has been improved, a smart card employing 32-bit processors has been released, and more personal information such as medical, financial data can be stored in the card. Thus, it becomes important to protect personal information stored in the card. Verification of the card holder's identity using a fingerprint has advantages over the present practices of Personal Identification Numbers (PINs) and passwords. However, the computational workload of fingerprint verification is much heavier than that of the typical PIN-based solution. In this paper, we consider three strategies to implement fingerprint verification in a smart card environment and how to distribute the modules of fingerprint verification between the smart card and the card reader. We first evaluate the number of instructions of each step of a typical fingerprint verification algorithm, and estimate the execution time of several cryptographic algorithms to guarantee the security/privacy of the fingerprint data transmitted in the smart card with the client-server environment. Based on the evaluation results, we analyze each scenario with respect to the security level and the real-time execution requirements in order to implement fingerprint verification in the smart card with the client-server environment.
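The distribution question in this record lends itself to a back-of-the-envelope timing model. The sketch below is a hypothetical illustration only: the step names, instruction counts, and MIPS ratings are invented placeholders, not figures from the paper:

```python
# Hypothetical timing model for distributing fingerprint-verification steps
# between a smart card and a card reader. All numbers are illustrative only.

STEPS = {                    # step -> estimated instruction count (invented)
    "feature_extraction": 25_000_000,
    "matching":            5_000_000,
    "decision":              100_000,
}
MIPS = {"card": 20, "reader": 500}   # millions of instructions per second (invented)

def exec_time_ms(step, device):
    """Estimated execution time of one step on one device, in milliseconds."""
    return STEPS[step] / (MIPS[device] * 1e6) * 1e3

def scenario_time(placement):
    """Total time for a placement (step -> device); ignores transfer/crypto cost."""
    return sum(exec_time_ms(step, device) for step, device in placement.items())

match_on_card = {"feature_extraction": "reader", "matching": "card", "decision": "card"}
match_off_card = {step: "reader" for step in STEPS}

print(f"match-on-card:  {scenario_time(match_on_card):.1f} ms")
print(f"match-off-card: {scenario_time(match_off_card):.1f} ms")
```

Under these invented numbers, the match-on-card scenario is dominated by running the matcher on the slow card processor; weighing that cost against the security benefit of keeping the template on the card is the kind of trade-off the paper's scenario analysis addresses.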

  4. Calibration and Verification of the Hydraulic Model for Blue Nile River from Roseires Dam to Khartoum City

    Directory of Open Access Journals (Sweden)

    Kamal edin ELsidig Bashar

    2015-12-01

This research represents a practical attempt to calibrate and verify a hydraulic model for the Blue Nile River. Calibration was performed by running the model against observed data for one period and comparing the results, while verification applied observed data from a later period and compared those results in turn. The study also examined the relationship between the river terrain and the distance between the assumed dam-failure points along the river length. The computed model values should conform to the observed data, and the overall performance of the model is verified by comparison with an independent data set. The model was calibrated using data from four gauging stations (Khartoum, Wad Medani, downstream Sennar, and downstream Roseires) for the period from the 1st of May to the 31st of October 1988, and verification used data from the same gauging stations for the same period in 2003 and 2010. The required available data from these stations were collected, processed and used in the model calibration. The geometry input files for the HEC-RAS models were created using a combination of ArcGIS and HEC-GeoRAS. The results revealed high correlation (R2 > 0.9) between the observed and calibrated water levels at all gauging stations for 1988, and similarly high correlation between the observed and verification water levels for 2003 and 2010. The resulting regression equations and degree of correlation can be used to predict future values at the same stations.
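The goodness-of-fit statistic cited above (R2 > 0.9) is the coefficient of determination between observed and simulated stage. A minimal sketch, using invented stand-in readings rather than the study's gauging-station data:

```python
# Coefficient of determination (R^2) between observed and modelled water levels.
# The sample values below are illustrative, not data from the Blue Nile study.

def r_squared(observed, modelled):
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - m) ** 2 for o, m in zip(observed, modelled))  # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)             # total sum of squares
    return 1.0 - ss_res / ss_tot

observed = [372.1, 372.8, 373.5, 374.0, 373.2]   # hypothetical stage readings (m)
modelled = [372.0, 372.9, 373.4, 374.2, 373.1]   # hypothetical HEC-RAS output (m)

print(f"R^2 = {r_squared(observed, modelled):.3f}")
```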

  5. A Systematic Review of Community-Based Participatory Research to Enhance Clinical Trials in Racial and Ethnic Minority Groups

    Science.gov (United States)

    De Las Nueces, Denise; Hacker, Karen; DiGirolamo, Ann; Hicks, LeRoi S

    2012-01-01

Objective To examine the effectiveness of current community-based participatory research (CBPR) clinical trials involving racial and ethnic minorities. Data Source All published peer-reviewed CBPR intervention articles in PubMed and CINAHL databases from January 2003 to May 2010. Study Design We performed a systematic literature review. Data Collection/Extraction Methods Data were extracted on each study's characteristics, community involvement in research, subject recruitment and retention, and intervention effects. Principal Findings We found 19 articles meeting inclusion criteria. Of these, 14 were published from 2007 to 2010. Articles described some measures of community participation in research with great variability. Although CBPR trials examined a wide range of behavioral and clinical outcomes, such trials had very high success rates in recruiting and retaining minority participants and achieving significant intervention effects. Conclusions Significant publication gaps remain between CBPR and other interventional research methods. CBPR may be effective in increasing participation of racial and ethnic minority subjects in research and may be a powerful tool in testing the generalizability of effective interventions among these populations. CBPR holds promise as an approach that may contribute greatly to the study of health care delivery to disadvantaged populations. PMID:22353031

  6. Novel Verification Method for Timing Optimization Based on DPSO

    Directory of Open Access Journals (Sweden)

    Chuandong Chen

    2018-01-01

Timing optimization for logic circuits is one of the key steps in logic synthesis. Existing research results are mainly produced by various intelligence algorithms; hence, they are neither comparable with timing optimization data collected by the mainstream electronic design automation (EDA) tool nor able to verify the superiority of intelligence algorithms over the EDA tool in terms of optimization ability. To address these shortcomings, a novel verification method is proposed in this study. First, a discrete particle swarm optimization (DPSO) algorithm was applied to optimize the timing of a mixed polarity Reed-Muller (MPRM) logic circuit. Second, the Design Compiler (DC) tool was used to optimize the timing of the same MPRM logic circuit through special settings and constraints. Finally, the timing optimization results of the two approaches were compared on MCNC benchmark circuits: DPSO demonstrates an average reduction of 9.7% in the timing delays of critical paths relative to DC. The proposed verification method thus directly ascertains whether an intelligence algorithm has better timing optimization ability than DC.
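A minimal binary-coded PSO in the spirit of the DPSO described above can be sketched as follows; the sigmoid velocity-to-bit transfer, the toy Hamming-distance cost standing in for critical-path delay, and all parameter values are illustrative assumptions, not the authors' algorithm:

```python
# Minimal binary (discrete) PSO on a toy cost function, standing in for
# MPRM polarity search; not the algorithm or settings from the paper.
import math
import random

random.seed(1)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]          # hypothetical "best polarity" vector
def cost(bits):                             # toy delay model: Hamming distance
    return sum(b != t for b, t in zip(bits, TARGET))

N, DIM, ITERS = 12, len(TARGET), 60         # particles, dimensions, iterations
W, C1, C2 = 0.7, 1.5, 1.5                   # inertia and acceleration weights

pos = [[random.randint(0, 1) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]                 # per-particle best positions
gbest = min(pbest, key=cost)                # swarm-wide best position

for _ in range(ITERS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            # sigmoid transfer: velocity -> probability of the bit being 1
            pos[i][d] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=cost)

print("best cost found:", cost(gbest))
```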

  7. 45 CFR 1626.7 - Verification of eligible alien status.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Verification of eligible alien status. 1626.7... CORPORATION RESTRICTIONS ON LEGAL ASSISTANCE TO ALIENS § 1626.7 Verification of eligible alien status. (a) An alien seeking representation shall submit appropriate documents to verify eligibility, unless the only...

  8. BAVP: Blockchain-Based Access Verification Protocol in LEO Constellation Using IBE Keys

    OpenAIRE

    Wei, Songjie; Li, Shuai; Liu, Peilong; Liu, Meilin

    2018-01-01

    LEO constellation has received intensive research attention in the field of satellite communication. The existing centralized authentication protocols traditionally used for MEO/GEO satellite networks cannot accommodate LEO satellites with frequent user connection switching. This paper proposes a fast and efficient access verification protocol named BAVP by combining identity-based encryption and blockchain technology. Two different key management schemes with IBE and blockchain, respectively...

  9. From Controlled Trial to Community Adoption: The Multisite Translational Community Trial

    Science.gov (United States)

    Murimi, Mary; Gonzalez, Anjelica; Njike, Valentine; Green, Lawrence W.

    2011-01-01

    Methods for translating the findings of controlled trials, such as the Diabetes Prevention Program, into real-world community application have not been clearly defined. A standardized research methodology for making and evaluating such a transition is needed. We introduce the multisite translational community trial (mTCT) as the research analog to the multisite randomized controlled trial. The mTCT is adapted to incorporate the principles and practices of community-based participatory research and the increased relevance and generalizability gained from diverse community settings. The mTCT is a tool designed to bridge the gap between what a clinical trial demonstrates can work in principle and what is needed to make it workable and effective in real-world settings. Its utility could be put to the test, in particular with practice-based research networks such as the Prevention Research Centers. PMID:21680935

  10. Distorted Fingerprint Verification System

    Directory of Open Access Journals (Sweden)

    Divya KARTHIKAESHWARAN

    2011-01-01

Fingerprint verification is one of the most reliable personal identification methods. Fingerprint matching is affected by non-linear distortion introduced in the fingerprint impression during the image acquisition process. This non-linear deformation changes both the position and orientation of minutiae. The proposed system operates in three stages: alignment-based fingerprint matching, fuzzy clustering, and a classifier framework. First, an enhanced input fingerprint image is aligned with the template fingerprint image and a matching score is computed. To improve the performance of the system, fuzzy clustering based on distance and density is used to cluster the feature set obtained from the fingerprint matcher. Finally, a classifier framework was developed, and a cost-sensitive classifier was found to produce the best results. The system was evaluated on a fingerprint database, and the experimental results show a verification rate of 96%. This system plays an important role in forensic and civilian applications.

  11. Enhanced verification test suite for physics simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Kamm, James R.; Brock, Jerry S.; Brandon, Scott T.; Cotrell, David L.; Johnson, Bryan; Knupp, Patrick; Rider, William J.; Trucano, Timothy G.; Weirs, V. Gregory

    2008-09-01

This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.
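The comparison of discrete solutions with the corresponding continuum equations is commonly quantified through the observed order of accuracy. A minimal sketch (forward Euler on u' = -u; the problem and step counts are illustrative choices, not taken from the tri-laboratory suite):

```python
# Code-verification sketch: estimate the observed order of accuracy of a
# discretization against the exact continuum solution. Forward Euler on
# u' = -u, u(0) = 1 should exhibit first-order convergence.
import math

def euler_error(n_steps, t_end=1.0):
    dt, u = t_end / n_steps, 1.0
    for _ in range(n_steps):
        u += dt * (-u)                  # forward Euler step for u' = -u
    return abs(u - math.exp(-t_end))    # error against the exact solution

e_coarse, e_fine = euler_error(100), euler_error(200)
order = math.log(e_coarse / e_fine, 2)  # observed order from step halving
print(f"observed order of accuracy: {order:.2f}")
```

If the observed order falls well below the method's theoretical order, the discrepancy points to a coding or discretization error, which is precisely what a verification suite is designed to expose.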

  12. Software engineering and automatic continuous verification of scientific software

    Science.gov (United States)

    Piggott, M. D.; Hill, J.; Farrell, P. E.; Kramer, S. C.; Wilson, C. R.; Ham, D.; Gorman, G. J.; Bond, T.

    2011-12-01

Software engineering of scientific code is challenging for a number of reasons, including pressure to publish and a lack of awareness of the pitfalls of software engineering among scientists. The Applied Modelling and Computation Group at Imperial College is a diverse group of researchers that employs best-practice software engineering methods whilst developing open source scientific software. Our main code is Fluidity - a multi-purpose computational fluid dynamics (CFD) code that can be used for a wide range of scientific applications from earth-scale mantle convection, through basin-scale ocean dynamics, to laboratory-scale classic CFD problems, and is coupled to a number of other codes including nuclear radiation and solid modelling. Our software development infrastructure consists of a number of free tools that could be employed by any group that develops scientific code and has been developed over a number of years with many lessons learnt. A single code base is developed by over 30 people, for which we use Bazaar for revision control, making good use of its strong branching and merging capabilities. Using features of Canonical's Launchpad platform, such as code review, blueprints for designing features and bug reporting, gives the group, partners and other Fluidity users an easy-to-use platform to collaborate, and allows the induction of new members of the group into an environment where software development forms a central part of their work. The code repository is coupled to an automated test and verification system which performs over 20,000 tests, including unit tests, short regression tests, code verification and large parallel tests. Included in these tests are build tests on HPC systems, including local and UK National HPC services. Testing the code in this manner leads to a continuous verification process, not a discrete event performed once development has ceased. Much of the code verification is done via the "gold standard" of comparisons to analytical
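An automated "gold standard" verification test of the kind described can be as small as the following; the toy diffusion solver and the conserved-mass property are illustrative stand-ins, not Fluidity code:

```python
# Sketch of an automated verification test: a numerical result is checked
# against an analytical property on every run, so verification is continuous
# rather than a one-off event. The solver below is a toy stand-in.

def diffuse(u, alpha, steps):
    """Explicit 1D diffusion on a periodic grid (toy solver)."""
    n = len(u)
    for _ in range(steps):
        u = [u[i] + alpha * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
             for i in range(n)]
    return u

def test_conserves_mass():
    u0 = [0.0] * 16
    u0[8] = 1.0                                   # initial pulse
    u1 = diffuse(u0, alpha=0.25, steps=50)
    # analytical property: diffusion on a periodic grid conserves total mass
    assert abs(sum(u1) - sum(u0)) < 1e-12

test_conserves_mass()
print("verification test passed")
```

Hooked into a continuous-integration system, a battery of such tests runs on every commit, which is how a test suite of the scale described (unit, regression, verification, and parallel tests) stays a living part of development.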

  13. When referring physicians and researchers disagree on equipoise: the TOTAL trial experience.

    Science.gov (United States)

    Rodrigues, H C M L; Deprest, J; v d Berg, P P

    2011-06-01

In this article, we reflect on whether randomized controlled trials (RCTs) are adequate for the clinical evaluation of maternal-fetal surgery for congenital diaphragmatic hernia (CDH), focusing on the role of patients' preferences in the setting up of research protocols, on the requirement of equipoise and on the concept of therapeutic misconception (TM). We describe the conception and setting up of the tracheal occlusion (TO) to accelerate lung growth trial and analyze the ethical dilemmas faced by the research team during that time. Depending on the view adopted regarding the scope of equipoise, there are two ways of dealing with patients' preferences concerning fetoscopic endoluminal TO and expectant management during pregnancy for CDH. The solution adopted for fetoscopic endoluminal tracheal occlusion (FETO) is justified by the extended period of time it has been available to patients before the start of the RCT. Strong patient and referring physician preferences do not entail a right to have FETO, since it is a procedure of yet unproven efficacy and safety. In the future, to avoid the dilemmas posed by TM, and in the name of the right of future generations of patients to have access to treatment of proven safety and efficacy, researchers must be able to plan RCTs in due time. Copyright © 2011 John Wiley & Sons, Ltd.

  14. Test-driven verification/validation of model transformations

    Institute of Scientific and Technical Information of China (English)

    László LENGYEL; Hassan CHARAF

    2015-01-01

Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformations verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the actual open issues in the field of verification/validation of model transformations.

  15. Design verification enhancement of field programmable gate array-based safety-critical I&C system of nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Ibrahim [Department of Nuclear Engineering, Kyung Hee University, 1732 Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104 (Korea, Republic of); Jung, Jaecheon, E-mail: jcjung@kings.ac.kr [Department of Nuclear Power Plant Engineering, KEPCO International Nuclear Graduate School, 658-91 Haemaji-ro, Seosang-myeon, Ulju-gun, Ulsan 45014 (Korea, Republic of); Heo, Gyunyoung [Department of Nuclear Engineering, Kyung Hee University, 1732 Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104 (Korea, Republic of)

    2017-06-15

Highlights: • An enhanced, systematic and integrated design verification approach is proposed for V&V of FPGA-based I&C system of NPP. • RPS bistable fixed setpoint trip algorithm is designed, analyzed, verified and discussed using the proposed approaches. • The application of integrated verification approach simultaneously verified the entire design modules. • The applicability of the proposed V&V facilitated the design verification processes. - Abstract: Safety-critical instrumentation and control (I&C) system in nuclear power plant (NPP) implemented on programmable logic controllers (PLCs) plays a vital role in safe operation of the plant. The challenges such as fast obsolescence, the vulnerability to cyber-attack, and other related issues of software systems have currently led to the consideration of field programmable gate arrays (FPGAs) as an alternative to PLCs because of their advantages and hardware related benefits. However, safety analysis for FPGA-based I&C systems, and verification and validation (V&V) assessments still remain important issues to be resolved, which have now become a global point of research interest. In this work, we propose systematic design and verification strategies, from start to ready-to-use, in the form of model-based approaches for FPGA-based reactor protection system (RPS) that can lead to the enhancement of the design verification and validation processes. The proposed methodology stages are requirement analysis, enhanced functional flow block diagram (EFFBD) models, finite state machine with data path (FSMD) models, hardware description language (HDL) code development, and design verifications. The design verification stage includes unit test – Very high speed integrated circuit Hardware Description Language (VHDL) test and modified condition decision coverage (MC/DC) test, module test – MATLAB/Simulink Co-simulation test, and integration test – FPGA hardware test beds. To prove the adequacy of the proposed

  16. Design verification enhancement of field programmable gate array-based safety-critical I&C system of nuclear power plant

    International Nuclear Information System (INIS)

    Ahmed, Ibrahim; Jung, Jaecheon; Heo, Gyunyoung

    2017-01-01

Highlights: • An enhanced, systematic and integrated design verification approach is proposed for V&V of FPGA-based I&C system of NPP. • RPS bistable fixed setpoint trip algorithm is designed, analyzed, verified and discussed using the proposed approaches. • The application of integrated verification approach simultaneously verified the entire design modules. • The applicability of the proposed V&V facilitated the design verification processes. - Abstract: Safety-critical instrumentation and control (I&C) system in nuclear power plant (NPP) implemented on programmable logic controllers (PLCs) plays a vital role in safe operation of the plant. The challenges such as fast obsolescence, the vulnerability to cyber-attack, and other related issues of software systems have currently led to the consideration of field programmable gate arrays (FPGAs) as an alternative to PLCs because of their advantages and hardware related benefits. However, safety analysis for FPGA-based I&C systems, and verification and validation (V&V) assessments still remain important issues to be resolved, which have now become a global point of research interest. In this work, we propose systematic design and verification strategies, from start to ready-to-use, in the form of model-based approaches for FPGA-based reactor protection system (RPS) that can lead to the enhancement of the design verification and validation processes. The proposed methodology stages are requirement analysis, enhanced functional flow block diagram (EFFBD) models, finite state machine with data path (FSMD) models, hardware description language (HDL) code development, and design verifications. The design verification stage includes unit test – Very high speed integrated circuit Hardware Description Language (VHDL) test and modified condition decision coverage (MC/DC) test, module test – MATLAB/Simulink Co-simulation test, and integration test – FPGA hardware test beds. To prove the adequacy of the proposed

  17. Technical safety requirements control level verification

    International Nuclear Information System (INIS)

    STEWART, J.L.

    1999-01-01

A Technical Safety Requirement (TSR) control level verification process was developed for the Tank Waste Remediation System (TWRS) TSRs at the Hanford Site in Richland, WA, at the direction of the U.S. Department of Energy, Richland Operations Office (RL). The objective of the effort was to develop a process to ensure that the TWRS TSR controls are designated and managed at the appropriate levels as Safety Limits (SLs), Limiting Control Settings (LCSs), Limiting Conditions for Operation (LCOs), Administrative Controls (ACs), or Design Features. The TSR control level verification process was developed and implemented by a team of contractor personnel with the participation of Fluor Daniel Hanford, Inc. (FDH), the Project Hanford Management Contract (PHMC) integrating contractor, and RL representatives. The team was composed of individuals with the following experience base: nuclear safety analysis; licensing; nuclear industry and DOE-complex TSR preparation/review experience; tank farm operations; FDH policy and compliance; and RL-TWRS oversight. Each TSR control level designation was completed utilizing TSR control logic diagrams and TSR criteria checklists based on DOE Orders, Standards, Contractor TSR policy, and other guidance. The control logic diagrams and criteria checklists were reviewed and modified by team members during team meetings. The TSR control level verification process was used to systematically evaluate 12 LCOs, 22 AC programs, and approximately 100 program key elements identified in the TWRS TSR document. The verification of each TSR control required a team consensus. Based on the results of the process, refinements were identified and the TWRS TSRs were modified as appropriate. A final report documenting key assumptions and the control level designation for each TSR control was prepared and is maintained on file for future reference.
The results of the process were used as a reference in the RL review of the final TWRS TSRs and control suite. RL

  18. Computer Generated Inputs for NMIS Processor Verification

    International Nuclear Information System (INIS)

    J. A. Mullens; J. E. Breeding; J. A. McEvers; R. W. Wysor; L. G. Chiang; J. R. Lenarduzzi; J. T. Mihalczo; J. K. Mattingly

    2001-01-01

Proper operation of the Nuclear Materials Identification System (NMIS) processor can be verified using computer-generated inputs [BIST (Built-In-Self-Test)] at the digital inputs. Preselected sequences of input pulses with known correlation functions are applied to all channels, and the known correlations are compared to the output of the processor. These types of verifications have been utilized in NMIS-type correlation processors at the Oak Ridge National Laboratory since 1984. The use of this test confirmed a malfunction in an NMIS processor at the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) in 1998. The NMIS processor boards were returned to the U.S. for repair and subsequently used in NMIS passive and active measurements with Pu at VNIIEF in 1999.
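The built-in-self-test idea above can be sketched in a few lines: feed a preselected pulse sequence with a known correlation into the processor and compare its output against the expected reference. The "processor" below is a toy periodic-correlation function standing in for the NMIS hardware, not its actual implementation:

```python
# BIST sketch: compare a processor's correlation output against the known
# reference for a preselected test input. Toy stand-in for NMIS hardware.

def periodic_correlation(x, y):
    """Periodic cross-correlation; serves as the reference computation."""
    n = len(x)
    return [sum(x[i] * y[(i + lag) % n] for i in range(n)) for lag in range(n)]

test_input = [1 if i % 4 == 0 else 0 for i in range(16)]  # a pulse every 4 samples
expected = periodic_correlation(test_input, test_input)   # known reference result

def bist(processor_corr):
    """Feed the preselected sequence to the processor under test and compare
    its correlation output with the expected reference."""
    return processor_corr(test_input, test_input) == expected

# here the reference itself stands in for the hardware processor under test
print("BIST passed:", bist(periodic_correlation))
```

A mismatch at any lag flags a malfunction, which is how this style of test exposed the faulty processor boards mentioned in the record.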

  19. Verification Survey of Uranium Mine Remediation

    International Nuclear Information System (INIS)

    Ron, Stager

    2009-01-01

The Canadian Nuclear Safety Commission (CNSC) contracted an independent verification of an intensive gamma radiation survey conducted by a mining company to demonstrate that remediation of disturbed areas was complete. This site was the first of the recent mines being decommissioned in Canada, and experience gained here may be applied to other mines being decommissioned in the future. The review included examination of the site-specific basis for clean-up criteria and ALARA as required by CNSC guidance. A paper review of the company report was conducted to determine whether protocols were followed and whether the summarized results could be independently reproduced. An independent verification survey was conducted on parts of the site, and comparisons were made between gamma radiation measurements from the verification survey and the original company survey. Some aspects of data collection using rate meters linked to GPS data loggers are discussed, as are aspects of data management and the analysis methods required for the large amount of data collected during these surveys. Recommendations were made for implementing future surveys and reporting their data in order to ensure that remediation is complete. (authors)

  20. Verification and validation of RADMODL Version 1.0

    International Nuclear Information System (INIS)

    Kimball, K.D.

    1993-03-01

RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A), were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident.
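The module-by-module comparison described above can be sketched as follows; the two "versions" of a point-source dose-rate model below are illustrative stand-ins, not RADMODL code:

```python
# Verification-by-comparison sketch: a new module's output is checked against
# the output of a previously verified version over a set of test cases.
# Both functions below are illustrative stand-ins, not RADMODL modules.

def dose_rate_v1(activity_bq, distance_m, gamma_const=1.0e-13):
    """Previously verified point-source dose-rate model (illustrative)."""
    return gamma_const * activity_bq / distance_m ** 2

def dose_rate_v2(activity_bq, distance_m, gamma_const=1.0e-13):
    """New implementation under verification (illustrative)."""
    return gamma_const * activity_bq * distance_m ** -2

def verify(cases, tol=1e-12):
    """Assert the two versions agree to within a relative tolerance."""
    for activity, distance in cases:
        old = dose_rate_v1(activity, distance)
        new = dose_rate_v2(activity, distance)
        assert abs(new - old) <= tol * max(abs(old), 1.0), (activity, distance)
    return True

cases = [(1e9, 1.0), (1e9, 10.0), (5e8, 2.5)]
print("modules agree:", verify(cases))
```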

  1. Verification and validation of RADMODL Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, K.D.

    1993-03-01

    RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A), were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident.

  2. Validation and verification of the MTRPC thermohydraulic package

    International Nuclear Information System (INIS)

    Doval, Alicia

    1998-01-01

The MTRPC v2.6 is a computational package developed for research reactor design and calculation. It covers three of the main aspects of a research reactor: neutronics, shielding and thermohydraulics. In this work only the thermohydraulic package is covered, dealing with verification and validation aspects. The package consists of the following steady-state programs: CAUDVAP 2.60, for hydraulic calculations, estimates the velocity distribution through different parallel channels connected to common inlet and outlet plenums. TERMIC 1H v3.0, used for the thermal design of research reactors, provides information about heat flux for a given maximum wall temperature, onset of nucleate boiling, redistribution phenomena and departure from nucleate boiling. CONVEC V3.0 allows natural convection calculations, giving information on heat fluxes for onset of nucleate boiling, pulsed and burn-out phenomena as well as total coolant flow. Results have been validated against experimental values and verified against the results of theoretical analyses and other computational programs, showing good agreement. (author)

  3. Online 3D EPID-based dose verification: Proof of concept

    Energy Technology Data Exchange (ETDEWEB)

    Spreeuw, Hanno; Rozendaal, Roel, E-mail: r.rozendaal@nki.nl; Olaciregui-Ruiz, Igor; González, Patrick; Mans, Anton; Mijnheer, Ben [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam 1066 CX (Netherlands); Herk, Marcel van [University of Manchester, Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Manchester M20 4BX (United Kingdom)

    2016-07-15

    Purpose: Delivery errors during radiotherapy may lead to medical harm and reduced life expectancy for patients. Such serious incidents can be avoided by performing dose verification online, i.e., while the patient is being irradiated, creating the possibility of halting the linac in case of a large overdosage or underdosage. The offline EPID-based 3D in vivo dosimetry system clinically employed at our institute is in principle suited for online treatment verification, provided the system is able to complete 3D dose reconstruction and verification within 420 ms, the present acquisition time of a single EPID frame. It is the aim of this study to show that our EPID-based dosimetry system can be made fast enough to achieve online 3D in vivo dose verification. Methods: The current dose verification system was sped up in two ways. First, a new software package was developed to perform all computations that are not dependent on portal image acquisition separately, thus removing the need for doing these calculations in real time. Second, the 3D dose reconstruction algorithm was sped up via a new, multithreaded implementation. Dose verification was implemented by comparing planned with reconstructed 3D dose distributions delivered to two regions in a patient: the target volume and the nontarget volume receiving at least 10 cGy. In both volumes, the mean dose is compared, while in the nontarget volume, the near-maximum dose (D2) is compared as well. The real-time dosimetry system was tested by irradiating an anthropomorphic phantom with three VMAT plans: a 6 MV head-and-neck treatment plan, a 10 MV rectum treatment plan, and a 10 MV prostate treatment plan. In all plans, two types of serious delivery errors were introduced. The functionality of automatically halting the linac was also implemented and tested. Results: The precomputation time per treatment was ∼180 s/treatment arc, depending on gantry angle resolution. The complete processing of a single portal frame
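The per-volume comparison described in this record (mean dose plus the near-maximum dose D2) can be sketched numerically. The dose arrays below are random stand-ins, not EPID data, and D2 is taken here as the 98th-percentile dose of the volume:

```python
# Sketch of the dose-verification metrics described above: mean dose and the
# near-maximum dose D2 (dose to the hottest 2% of voxels). Dose values are
# random stand-ins, not reconstructed EPID data.
import random

random.seed(0)

def d2(doses):
    """Near-maximum dose: the 98th-percentile dose of a volume."""
    ranked = sorted(doses)
    return ranked[int(0.98 * (len(ranked) - 1))]

planned = [random.gauss(200.0, 10.0) for _ in range(1000)]  # cGy, illustrative
reconstructed = [d * 1.01 for d in planned]                 # simulate 1% overdosage

mean_dev = (sum(reconstructed) / len(reconstructed)) / (sum(planned) / len(planned)) - 1
d2_dev = d2(reconstructed) / d2(planned) - 1

print(f"mean-dose deviation: {mean_dev:+.1%}, D2 deviation: {d2_dev:+.1%}")

# flag a serious delivery error if either deviation exceeds a tolerance
TOLERANCE = 0.05   # illustrative threshold, not the clinical criterion
alert = abs(mean_dev) > TOLERANCE or abs(d2_dev) > TOLERANCE
print("halt linac:", alert)
```

In the online setting described, such a comparison would have to complete within the 420 ms acquisition time of a single EPID frame so that the linac can be halted mid-delivery.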

  4. Online 3D EPID-based dose verification: Proof of concept

    International Nuclear Information System (INIS)

    Spreeuw, Hanno; Rozendaal, Roel; Olaciregui-Ruiz, Igor; González, Patrick; Mans, Anton; Mijnheer, Ben; Herk, Marcel van

    2016-01-01

    Purpose: Delivery errors during radiotherapy may lead to medical harm and reduced life expectancy for patients. Such serious incidents can be avoided by performing dose verification online, i.e., while the patient is being irradiated, creating the possibility of halting the linac in case of a large overdosage or underdosage. The offline EPID-based 3D in vivo dosimetry system clinically employed at our institute is in principle suited for online treatment verification, provided the system is able to complete 3D dose reconstruction and verification within 420 ms, the present acquisition time of a single EPID frame. It is the aim of this study to show that our EPID-based dosimetry system can be made fast enough to achieve online 3D in vivo dose verification. Methods: The current dose verification system was sped up in two ways. First, a new software package was developed to perform all computations that are not dependent on portal image acquisition separately, thus removing the need for doing these calculations in real time. Second, the 3D dose reconstruction algorithm was sped up via a new, multithreaded implementation. Dose verification was implemented by comparing planned with reconstructed 3D dose distributions delivered to two regions in a patient: the target volume and the nontarget volume receiving at least 10 cGy. In both volumes, the mean dose is compared, while in the nontarget volume, the near-maximum dose (D2) is compared as well. The real-time dosimetry system was tested by irradiating an anthropomorphic phantom with three VMAT plans: a 6 MV head-and-neck treatment plan, a 10 MV rectum treatment plan, and a 10 MV prostate treatment plan. In all plans, two types of serious delivery errors were introduced. The functionality of automatically halting the linac was also implemented and tested. Results: The precomputation time per treatment was ∼180 s/treatment arc, depending on gantry angle resolution. The complete processing of a single portal frame

  5. Online 3D EPID-based dose verification: Proof of concept.

    Science.gov (United States)

    Spreeuw, Hanno; Rozendaal, Roel; Olaciregui-Ruiz, Igor; González, Patrick; Mans, Anton; Mijnheer, Ben; van Herk, Marcel

    2016-07-01

    Delivery errors during radiotherapy may lead to medical harm and reduced life expectancy for patients. Such serious incidents can be avoided by performing dose verification online, i.e., while the patient is being irradiated, creating the possibility of halting the linac in case of a large overdosage or underdosage. The offline EPID-based 3D in vivo dosimetry system clinically employed at our institute is in principle suited for online treatment verification, provided the system is able to complete 3D dose reconstruction and verification within 420 ms, the present acquisition time of a single EPID frame. It is the aim of this study to show that our EPID-based dosimetry system can be made fast enough to achieve online 3D in vivo dose verification. The current dose verification system was sped up in two ways. First, a new software package was developed to perform all computations that are not dependent on portal image acquisition separately, thus removing the need for doing these calculations in real time. Second, the 3D dose reconstruction algorithm was sped up via a new, multithreaded implementation. Dose verification was implemented by comparing planned with reconstructed 3D dose distributions delivered to two regions in a patient: the target volume and the nontarget volume receiving at least 10 cGy. In both volumes, the mean dose is compared, while in the nontarget volume, the near-maximum dose (D2) is compared as well. The real-time dosimetry system was tested by irradiating an anthropomorphic phantom with three VMAT plans: a 6 MV head-and-neck treatment plan, a 10 MV rectum treatment plan, and a 10 MV prostate treatment plan. In all plans, two types of serious delivery errors were introduced. The functionality of automatically halting the linac was also implemented and tested. The precomputation time per treatment was ∼180 s/treatment arc, depending on gantry angle resolution. 
The complete processing of a single portal frame, including dose verification, took
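The comparison step described in this abstract (mean dose in the target and nontarget volumes, plus near-maximum D2 in the nontarget volume) can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: dose grids are dense NumPy arrays, D2 is approximated as the 98th percentile of voxel doses, and the 10% tolerance that would trigger halting the linac is an assumed value.

```python
import numpy as np

def d2(dose):
    """Near-maximum dose D2: the dose received by the hottest 2% of voxels."""
    return float(np.percentile(dose, 98))

def verify_frame(planned, reconstructed, target_mask, tolerance=0.10):
    """Compare reconstructed vs. planned 3D dose in two regions.

    planned, reconstructed : 3D dose arrays (Gy)
    target_mask            : boolean array, True inside the target volume
    The nontarget volume is restricted to voxels planned to receive >= 10 cGy.
    Returns (ok, report); ok is False when any relative deviation exceeds
    the tolerance, which in an online system would halt the linac.
    """
    nontarget = (~target_mask) & (planned >= 0.10)  # 10 cGy = 0.10 Gy
    checks = {
        "target_mean": (planned[target_mask].mean(), reconstructed[target_mask].mean()),
        "nontarget_mean": (planned[nontarget].mean(), reconstructed[nontarget].mean()),
        "nontarget_D2": (d2(planned[nontarget]), d2(reconstructed[nontarget])),
    }
    report = {name: abs(r - p) / p for name, (p, r) in checks.items()}
    return all(dev <= tolerance for dev in report.values()), report
```

A per-frame check then reduces to one call of `verify_frame` on the dose reconstructed so far, which is why the per-frame processing budget (one EPID acquisition, 420 ms here) is the binding constraint.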

  6. Formal Verification of Real-Time System Requirements

    Directory of Open Access Journals (Sweden)

    Marcin Szpyrka

    2000-01-01

Full Text Available The methodology of system requirements verification presented in this paper proposes a practical procedure for reducing some deficiencies of requirements specification. The main problem considered is how to create a complete description of the system requirements without such deficiencies. Verification of the initially defined requirements is based on coloured Petri nets, which are useful for testing properties of system requirements such as completeness, consistency and optimality. An example of a lift controller is presented.
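The Petri-net style of requirements checking described above can be illustrated on an uncoloured toy net. The sketch below enumerates all reachable markings by breadth-first search and checks a safety requirement; the lift-controller net itself is invented for illustration and is not the one from the paper.

```python
from collections import deque

def reachable_markings(places, transitions, initial):
    """Breadth-first enumeration of all reachable markings of a
    place/transition net.  transitions is a list of (consume, produce)
    pairs, each a dict mapping place name -> token count."""
    idx = {p: i for i, p in enumerate(places)}
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        m = frontier.popleft()
        for consume, produce in transitions:
            # a transition is enabled when every consumed place holds enough tokens
            if all(m[idx[p]] >= n for p, n in consume.items()):
                nxt = list(m)
                for p, n in consume.items():
                    nxt[idx[p]] -= n
                for p, n in produce.items():
                    nxt[idx[p]] += n
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# Hypothetical lift-controller requirement: the door must never be open
# while the cabin is moving.
places = ("door_closed", "door_open", "idle", "moving")
transitions = [
    ({"door_closed": 1, "idle": 1}, {"door_open": 1, "idle": 1}),      # open door (only while idle)
    ({"door_open": 1}, {"door_closed": 1}),                            # close door
    ({"door_closed": 1, "idle": 1}, {"door_closed": 1, "moving": 1}),  # start moving (door closed)
    ({"moving": 1}, {"idle": 1}),                                      # stop
]
initial = (1, 0, 1, 0)  # door closed, cabin idle
markings = reachable_markings(places, transitions, initial)
unsafe = [m for m in markings if m[1] >= 1 and m[3] >= 1]  # door_open and moving together
```

An empty `unsafe` list is a consistency check of the requirements in the same spirit as the paper's completeness/consistency tests, done here by exhaustive state-space exploration.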

Finite Countermodel Based Verification for Program Transformation (A Case Study)

    Directory of Open Access Journals (Sweden)

    Alexei P. Lisitsa

    2015-12-01

Full Text Available Both automatic program verification and program transformation are based on program analysis. In the past decade a number of approaches using various automatic general-purpose program transformation techniques (partial deduction, specialization, supercompilation for verification of unreachability properties of computing systems were introduced and demonstrated. On the other hand, semantics-based unfold/fold program transformation methods themselves pose diverse kinds of reachability tasks and try to solve them, aiming at improving the semantics tree of the program being transformed. This means that some general-purpose verification methods may be used to strengthen program transformation techniques. This paper considers the question of how the finite-countermodel method for safety verification might be used in Turchin's supercompilation method. We extract a number of supercompilation sub-algorithms that try to solve reachability problems and demonstrate the use of an external countermodel finder for solving some of them.
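The finite-countermodel idea can be illustrated on a toy transition system: a counter that starts at 0 and steps by 2 can never reach 7, and the two-element model Z/2 (parity) witnesses this. The sketch below is only an illustrative toy; the FCM method in the paper searches for such finite models automatically with a model finder rather than fixing parity in advance.

```python
def parity_countermodel_proves(start, step, bad):
    """Toy finite-countermodel argument over the two-element model Z/2.

    Interpret every concrete counter value by its parity, then close the
    abstract initial state under the abstract transition.  If the bad
    state's parity never enters this closure, the bad state is
    unreachable in the (infinite) concrete system."""
    h = lambda n: n % 2                       # homomorphism into Z/2
    closure = {h(start)}
    while True:
        new = {(a + h(step)) % 2 for a in closure} - closure
        if not new:                           # fixed point reached
            break
        closure |= new
    return h(bad) not in closure
```

A return value of `False` does not mean the bad state is reachable, only that this particular finite model fails to prove unreachability, which is exactly the asymmetry a countermodel finder exploits.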

  8. Technical workshop on safeguards, verification technologies, and other related experience

    International Nuclear Information System (INIS)

    1998-01-01

The aim of the Technical Workshop on safeguards was to encourage a clearer understanding of the IAEA Safeguards System, its origins and evolution, and the present state of the art. Presentations by IAEA officials and outside experts also examined other components of the non-proliferation regime, current practices and procedures, and future prospects. A series of presentations described the interaction between global and regional verification systems and relevant past and present experience. The prominence given to state-of-the-art verification technologies such as environmental sampling, satellite imaging, and monitoring through remote and unattended techniques demonstrated, beyond any doubt, the essentially dynamic nature of verification. It is generally acknowledged that there have been major achievements in preventing the spread of nuclear weapons, but no verification system can in itself prevent proliferation.

  9. Technical workshop on safeguards, verification technologies, and other related experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-12-31

The aim of the Technical Workshop on safeguards was to encourage a clearer understanding of the IAEA Safeguards System, its origins and evolution, and the present state of the art. Presentations by IAEA officials and outside experts also examined other components of the non-proliferation regime, current practices and procedures, and future prospects. A series of presentations described the interaction between global and regional verification systems and relevant past and present experience. The prominence given to state-of-the-art verification technologies such as environmental sampling, satellite imaging, and monitoring through remote and unattended techniques demonstrated, beyond any doubt, the essentially dynamic nature of verification. It is generally acknowledged that there have been major achievements in preventing the spread of nuclear weapons, but no verification system can in itself prevent proliferation. Refs, figs, tabs

  10. Learning a Genetic Measure for Kinship Verification Using Facial Images

    Directory of Open Access Journals (Sweden)

    Lu Kou

    2015-01-01

Full Text Available Motivated by the key observation that children generally resemble their parents more than other persons with respect to facial appearance, distance metric (similarity) learning has been the dominant choice for state-of-the-art kinship verification via facial images in the wild. Most existing learning-based approaches to kinship verification, however, focus on learning a genetic similarity measure in a batch manner, limiting their scalability for practical applications with ever-growing amounts of data. To address this, we propose a new kinship verification approach that learns a sparse similarity measure in an online fashion. Experimental results on the kinship datasets show that our approach is highly competitive with state-of-the-art alternatives in terms of verification accuracy, yet superior in terms of scalability for practical applications.
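The general shape of online sparse similarity learning can be sketched as below. This is a generic illustration, not the paper's algorithm: the similarity is restricted to a diagonal form s(x, y) = Σᵢ wᵢxᵢyᵢ, updated with a perceptron/hinge rule, and sparsity comes from an L1 proximal (soft-threshold) step after each update.

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal step for an L1 penalty: shrinks each weight toward zero."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

class OnlineSparseSimilarity:
    """Online learner for a sparse diagonal similarity s(x, y) = sum_i w_i x_i y_i.

    Perceptron-style hinge update followed by an L1 proximal step, in the
    spirit of (but not identical to) the approach described above."""
    def __init__(self, dim, lr=0.1, lam=0.01):
        self.w = np.zeros(dim)
        self.lr, self.lam = lr, lam

    def score(self, x, y):
        return float(np.dot(self.w, x * y))

    def update(self, x, y, label):
        """label: +1 for a kin pair, -1 for a non-kin pair."""
        if label * self.score(x, y) <= 1.0:     # hinge margin violated
            self.w += self.lr * label * (x * y)
        self.w = soft_threshold(self.w, self.lr * self.lam)
```

Because each pair is processed once and discarded, memory and per-step cost stay constant as data grows, which is the scalability argument made in the abstract.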

  11. Accuracy verification methods theory and algorithms

    CERN Document Server

    Mali, Olli; Repin, Sergey

    2014-01-01

The importance of accuracy verification methods was understood at the very beginning of the development of numerical analysis. Recent decades have seen rapid growth in results related to adaptive numerical methods and a posteriori estimates. However, in this important area there often exists a noticeable gap between the mathematicians creating the theory and the researchers developing applied algorithms that could be used in engineering and scientific computations for guaranteed and efficient error control. The goals of the book are to (1) give a transparent explanation of the underlying mathematical theory in a style accessible not only to advanced numerical analysts but also to engineers and students; (2) present detailed, step-by-step algorithms that follow from the theory; and (3) discuss their advantages and drawbacks and areas of applicability, and give recommendations and examples.
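The flavour of the guaranteed ("functional") a posteriori estimates treated in the book can be shown on the model Poisson problem $-\Delta u = f$ with homogeneous Dirichlet data: for any conforming approximation $v$ and any auxiliary flux $y \in H(\operatorname{div})$,

```latex
\|\nabla(u - v)\|^{2}
  \;\le\; (1+\beta)\,\|y - \nabla v\|^{2}
  \;+\; \Bigl(1 + \tfrac{1}{\beta}\Bigr)\, C_{F}^{2}\,\|\operatorname{div} y + f\|^{2},
\qquad \forall\,\beta > 0,
```

where $C_F$ is the Friedrichs constant of the domain. Every term on the right is computable from $v$, $y$, and the data alone, so the bound holds regardless of how $v$ was obtained, which is what "guaranteed error control" means in this context.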

  12. Understanding the patient perspective on research access to national health records databases for conduct of randomized registry trials.

    Science.gov (United States)

    Avram, Robert; Marquis-Gravel, Guillaume; Simard, François; Pacheco, Christine; Couture, Étienne; Tremblay-Gravel, Maxime; Desplantie, Olivier; Malhamé, Isabelle; Bibas, Lior; Mansour, Samer; Parent, Marie-Claude; Farand, Paul; Harvey, Luc; Lessard, Marie-Gabrielle; Ly, Hung; Liu, Geoffrey; Hay, Annette E; Marc Jolicoeur, E

    2018-07-01

Use of health administrative databases is proposed for screening and monitoring of participants in randomized registry trials. However, access to these databases raises privacy concerns. We assessed patients' preferences regarding the use of personal information to link their research records with national health databases, as part of a hypothetical randomized registry trial. Cardiology patients were invited to complete an anonymous self-reported survey that ascertained preferences related to the concept of accessing government health databases for research, the type of personal identifiers to be shared, and the type of follow-up preferred as participants in a hypothetical trial. A total of 590 responders completed the survey (90% response rate); the majority were Caucasian (90.4%) and male (70.0%), with a median age of 65 years (interquartile range, 8). The majority of responders (80.3%) would grant researchers access to health administrative databases for screening and follow-up. To this end, responders endorsed the recording of their personal identifiers by researchers for future record linkage, including their name (90%) and health insurance number (83.9%), but fewer responders agreed with the recording of their social security number (61.4%). One responder characteristic was associated with granting researchers access to the administrative databases (OR: 1.69, 95% confidence interval: 1.03-2.90; p = 0.04). The majority of Cardiology patients surveyed supported the use of their personal identifiers to access administrative health databases and conduct long-term monitoring in the context of a randomized registry trial. Copyright © 2018 Elsevier Ireland Ltd. All rights reserved.
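The odds ratio and 95% confidence interval quoted above are standard 2x2-table statistics. For readers unfamiliar with them, a sketch of the computation follows; the counts are made up for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

        group 1:  a (outcome) | b (no outcome)
        group 2:  c (outcome) | d (no outcome)
    """
    or_ = (a * d) / (b * c)
    # standard error of log(OR) under the Wald approximation
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)
```

A CI whose lower bound exceeds 1 (as in the abstract's 1.03-2.90) corresponds to an association significant at the 5% level.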

  13. Towards Model Validation and Verification with SAT Techniques

    OpenAIRE

    Gogolla, Martin

    2010-01-01

    After sketching how system development and the UML (Unified Modeling Language) and the OCL (Object Constraint Language) are related, validation and verification with the tool USE (UML-based Specification Environment) is demonstrated. As a more efficient alternative for verification tasks, two approaches using SAT-based techniques are put forward: First, a direct encoding of UML and OCL with Boolean variables and propositional formulas, and second, an encoding employing an...
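The kind of SAT-style encoding sketched in this abstract can be illustrated with a tiny UML/OCL-like constraint. Below, the multiplicity constraint "every car has exactly one owner" is encoded over Boolean variables owns[c][p]; a real tool would emit CNF for a SAT solver, whereas this toy searches assignments exhaustively. All class and variable names are invented for illustration.

```python
from itertools import product

def exactly_one(bits):
    """Propositional constraint: exactly one of these variables is true."""
    return sum(bits) == 1

def find_model(n_cars, n_persons, forbidden=()):
    """Brute-force search over Boolean variables owns[c][p].

    Encodes the OCL-like multiplicity constraint 'every car has exactly
    one owner', plus optional forbidden (car, person) ownership pairs.
    Returns a satisfying assignment, or None if the model is inconsistent
    (the validation/verification question the SAT approach answers)."""
    n = n_cars * n_persons
    for bits in product([0, 1], repeat=n):
        owns = [bits[c * n_persons:(c + 1) * n_persons] for c in range(n_cars)]
        if all(exactly_one(row) for row in owns) and \
           all(not owns[c][p] for c, p in forbidden):
            return owns
    return None
```

Finding a model shows the constraints are satisfiable (the UML/OCL model has a valid instance); returning None exposes an inconsistency, exactly the dichotomy that makes SAT techniques attractive for model validation.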

  14. Engineering a static verification tool for GPU kernels

    OpenAIRE

    Bardsley, E; Betts, A; Chong, N; Collingbourne, P; Deligiannis, P; Donaldson, AF; Ketema, J; Liew, D; Qadeer, S

    2014-01-01

We report on practical experiences over the last 2.5 years related to the engineering of GPUVerify, a static verification tool for OpenCL and CUDA GPU kernels, plotting the progress of GPUVerify from a prototype to a fully functional and relatively efficient analysis tool. Our hope is that this experience report will serve the verification community by helping to inform future tooling efforts. © 2014 Springer International Publishing.

  15. Symposium on international safeguards: Verification and nuclear material security. Book of extended synopses. Addendum

    International Nuclear Information System (INIS)

    2001-01-01

The symposium covered the topics related to international safeguards, verification and nuclear materials security, namely: verification and nuclear material security; the NPT regime: progress and promises; the Additional Protocol as an important tool for the strengthening of the safeguards system; the nuclear threat and the nuclear threat initiative. Eighteen sessions dealt with the following subjects: the evolution of IAEA safeguards (including strengthened safeguards, present and future challenges; verification of correctness and completeness of initial declarations; implementation of the Additional Protocol, progress and experience; security of material; nuclear disarmament and ongoing monitoring and verification in Iraq; evolution of IAEA verification in relation to nuclear disarmament); integrated safeguards; physical protection and illicit trafficking; destructive analysis for safeguards; the additional protocol; innovative safeguards approaches; IAEA verification and nuclear disarmament; environmental sampling; safeguards experience; safeguards equipment; panel discussion on development of state systems of accountancy and control; information analysis in the strengthened safeguards system; satellite imagery and remote monitoring; emerging IAEA safeguards issues; verification technology for nuclear disarmament; the IAEA and the future of nuclear verification and security

  16. Formal verification of reactor process control software using assertion checking environment

    International Nuclear Information System (INIS)

    Sharma, Babita; Balaji, Sowmya; John, Ajith K.; Bhattacharjee, A.K.; Dhodapkar, S.D.

    2005-01-01

Assertion Checking Environment (ACE) was developed in-house for carrying out formal (rigorous/mathematical) functional verification of embedded software written in MISRA C. MISRA C is an industrially sponsored safe subset of the C programming language and is well accepted in the automotive and aerospace industries. ACE uses static assertion checking for the verification of MISRA C programs. First, the functional specifications of the program are derived from the specifications in the form of pre- and post-conditions for each C function. These pre- and post-conditions are then introduced as assertions (formal comments) in the program code. The annotated C code is then formally verified using ACE. In this paper we present our experience of using ACE for the formal verification of the process control software of a nuclear reactor. The Software Requirements Document (SRD) contained textual specifications of the process control software. The SRD was used by the designers to draw logic diagrams, which were given as input to a code generator. The verification of the generated C code was done at two levels: (i) verification against specifications derived from the logic diagrams, and (ii) verification against specifications derived from the SRD. In this work we checked approximately 600 functional specifications of the software, comprising roughly 15,000 lines of code. (author)
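The pre-/post-condition annotation style described above can be mimicked in a few lines. This sketch checks the contracts dynamically at run time, whereas ACE verifies them statically from the annotations; the clamp function and its contract are invented for illustration and are not from the reactor software.

```python
def check_contract(pre, post):
    """Decorator sketch of assertion-based verification: the pre- and
    post-conditions derived from the specification are checked at the
    function boundary."""
    def wrap(f):
        def g(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return g
    return wrap

# Hypothetical process-control function: clamp a sensor reading to a range.
# Pre-condition: the range is well formed.  Post-condition: the result
# lies within the range.
@check_contract(pre=lambda x, lo, hi: lo <= hi,
                post=lambda r, x, lo, hi: lo <= r <= hi)
def clamp(x, lo, hi):
    return max(lo, min(hi, x))
```

A static checker proves the same assertions hold on all inputs without running the program, which is what makes the approach suitable for safety-critical reactor software.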

  17. Perspectives on barriers and facilitators to minority recruitment for clinical trials among cancer center leaders, investigators, research staff, and referring clinicians: enhancing minority participation in clinical trials (EMPaCT).

    Science.gov (United States)

    Durant, Raegan W; Wenzel, Jennifer A; Scarinci, Isabel C; Paterniti, Debora A; Fouad, Mona N; Hurd, Thelma C; Martin, Michelle Y

    2014-04-01

The study of disparities in minority recruitment to cancer clinical trials has focused primarily on inquiries among minority populations. Yet very little is known about the perceptions of individuals actively involved in minority recruitment to clinical trials within cancer centers. Therefore, the authors assessed the perspectives of cancer center clinical and research personnel on barriers and facilitators to minority recruitment. In total, 91 qualitative interviews were conducted at 5 US cancer centers among 4 stakeholder groups: cancer center leaders, principal investigators, research staff, and referring clinicians. All interviews were recorded and transcribed. Qualitative analysis of the response data focused on identifying prominent themes related to barriers and facilitators to minority recruitment. The perspectives of the 4 stakeholder groups were largely overlapping, with some variations based on their unique roles in minority recruitment. Four prominent themes were identified: 1) racial and ethnic minorities are influenced by varying degrees of skepticism related to trial participation, 2) potential minority participants often face multilevel barriers that preclude them from being offered an opportunity to participate in a clinical trial, 3) facilitators at both the institutional and participant level potentially encourage minority recruitment, and 4) variation between internal and external trial referral procedures may limit clinical trial opportunities for racial and ethnic minorities. Multilevel approaches are needed to address barriers and optimize facilitators within cancer centers to enhance minority recruitment for cancer clinical trials. © 2014 American Cancer Society.

  18. IP cores design from specifications to production modeling, verification, optimization, and protection

    CERN Document Server

    Mohamed, Khaled Salah

    2016-01-01

This book describes the life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection. Various trade-offs in the design process are discussed, including those associated with many of the most common memory cores, controller IPs, and system-on-chip (SoC) buses. Readers will also benefit from the author's practical coverage of new verification methodologies, such as bug localization, UVM, and scan-chain. A SoC case study is presented to compare traditional verification with the new verification methodologies. • Discusses the entire life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection; • Introduces Verilog from both the implementation and the verification point of view; • Demonstrates how to use IP in applications such as memory controllers and SoC buses; • Describes a new ver...

  19. Collaborative research between academia and industry using a large clinical trial database: a case study in Alzheimer's disease

    DEFF Research Database (Denmark)

    Jones, Roy; Wilkinson, David; Lopez, Oscar L

    2011-01-01

Large clinical trials databases, developed over the course of a comprehensive clinical trial programme, represent an invaluable resource for clinical researchers. Data mining projects sponsored by industry that use these databases, however, are often not viewed favourably in the academic medical community because of concerns that commercial, rather than scientific, goals are the primary purpose of such endeavours. Thus, there are few examples of sustained collaboration between leading academic clinical researchers and industry professionals in a large-scale data mining project. We present here...

  20. Effect of Contract Research Organization Bureaucracy in Clinical Trial Management: A Model From Lung Cancer.

    Science.gov (United States)

    Gobbini, Elisa; Pilotto, Sara; Pasello, Giulia; Polo, Valentina; Di Maio, Massimo; Arizio, Francesca; Galetta, Domenico; Petrillo, Patrizia; Chiari, Rita; Matocci, Roberta; Di Costanzo, Alessandro; Di Stefano, Teresa Severina; Aglietta, Massimo; Cagnazzo, Celeste; Sperduti, Isabella; Bria, Emilio; Novello, Silvia

    2018-03-01

Contract research organization (CRO) support is widely used in clinical trial management, although its effect in terms of time savings and benefit has not yet been quantified. We performed a retrospective multicenter analysis of lung cancer trials to explore differences in terms of trial activation timelines and accrual for studies with and without CRO involvement. Results regarding study timelines, from the feasibility stage to first patient enrollment, were collected from 7 Italian thoracic oncology departments. The final accruals (screened/enrolled patients) are reported. We considered CRO-administered, sponsor-administered, and CRO-free trials according to who was responsible for the management of the crucial setup phases. Of 113 trials, 62 (54.9%) were CRO-administered, 34 (30.1%) were sponsor-administered, and 17 (15.0%) were CRO-free. The median time from the feasibility invitation to documentation obtainment was 151 days in the CRO-administered trials versus 128 in the sponsor-administered and 120 in the CRO-free trials. The time from document submission to contract signature was 142 days in the CRO-administered versus 128 in the sponsor-administered and 132 in the CRO-free trials. The time from global accrual opening to first patient enrollment was 247 days in the CRO-administered versus 194 in the sponsor-administered and 151 in the CRO-free trials. No significant differences were observed in terms of the median overall timeline: 21 months in the CRO-administered, 15 in the sponsor-administered, and 18 months in the CRO-free studies (P = .29). Although no statistically significant differences were identified, the results of our analysis support the idea that bureaucratic procedures might require more time in CRO-administered trials than in sponsor-administered and CRO-free studies. This bureaucratic delay could negatively affect Italian patients' screening and enrollment compared with other countries. Copyright © 2017 Elsevier Inc. All rights reserved.