WorldWideScience

Sample records for set validation matrix

  1. Evaluation of the separate effects tests (SET) validation matrix

    International Nuclear Information System (INIS)

    1996-11-01

    This work is the result of a one-year extended mandate given by the CSNI at the request of PWG 2 and the Task Group on Thermal Hydraulic System Behaviour (TG THSB) in late 1994. The aim was to evaluate the SET validation matrix in order to define the real needs for further experimental work. The statistical evaluation tables of the SET matrix provide an overview of the database, including the parameter ranges covered for each phenomenon and selected parameters, and pose questions to obtain answers concerning the need for additional experimental data with regard to the objective of nuclear power plant safety. A global view of the database is first presented, focussing on areas lacking data and on hot topics. A new systematic evaluation has been carried out based on the authors' technical judgments, yielding evaluation tables that contain global and indicative information. Four main parameters have been chosen as the most important and relevant: a state parameter given by the operating pressure of the tests; a flow parameter expressed as mass flux, mass flow rate or volumetric flow rate; a geometrical parameter provided through a typical dimension expressed by a diameter, an equivalent diameter (hydraulic or heated) or a cross-sectional area of the test sections; and an energy or heat transfer parameter given as the fluid temperature, the heat flux or the heat transfer surface temperature of the tests.

  2. S.E.T., CSNI Separate Effects Test Facility Validation Matrix

    International Nuclear Information System (INIS)

    1997-01-01

    1 - Description of test facility: The SET matrix of experiments is suitable for the developmental assessment of thermal-hydraulic transient system computer codes, by selecting individual tests from selected facilities relevant to each phenomenon. Test facilities differ from one another in geometrical dimensions, geometrical configuration and operating capabilities or conditions. Correlations between SET facilities and phenomena were classified on the basis of suitability for model validation (meaning that a facility is designed in such a way as to simulate the phenomena assumed to occur in a plant and is sufficiently instrumented); limited suitability for model validation (meaning that a facility is designed in such a way as to simulate the phenomena assumed to occur in a plant but has problems associated with imperfect scaling, different test fluids or insufficient instrumentation); and unsuitability for model validation. 2 - Description of test: Whereas integral experiments are usually designed to follow the behaviour of a reactor system in various off-normal or accident transients, separate effects tests focus on the behaviour of a single component, or on the characteristics of one thermal-hydraulic phenomenon. The construction of a separate effects test matrix is an attempt to collect together the best sets of openly available test data for code validation, assessment and improvement, from the wide range of experiments that have been carried out world-wide in the field of thermal hydraulics. In all, 2094 tests are included in the SET matrix.

  3. Containment Code Validation Matrix

    International Nuclear Information System (INIS)

    Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah

    2014-01-01

    The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal-hydraulic code validation (NEA/CSNI/R(1993)14) and in-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of PWR, BWR, CANDU and VVER reactors. It also provides an overview of ex-vessel corium retention (the core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis, along with a test description, is provided for each test.

  4. Overview of CSNI separate effects tests validation matrix

    Energy Technology Data Exchange (ETDEWEB)

    Aksan, N. [Paul Scherrer Institute, Villigen (Switzerland)]; Auria, F.D. [Univ. of Pisa (Italy)]; Glaeser, H. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), Garching (Germany)]; and others

    1995-09-01

    An internationally agreed separate effects test (SET) Validation Matrix for thermal-hydraulic system codes has been established by a sub-group of the Task Group on Thermal Hydraulic System Behaviour, as requested by the OECD/NEA Committee on the Safety of Nuclear Installations (CSNI) Principal Working Group No. 2 on Coolant System Behaviour. The construction of such a matrix is an attempt to collect together in a systematic way the best sets of openly available test data for code validation, assessment and improvement, and also for quantitative code assessment with respect to the quantification of uncertainties in the modelling of individual phenomena by the codes. The methodology developed in the process of establishing the CSNI SET validation matrix is itself an important outcome of the work. In addition, the choices made from the 187 identified facilities covering the 67 phenomena are examined, together with some discussion of the database.

  5. Position Error Covariance Matrix Validation and Correction

    Science.gov (United States)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
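A generic sketch (not taken from the presentation) of one such validation check: if the predicted position errors are truly Gaussian with covariance P, their squared Mahalanobis distances should follow a chi-square distribution with three degrees of freedom, which can be tested against observed or simulated residuals. The covariance values below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative 3x3 position error covariance at closest approach (km^2).
P = np.diag([0.04, 0.25, 0.01])

# Simulated position errors; in practice these would be residuals between
# predicted and observed positions.
errors = rng.multivariate_normal(np.zeros(3), P, size=500)

# For Gaussian errors with covariance P, the squared Mahalanobis distance
# e^T P^{-1} e is chi-square distributed with 3 degrees of freedom.
d2 = np.einsum("ij,jk,ik->i", errors, np.linalg.inv(P), errors)

# Kolmogorov-Smirnov test against chi2(3); a small p-value would indicate
# that P is not a valid representation of the position uncertainty.
ks_stat, p_value = stats.kstest(d2, stats.chi2(df=3).cdf)
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```

A covariance correction would then rescale or re-estimate P until such a test no longer rejects the assumed distribution.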

  6. Automatic Generation of Validated Specific Epitope Sets

    Directory of Open Access Journals (Sweden)

    Sebastian Carrasco Pro

    2015-01-01

    Accurate measurement of B and T cell responses is a valuable tool to study autoimmunity, allergies, immunity to pathogens, and host-pathogen interactions, and to assist in the design and evaluation of T cell vaccines and immunotherapies. In this context, it is desirable to elucidate a method to select validated reference sets of epitopes to allow detection of T and B cells. However, the ever-growing information contained in the Immune Epitope Database (IEDB) and the differences in quality and subjects studied between epitope assays make this task complicated. In this study, we develop a novel method to automatically select reference epitope sets according to a categorization system employed by the IEDB. From the sets generated, three epitope sets (EBV, mycobacteria and dengue) were experimentally validated by detection of T cell reactivity ex vivo from human donors. Furthermore, a web application that will potentially be implemented in the IEDB was created to allow users the capacity to generate customized epitope sets.

  7. Shield verification and validation action matrix summary

    International Nuclear Information System (INIS)

    Boman, C.

    1992-02-01

    WSRC-RP-90-26, Certification Plan for Reactor Analysis Computer Codes, describes a series of action items to be completed for certification of reactor analysis computer codes used in Technical Specifications development and for other safety and production support calculations. Validation and verification are an integral part of the certification process. This document identifies the work performed and documentation generated to satisfy these action items for the SHIELD, SHLDED, GEDIT, GENPRT, FIPROD, FPCALC, and PROCES modules of the SHIELD system; it is not a certification of the complete SHIELD system. Complete certification will follow at a later date. Each action item is discussed with the justification for its completion. Specific details of the work performed are not included in this document but can be found in the references. The validation and verification effort for the SHIELD, SHLDED, GEDIT, GENPRT, FIPROD, FPCALC, and PROCES modules of the SHIELD system computer code is complete.

  8. GRIMHX verification and validation action matrix summary

    International Nuclear Information System (INIS)

    Trumble, E.F.

    1991-12-01

    WSRC-RP-90-026, Certification Plan for Reactor Analysis Computer Codes, describes a series of action items to be completed for certification of reactor analysis computer codes used in Technical Specifications development and for other safety and production support calculations. Validation and verification of the code are an integral part of this process. This document identifies the work performed and documentation generated to satisfy these action items for the Reactor Physics computer code GRIMHX. Each action item is discussed with the justification for its completion. Specific details of the work performed are not included in this document but are found in the references. The publication of this document signals that the validation and verification effort for the GRIMHX code is complete.

  9. In-vessel core degradation code validation matrix

    International Nuclear Information System (INIS)

    Haste, T.J.; Adroguer, B.; Gauntt, R.O.; Martinez, J.A.; Ott, L.J.; Sugimoto, J.; Trambauer, K.

    1996-01-01

    The objective of the current Validation Matrix is to define a basic set of experiments, for which comparison of the measured and calculated parameters forms a basis for establishing the accuracy of test predictions, covering the full range of in-vessel core degradation phenomena expected in light water reactor severe accident transients. The scope of the review covers PWR and BWR designs of Western origin: the coverage of phenomena extends from the initial heat-up through to the introduction of melt into the lower plenum. Concerning fission product behaviour, the effect of core degradation on fission product release is considered. The report provides brief overviews of the main LWR severe accident sequences and of the dominant phenomena involved. The experimental database is summarised. These data are cross-referenced against a condensed set of the phenomena and test condition headings presented earlier, judging the results against a set of selection criteria and identifying key tests of particular value. The main conclusions and recommendations are listed. (K.A.)

  10. Setting research priorities by applying the combined approach matrix.

    Science.gov (United States)

    Ghaffar, Abdul

    2009-04-01

    Priority setting in health research is a dynamic process. Different organizations and institutes have been working in the field of research priority setting for many years. In 1999 the Global Forum for Health Research presented a research priority setting tool called the Combined Approach Matrix, or CAM. Since its development, the CAM has been successfully applied to set research priorities for diseases, conditions and programmes at global, regional and national levels. This paper briefly explains the CAM methodology and how it can be applied in different settings, giving examples, describing challenges encountered in the process of setting research priorities, and providing recommendations for further work in this field. The construct and design of the CAM are explained, along with the different steps needed, including the planning and organization of a priority-setting exercise. The application of the CAM is described using three examples: the first concerns setting research priorities for a global programme, the second describes application at the country level, and the third setting research priorities for diseases. Effective application of the CAM in different and diverse environments proves its utility as a tool for setting research priorities. Potential challenges encountered in the process of research priority setting are discussed and some recommendations for further work in this field are provided.

  11. 45 CFR 162.1011 - Valid code sets.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Valid code sets. 162.1011 Section 162.1011 Public... ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates specified by the organization responsible for maintaining that code set. ...

  12. The provisional matrix: setting the stage for tissue repair outcomes.

    Science.gov (United States)

    Barker, Thomas H; Engler, Adam J

    2017-07-01

    Since its conceptualization in the 1980s, the provisional matrix has often been characterized as a simple fibrin-containing scaffold for wound healing that supports the nascent blood clot and is functionally distinct from the basement membrane. However, subsequent advances have shown that this matrix is far from passive, with distinct compositional differences as the wound matures, and an active role in wound remodeling. Here we review the stages of this matrix, provide an update on the state of our understanding of the provisional matrix, and present some of the outstanding issues related to the provisional matrix, its components, and their assembly and use in vivo. Copyright © 2017. Published by Elsevier B.V.

  13. Reliability and Validity of 10 Different Standard Setting Procedures.

    Science.gov (United States)

    Halpin, Glennelle; Halpin, Gerald

    Research indicating that different cut-off points result from the use of different standard-setting techniques leaves decision makers with a disturbing dilemma: Which standard-setting method is best? This investigation of the reliability and validity of 10 different standard-setting approaches was designed to provide information that might help…

  14. RELAP-7 Software Verification and Validation Plan: Requirements Traceability Matrix (RTM) Part 1 – Physics and numerical methods

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yong Joon [Idaho National Lab. (INL), Idaho Falls, ID (United States); Yoo, Jun Soo [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This INL plan comprehensively describes the Requirements Traceability Matrix (RTM) for the main physics and numerical methods of RELAP-7. The plan also describes the testing-based software verification and validation (SV&V) process, a set of specially designed software models used to test RELAP-7.

  15. Validation of the PHEEM instrument in a Danish hospital setting

    DEFF Research Database (Denmark)

    Aspegren, Knut; Bastholt, Lars; Bested, K.M.

    2007-01-01

    The Postgraduate Hospital Educational Environment Measure (PHEEM) has been translated into Danish and then validated, with good internal consistency, by 342 Danish junior and senior hospital doctors. Four of the 40 items are culturally dependent in the Danish hospital setting. Factor analysis demonstrated that seven items are interconnected. This information can be used to shorten the instrument by perhaps another three items.

  16. Minimal set of auxiliary fields and S-matrix for extended supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Fradkin, E S; Vasiliev, M A [Lebedev Physical Institute, Moscow

    1979-05-19

    A minimal set of auxiliary fields for linearized SO(2) supergravity and a one-parameter extension of the minimal auxiliary fields in SO(1) supergravity are constructed. The expression for the S-matrix in SO(2) supergravity is given.

  17. Validation of the TRUST tool in a Greek perioperative setting.

    Science.gov (United States)

    Chatzea, Vasiliki-Eirini; Sifaki-Pistolla, Dimitra; Dey, Nilanjan; Melidoniotis, Evangelos

    2017-06-01

    The aim of this study was to translate, culturally adapt and validate the TRUST questionnaire in a Greek perioperative setting. The TRUST questionnaire assesses the relationship between trust and performance. The study assessed the levels of trust and performance in the surgery and anaesthesiology department during a very stressful period for Greece (economic crisis) and offered a user friendly and robust assessment tool. The study concludes that the Greek version of the TRUST questionnaire is a reliable and valid instrument for measuring team performance among Greek perioperative teams. Copyright the Association for Perioperative Practice.

  18. On the validity of cosmological Fisher matrix forecasts

    International Nuclear Information System (INIS)

    Wolz, Laura; Kilbinger, Martin; Weller, Jochen; Giannantonio, Tommaso

    2012-01-01

    We present a comparison of Fisher matrix forecasts for cosmological probes with Markov Chain Monte Carlo (MCMC) posterior likelihood estimation methods. We analyse the performance of future Dark Energy Task Force (DETF) stage-III and stage-IV dark-energy surveys using supernovae, baryon acoustic oscillations and weak lensing as probes. We concentrate in particular on the dark-energy equation of state parameters w_0 and w_a. For purely geometrical probes, and especially when marginalising over w_a, we find considerable disagreement between the two methods, since in this case the Fisher matrix cannot reproduce the highly non-elliptical shape of the likelihood function. More quantitatively, the Fisher method underestimates the marginalized errors for purely geometrical probes by 30%-70%. For cases including structure formation, such as weak lensing, we find that the posterior probability contours from the Fisher matrix estimation are in good agreement with the MCMC contours, with the forecasted errors changing only at the 5% level. We then explore non-linear transformations resulting in physically-motivated parameters and investigate whether these parameterisations exhibit Gaussian behaviour. We conclude that for the purely geometrical probes and, more generally, in cases where it is not known whether the likelihood is close to Gaussian, the Fisher matrix is not the appropriate tool to produce reliable forecasts.
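The abstract's central point, that the Fisher approximation fails when the likelihood is strongly non-Gaussian, can be reproduced with a deliberately simple one-parameter toy model (not from the paper): for a non-linear model m(theta) = theta**3, the Fisher estimate 1/sqrt(F) underestimates the width obtained by integrating the full likelihood.

```python
import numpy as np

sigma = 0.1       # measurement noise (illustrative)
theta_fid = 0.5   # fiducial parameter value (illustrative)

# Fisher forecast for a single parameter: F = (dm/dtheta)^2 / sigma^2,
# so the forecast error is 1/sqrt(F) = sigma / |dm/dtheta|.
dm_dtheta = 3 * theta_fid**2
fisher_error = sigma / dm_dtheta

# "MCMC analogue": evaluate the full likelihood on a fine grid and
# compute the marginalized standard deviation directly.
theta = np.linspace(-1.0, 2.0, 20001)
dtheta = theta[1] - theta[0]
post = np.exp(-0.5 * (theta**3 - theta_fid**3) ** 2 / sigma**2)
post /= post.sum() * dtheta                      # normalize the posterior
mean = (theta * post).sum() * dtheta
full_error = np.sqrt(((theta - mean) ** 2 * post).sum() * dtheta)

# The skewed, non-elliptical likelihood makes the true width larger
# than the Gaussian (Fisher) approximation suggests.
print(f"Fisher error: {fisher_error:.3f}, full-likelihood error: {full_error:.3f}")
```

The same mechanism, a likelihood that is far from Gaussian in the parameter, is what drives the 30%-70% underestimates reported for the geometrical probes.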

  19. CSNI Integral test facility validation matrix for the assessment of thermal-hydraulic codes for LWR LOCA and transients

    International Nuclear Information System (INIS)

    1996-07-01

    This report deals with an internationally agreed integral test facility (ITF) matrix for the validation of best-estimate thermal-hydraulic computer codes. Firstly, the main physical phenomena that occur during the considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. The construction of such a matrix is an attempt to collect together in a systematic way the best sets of openly available test data for code validation, assessment and improvement, including quantitative assessment of uncertainties in the modelling of phenomena by the codes. In addition to this objective, it is an attempt to record information which has been generated around the world over the last 20 years, so that it is more accessible to present and future workers in the field than would otherwise be the case.

  20. Development and validation of a job exposure matrix for physical risk factors in low back pain.

    Directory of Open Access Journals (Sweden)

    Svetlana Solovieva

    OBJECTIVES: The aim was to construct and validate a gender-specific job exposure matrix (JEM) for physical exposures to be used in epidemiological studies of low back pain (LBP). MATERIALS AND METHODS: We utilized two large Finnish population surveys, one to construct the JEM and another to test matrix validity. The exposure axis of the matrix included exposures relevant to LBP (heavy physical work, heavy lifting, awkward trunk posture and whole body vibration) and exposures that increase the biomechanical load on the low back (arm elevation) or those that in combination with other known risk factors could be related to LBP (kneeling or squatting). Job titles with similar work tasks and exposures were grouped. Exposure information was based on face-to-face interviews. Validity of the matrix was explored by comparing the JEM (group-based) binary measures with individual-based measures. The predictive validity of the matrix against LBP was evaluated by comparing the associations of the group-based (JEM) exposures with those of individual-based exposures. RESULTS: The matrix includes 348 job titles, representing 81% of all Finnish job titles in the early 2000s. The specificity of the constructed matrix was good, especially in women. The validity, measured with the kappa statistic, ranged from good to poor, being fair for most exposures. In men, all group-based (JEM) exposures were statistically significantly associated with one-month prevalence of LBP. In women, four out of six group-based exposures showed an association with LBP. CONCLUSIONS: The gender-specific JEM for physical exposures showed relatively high specificity without compromising sensitivity. The matrix can therefore be considered a valid instrument for exposure assessment in large-scale epidemiological studies, when more precise but more labour-intensive methods are not feasible. Although the matrix was based on Finnish data, we foresee that it could be applicable, with some modifications, in other countries with a similar level of technology.

  1. Development and validation of a job exposure matrix for physical risk factors in low back pain.

    Science.gov (United States)

    Solovieva, Svetlana; Pehkonen, Irmeli; Kausto, Johanna; Miranda, Helena; Shiri, Rahman; Kauppinen, Timo; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira

    2012-01-01

    The aim was to construct and validate a gender-specific job exposure matrix (JEM) for physical exposures to be used in epidemiological studies of low back pain (LBP). We utilized two large Finnish population surveys, one to construct the JEM and another to test matrix validity. The exposure axis of the matrix included exposures relevant to LBP (heavy physical work, heavy lifting, awkward trunk posture and whole body vibration) and exposures that increase the biomechanical load on the low back (arm elevation) or those that in combination with other known risk factors could be related to LBP (kneeling or squatting). Job titles with similar work tasks and exposures were grouped. Exposure information was based on face-to-face interviews. Validity of the matrix was explored by comparing the JEM (group-based) binary measures with individual-based measures. The predictive validity of the matrix against LBP was evaluated by comparing the associations of the group-based (JEM) exposures with those of individual-based exposures. The matrix includes 348 job titles, representing 81% of all Finnish job titles in the early 2000s. The specificity of the constructed matrix was good, especially in women. The validity, measured with the kappa statistic, ranged from good to poor, being fair for most exposures. In men, all group-based (JEM) exposures were statistically significantly associated with one-month prevalence of LBP. In women, four out of six group-based exposures showed an association with LBP. The gender-specific JEM for physical exposures showed relatively high specificity without compromising sensitivity. The matrix can therefore be considered a valid instrument for exposure assessment in large-scale epidemiological studies, when more precise but more labour-intensive methods are not feasible. Although the matrix was based on Finnish data, we foresee that it could be applicable, with some modifications, in other countries with a similar level of technology.
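The agreement between group-based (JEM) and individual-based binary exposure measures can be quantified with Cohen's kappa, the statistic named in the abstract; the sketch below uses hypothetical data, not the Finnish survey data.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two binary ratings of the same
    subjects, corrected for agreement expected by chance from the marginals."""
    a, b = np.asarray(a), np.asarray(b)
    p_obs = np.mean(a == b)
    p_chance = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical subjects: individually assessed exposure vs. the binary
# exposure the JEM assigns from each subject's job title.
individual = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]
jem        = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]

print(f"kappa = {cohens_kappa(individual, jem):.2f}")  # 0.60
```

Here 8 of 10 subjects agree (observed agreement 0.8) while chance agreement is 0.5, giving kappa = 0.6; on conventional scales this is the boundary between "moderate" and "good" agreement.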

  2. TRAC-P validation test matrix. Revision 1.0

    International Nuclear Information System (INIS)

    Hughes, E.D.; Boyack, B.E.

    1997-01-01

    This document briefly describes the elements of the Nuclear Regulatory Commission's (NRC's) software quality assurance program leading to software (code) qualification and identifies a test matrix for qualifying Transient Reactor Analysis Code (TRAC)-Pressurized Water Reactor Version (-P), or TRAC-P, to the NRC's software quality assurance requirements. Code qualification is the outcome of several software life-cycle activities, specifically, (1) Requirements Definition, (2) Design, (3) Implementation, and (4) Qualification Testing. The major objective of this document is to define the TRAC-P Qualification Testing effort

  3. TRAC-P validation test matrix. Revision 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Hughes, E.D.; Boyack, B.E.

    1997-09-05

    This document briefly describes the elements of the Nuclear Regulatory Commission's (NRC's) software quality assurance program leading to software (code) qualification and identifies a test matrix for qualifying Transient Reactor Analysis Code (TRAC)-Pressurized Water Reactor Version (-P), or TRAC-P, to the NRC's software quality assurance requirements. Code qualification is the outcome of several software life-cycle activities, specifically, (1) Requirements Definition, (2) Design, (3) Implementation, and (4) Qualification Testing. The major objective of this document is to define the TRAC-P Qualification Testing effort.

  4. Validation matrix for the assessment of thermal-hydraulic codes for VVER LOCA and transients. A report by the OECD support group on the VVER thermal-hydraulic code validation matrix

    International Nuclear Information System (INIS)

    2001-06-01

    This report deals with an internationally agreed experimental test facility matrix for the validation of best estimate thermal-hydraulic computer codes applied for the analysis of VVER reactor primary systems in accident and transient conditions. Firstly, the main physical phenomena that occur during the considered accidents are identified, test types are specified, and test facilities that supplement the CSNI CCVMs and are suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. The construction of VVER Thermal-Hydraulic Code Validation Matrix follows the logic of the CSNI Code Validation Matrices (CCVM). Similar to the CCVM it is an attempt to collect together in a systematic way the best sets of available test data for VVER specific code validation, assessment and improvement, including quantitative assessment of uncertainties in the modelling of phenomena by the codes. In addition to this objective, it is an attempt to record information which has been generated in countries operating VVER reactors over the last 20 years so that it is more accessible to present and future workers in that field than would otherwise be the case. (authors)

  5. The Visual Matrix Method: Imagery and Affect in a Group-Based Research Setting

    Directory of Open Access Journals (Sweden)

    Lynn Froggett

    2015-07-01

    The visual matrix is a method for researching shared experience, stimulated by sensory material relevant to a research question. It is led by imagery, visualization and affect, which in the matrix take precedence over discourse. The method enables the symbolization of imaginative and emotional material which might not otherwise be articulated, and allows "unthought" dimensions of experience to emerge into consciousness in a participatory setting. We describe the process of the matrix with reference to the study "Public Art and Civic Engagement" (FROGGETT, MANLEY, ROY, PRIOR & DOHERTY, 2014), in which it was developed and tested. Subsequently, examples of its use in other contexts are provided. Both the matrix and post-matrix discussions are described, as is the interpretive process that follows. Theoretical sources are highlighted: its origins in social dreaming; the atemporal, associative nature of the thinking during and after the matrix, which we describe through the Deleuzian idea of the rhizome; and the hermeneutic analysis, which draws from object relations theory and the Lorenzerian tradition of scenic understanding. The matrix has been conceptualized as a "scenic rhizome" to account for its distinctive quality and hybrid origins in research practice. The scenic rhizome operates as a "third" between participants and the "objects" of contemplation. We suggest that some of the drawbacks of other group-based methods, namely the tendency for inter-personal dynamics to dominate the event, are avoided in the visual matrix. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs150369

  6. New angular quadrature sets: effect on the conditioning number of the LTSN two dimensional transport matrix

    International Nuclear Information System (INIS)

    Hauser, Eliete Biasotto; Romero, Debora Angrizano

    2009-01-01

    The main objective of this work is to employ new angular quadrature sets based on Legendre and Chebyshev polynomials, and to analyse their effect on the conditioning number of the LTSN matrix for the discrete-ordinates neutron transport problem in two-dimensional Cartesian geometry, with isotropic scattering and one energy group, in non-multiplicative homogeneous domains.

  7. Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection

    Science.gov (United States)

    Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd

    2015-02-01

    Microarray technology involves placing an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. The widespread use of microarray technology is largely due to its ability to perform simultaneous analysis of thousands of genes in a massively parallel manner in one experiment. Hence, it provides valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples, due to various constraints. Therefore, the sample covariance matrix in Hotelling's T2 statistic is not positive definite and becomes singular, and thus cannot be inverted. In this research, Hotelling's T2 statistic is combined with a shrinkage approach as an alternative estimation of the covariance matrix to detect significant gene sets. The use of a shrinkage covariance matrix overcomes the singularity problem by converting an unbiased but singular estimator into an improved, biased estimator of the covariance matrix. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques in many tested conditions.
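A minimal sketch of the idea described above, assuming a diagonal shrinkage target and illustrative `shrink` and `trim` values (the paper's actual estimator and parameter choices may differ):

```python
import numpy as np
from scipy import stats

def trimmed_mean_shrinkage_cov(X, shrink=0.5, trim=0.1):
    """Shrinkage covariance estimate centred on a robust trimmed mean.
    Shrinking toward the diagonal yields a positive-definite (invertible)
    matrix even when there are far fewer samples than genes."""
    # Robust location: drop the top and bottom `trim` fraction per gene.
    center = stats.trim_mean(X, proportiontocut=trim, axis=0)
    Xc = X - center
    S = Xc.T @ Xc / (X.shape[0] - 1)   # sample covariance (singular if p > n)
    target = np.diag(np.diag(S))       # shrinkage target: variances only
    return (1 - shrink) * S + shrink * target

# p >> n scenario typical of microarrays: 200 genes, only 12 samples.
rng = np.random.default_rng(1)
X = rng.normal(size=(12, 200))

rank_plain = np.linalg.matrix_rank(np.cov(X, rowvar=False))
Sigma = trimmed_mean_shrinkage_cov(X)
print(rank_plain)                           # far below 200: not invertible
print(np.linalg.eigvalsh(Sigma).min() > 0)  # True: invertible, usable in T2
```

Because the plain sample covariance has rank at most n-1, Hotelling's T2 cannot be formed from it directly; the shrunk matrix is positive definite by construction and can be inverted.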

  8. Discriminant Validity of the WISC-IV Culture-Language Interpretive Matrix

    Science.gov (United States)

    Styck, Kara M.; Watkins, Marley W.

    2014-01-01

    The Culture-Language Interpretive Matrix (C-LIM) was developed to help practitioners determine the validity of test scores obtained from students who are culturally and linguistically different from the normative group of a test. The present study used an idiographic approach to investigate the diagnostic utility of the C-LIM for the Wechsler…

  9. Numerical modelling of transdermal delivery from matrix systems: parametric study and experimental validation with silicone matrices.

    Science.gov (United States)

    Snorradóttir, Bergthóra S; Jónsdóttir, Fjóla; Sigurdsson, Sven Th; Másson, Már

    2014-08-01

    A model is presented for transdermal drug delivery from single-layered silicone matrix systems. The work is based on our previous results that, in particular, extend the well-known Higuchi model. Recently, we have introduced a numerical transient model describing matrix systems where the drug dissolution can be non-instantaneous. Furthermore, our model can describe complex interactions within a multi-layered matrix and the matrix to skin boundary. The power of the modelling approach presented here is further illustrated by allowing the possibility of a donor solution. The model is validated by a comparison with experimental data, as well as validating the parameter values against each other, using various configurations with donor solution, silicone matrix and skin. Our results show that the model is a good approximation to real multi-layered delivery systems. The model offers the ability of comparing drug release for ibuprofen and diclofenac, which cannot be analysed by the Higuchi model because the dissolution in the latter case turns out to be limited. The experiments and numerical model outlined in this study could also be adjusted to more general formulations, which enhances the utility of the numerical model as a design tool for the development of drug-loaded matrices for trans-membrane and transdermal delivery. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  10. Generation of the covariance matrix for a set of nuclear data produced by collapsing a larger parent set through the weighted averaging of equivalent data points

    International Nuclear Information System (INIS)

    Smith, D.L.

    1987-01-01

    A method is described for generating the covariance matrix of a set of experimental nuclear data which has been collapsed in size by the averaging of equivalent data points belonging to a larger parent data set. It is assumed that the data values and covariance matrix for the parent set are provided. The collapsed set is obtained by a proper weighted-averaging procedure based on the method of least squares. It is then shown by means of the law of error propagation that the elements of the covariance matrix for the collapsed set are linear combinations of elements from the parent set covariance matrix. The coefficients appearing in these combinations are binary products of the same coefficients which appear as weighting factors in the data collapsing procedure. As an example, the procedure is applied to a collection of recently-measured integral neutron-fission cross-section ratios. (orig.)
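    The error-propagation step described here (elements of the collapsed covariance as linear combinations of parent elements, with binary products of the averaging weights as coefficients) amounts to Vy = W V Wᵀ for a collapsing matrix W. A toy sketch with invented numbers:

    ```python
    import numpy as np

    # Parent set: 4 data points, of which points 0 & 1 and points 2 & 3 are
    # "equivalent" and are collapsed by weighted averaging.
    x = np.array([1.02, 0.98, 2.10, 1.90])
    V = np.array([[0.04, 0.01, 0.00, 0.00],
                  [0.01, 0.09, 0.00, 0.00],
                  [0.00, 0.00, 0.16, 0.02],
                  [0.00, 0.00, 0.02, 0.04]])   # parent covariance matrix

    def collapse(x, V, groups):
        """Weighted average within each group (inverse-variance weights,
        ignoring within-group correlations for simplicity), then propagate
        the full parent covariance via y = W x  =>  Vy = W V W^T."""
        W = np.zeros((len(groups), len(x)))
        for i, g in enumerate(groups):
            w = 1.0 / np.diag(V)[list(g)]
            W[i, list(g)] = w / w.sum()
        return W @ x, W @ V @ W.T

    y, Vy = collapse(x, V, [(0, 1), (2, 3)])
    print(y)    # collapsed values
    print(Vy)   # collapsed covariance: elements bilinear in the weights
    ```

    A full least-squares treatment would use the complete within-group covariance blocks to form the weights; the diagonal weighting above is a simplification for illustration.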

  11. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    Science.gov (United States)

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
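    The limitation discussed in this note can be reproduced with a tiny invented bi-objective example: every weighted-sum minimiser is a supported Pareto point, so an unsupported point on the front is never found no matter how finely the weights are scanned:

    ```python
    # Toy bi-objective points (both objectives minimised); invented data.
    points = [(1, 9), (6, 6), (9, 1)]  # (f1, f2)

    def pareto_front(pts):
        """Points not dominated by any other point."""
        return [p for p in pts
                if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in pts)]

    def supported(pts, steps=1000):
        """Points that minimise some weighted sum w*f1 + (1-w)*f2."""
        found = set()
        for i in range(steps + 1):
            w = i / steps
            found.add(min(pts, key=lambda p: w * p[0] + (1 - w) * p[1]))
        return found

    front = pareto_front(points)
    unsupported = [p for p in front if p not in supported(points)]
    print(front)        # all three points are Pareto efficient
    print(unsupported)  # (6, 6) lies inside the convex hull of the front,
                        # so no weighted sum ever selects it
    ```

    Heuristics such as the one proposed in the paper are needed precisely to recover points like (6, 6).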

  12. A set of pathological tests to validate new finite elements

    Indian Academy of Sciences (India)


    The finite element method entails several approximations. Hence it ... researchers have designed several pathological tests to validate any new finite element. The .... Three dimensional thick shell elements using a hybrid/mixed formulation.

  13. Validity of Two WPPSI Short Forms in Outpatient Clinic Settings.

    Science.gov (United States)

    Haynes, Jack P.; Atkinson, David

    1983-01-01

    Investigated the validity of subtest short forms for the Wechsler Preschool and Primary Scale of Intelligence in an outpatient population of 116 children. Data showed that the short forms underestimated actual level of intelligence and supported use of a short form only as a brief screening device. (LLL)

  14. Data Set for Empirical Validation of Double Skin Facade Model

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

    During the recent years the attention to the double skin facade (DSF) concept has greatly increased. Nevertheless, the application of the concept depends on whether a reliable model for simulation of the DSF performance will be developed or pointed out. This is, however, not possible to do, until...... the International Energy Agency (IEA) Task 34 Annex 43. This paper describes the full-scale outdoor experimental test facility ‘the Cube', where the experiments were conducted, the experimental set-up and the measurements procedure for the data sets. The empirical data is composed for the key-functioning modes...

  15. Numerical Aspects of Atomic Physics: Helium Basis Sets and Matrix Diagonalization

    Science.gov (United States)

    Jentschura, Ulrich; Noble, Jonathan

    2014-03-01

    We present a matrix diagonalization algorithm for complex symmetric matrices, which can be used to determine the resonance energies of auto-ionizing states of comparatively simple quantum many-body systems such as helium. The algorithm is based on multi-precision arithmetic and proceeds via a tridiagonalization of the complex symmetric (not necessarily Hermitian) input matrix using generalized Householder transformations. Example calculations involving so-called PT-symmetric quantum systems lead to reference values which pertain to the imaginary cubic perturbation (the imaginary cubic anharmonic oscillator). We then proceed to novel basis sets for the helium atom and present results for Bethe logarithms in hydrogen and helium, obtained using the enhanced numerical techniques. Some intricacies of ``canned'' algorithms such as those used in LAPACK will be discussed. Our algorithm, for complex symmetric matrices such as those describing cubic resonances after complex scaling, is faster than LAPACK's built-in routines for specific classes of input matrices. It also offers flexibility in terms of the calculation of the so-called implicit shift, which is used in order to ``pivot'' the system toward convergence to diagonal form. We conclude with a wider overview.

  16. GSMA: Gene Set Matrix Analysis, An Automated Method for Rapid Hypothesis Testing of Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Chris Cheadle

    2007-01-01

    Background: Microarray technology has become highly valuable for identifying complex global changes in gene expression patterns. The assignment of functional information to these complex patterns remains a challenging task in effectively interpreting data and correlating results across experiments, projects and laboratories. Methods that allow the rapid and robust evaluation of multiple functional hypotheses increase the power of individual researchers to mine gene expression data more efficiently. Results: We have developed gene set matrix analysis (GSMA) as a useful method for the rapid testing of group-wise up- or downregulation of gene expression simultaneously for multiple lists of genes (gene sets) against entire distributions of gene expression changes (datasets) for single or multiple experiments. The utility of GSMA lies in its flexibility to rapidly poll gene sets, related by known biological function or designated solely by the end-user, against large numbers of datasets simultaneously. Conclusions: GSMA provides a simple and straightforward method for hypothesis testing in which genes are tested by groups across multiple datasets for patterns of expression enrichment.

  17. Good validity of the international spinal cord injury quality of life basic data set

    NARCIS (Netherlands)

    Post, M. W. M.; Adriaansen, J. J. E.; Charlifue, S.; Biering-Sorensen, F.; van Asbeck, F. W. A.

    Study design: Cross-sectional validation study. Objectives: To examine the construct and concurrent validity of the International Spinal Cord Injury (SCI) Quality of Life (QoL) Basic Data Set. Setting: Dutch community. Participants: People 28-65 years of age, who obtained their SCI between 18 and 35

  18. EXAMINATION OF A PROPOSED VALIDATION DATA SET USING CFD CALCULATIONS

    International Nuclear Information System (INIS)

    Johnson, Richard W.

    2009-01-01

    The United States Department of Energy is promoting the resurgence of nuclear power in the U. S. for both electrical power generation and production of process heat required for industrial processes such as the manufacture of hydrogen for use as a fuel in automobiles. The DOE project is called the next generation nuclear plant (NGNP) and is based on a Generation IV reactor concept called the very high temperature reactor (VHTR), which will use helium as the coolant at temperatures ranging from 450 C to perhaps 1000 C. While computational fluid dynamics (CFD) has not been used for past safety analysis for nuclear reactors in the U. S., it is being considered for such for future reactors. It is fully recognized that CFD simulation codes will have to be validated for flow physics reasonably close to actual fluid dynamic conditions expected in normal and accident operational situations. To this end, experimental data have been obtained in a scaled model of a narrow slice of the lower plenum of a prismatic VHTR. The present article presents new results of CFD examinations of these data to explore potential issues with the geometry, the initial conditions, the flow dynamics and the data needed to fully specify the inlet and boundary conditions; results for several turbulence models are examined. Issues are addressed and recommendations about the data are made

  19. The Impact of Goal Setting and Empowerment on Governmental Matrix Organizations

    Science.gov (United States)

    1993-09-01

    shared. In a study of matrix management, Eduardo Vasconcellos further describes various matrix structures in the Galbraith model. In a functional ... Technology/LAR, Wright-Patterson AFB OH, 1992. Vasconcellos, Eduardo. "A Model For a Better Understanding of the Matrix Structure," IEEE Transactions on ... project matrix, the project manager maintains more influence and the structure lies to the right of center (Vasconcellos, 1979:58). Different Types of

  20. The development and validation of the Closed-set Mandarin Sentence (CMS) test.

    Science.gov (United States)

    Tao, Duo-Duo; Fu, Qian-Jie; Galvin, John J; Yu, Ya-Feng

    2017-09-01

    Matrix-styled sentence tests offer a closed-set paradigm that may be useful when evaluating speech intelligibility. Ideally, sentence test materials should reflect the distribution of phonemes within the target language. We developed and validated the Closed-set Mandarin Sentence (CMS) test to assess Mandarin speech intelligibility in noise. CMS test materials were selected to be familiar words and to represent the natural distribution of vowels, consonants, and lexical tones found in Mandarin Chinese. Ten key words in each of five categories (Name, Verb, Number, Color, and Fruit) were produced by a native Mandarin talker, resulting in a total of 50 words that could be combined to produce 100,000 unique sentences. Normative data were collected in 10 normal-hearing, adult Mandarin-speaking Chinese listeners using a closed-set test paradigm. Two test runs were conducted for each subject, and 20 sentences per run were randomly generated while ensuring that each word was presented only twice in each run. First, the levels of the words in each category were adjusted to produce equal intelligibility in noise. Test-retest reliability for word-in-sentence recognition was excellent according to Cronbach's alpha (0.952). After the category level adjustments, speech reception thresholds (SRTs) for sentences in noise, defined as the signal-to-noise ratio (SNR) that produced 50% correct whole sentence recognition, were adaptively measured by adjusting the SNR according to the correctness of response. The mean SRT was -7.9 (SE=0.41) and -8.1 (SE=0.34) dB for runs 1 and 2, respectively. The mean standard deviation across runs was 0.93 dB, and paired t-tests showed no significant difference between runs 1 and 2 (p=0.74) despite random sentences being generated for each run and each subject. The results suggest that the CMS provides a large stimulus set with which to repeatedly and reliably measure Mandarin-speaking listeners' speech understanding in noise using a closed-set paradigm.
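    The adaptive SRT procedure described (SNR adjusted according to the correctness of each response, targeting 50% correct) can be sketched as a simple 1-down/1-up staircase. The listener model, step size and stopping rule below are hypothetical illustrations, not the CMS implementation:

    ```python
    import math
    import random

    def measure_srt(p_correct_at, start_snr=0.0, step=2.0, trials=30, seed=1):
        """1-down/1-up adaptive staircase: lower the SNR after a correct
        response, raise it after an error; the track oscillates around the
        ~50%-correct point. The SRT is estimated here as the mean SNR over
        the last 10 trials."""
        rng = random.Random(seed)
        snr, track = start_snr, []
        for _ in range(trials):
            track.append(snr)
            correct = rng.random() < p_correct_at(snr)
            snr += -step if correct else step
        return sum(track[-10:]) / 10

    # Hypothetical listener: logistic psychometric function with its
    # 50%-correct point (true SRT) at -8 dB SNR.
    listener = lambda snr: 1 / (1 + math.exp(-(snr + 8.0)))
    est = measure_srt(listener)
    print(est)  # should land near the assumed -8 dB
    ```

    Real implementations typically average SNRs at staircase reversals and use larger initial steps; the fixed step and trial count here keep the sketch short.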

  1. Assessing the validity of commercial and municipal food environment data sets in Vancouver, Canada.

    Science.gov (United States)

    Daepp, Madeleine Ig; Black, Jennifer

    2017-10-01

    The present study assessed systematic bias and the effects of data set error on the validity of food environment measures in two municipal and two commercial secondary data sets. Sensitivity, positive predictive value (PPV) and concordance were calculated by comparing two municipal and two commercial secondary data sets with ground-truthed data collected within 800 m buffers surrounding twenty-six schools. Logistic regression examined associations of sensitivity and PPV with commercial density and neighbourhood socio-economic deprivation. Kendall's τ estimated correlations between density and proximity of food outlets near schools constructed with secondary data sets v. ground-truthed data. Setting: Vancouver, Canada. Participants: Food retailers located within 800 m of twenty-six schools. Results: All data sets scored relatively poorly across validity measures, although, overall, municipal data sets had higher levels of validity than did commercial data sets. Food outlets were more likely to be missing from municipal health inspection lists and commercial data sets in neighbourhoods with higher commercial density. Still, both proximity and density measures constructed from all secondary data sets were highly correlated (Kendall's τ>0·70) with measures constructed from ground-truthed data. Despite relatively low levels of validity in all secondary data sets examined, food environment measures constructed from secondary data sets remained highly correlated with ground-truthed data. Findings suggest that secondary data sets can be used to measure the food environment, although estimates should be treated with caution in areas with high commercial density.

  2. The Metadistrict as the Territorial Strategy: From Set Theory and a Matrix Organization Model Hypothesis

    Directory of Open Access Journals (Sweden)

    Francesco Contò

    2012-06-01

    The purpose of this proposal is to explore a new concept of 'Metadistrict' to be applied in a region of Southern Italy, Apulia, in order to analyze the impact that the activation of a special network between different sector chains and several integrated projects may have in revitalizing the local economy; an important role is assigned to the network of relationships and thus to social capital. The Metadistrict model stems from the Local Action Groups and the Integrated Projects of Food Chain frameworks. It may represent a crucial driver of the rural economy through the realization of sector circuits connected to the concept of multi-functionality in agriculture, that is, a Network of Territorial Multi-functionality. It was formalized by making use of set theory and a Matrix Organization Model. The adoption of the Metadistrict perspective as the territorial strategy may play a key role in revitalizing the primary sector by increasing economic and productive opportunities through the implementation of a common and shared strategy and organization.

  3. Good validity of the international spinal cord injury quality of life basic data set

    DEFF Research Database (Denmark)

    Post, M W M; Adriaansen, J J E; Charlifue, S

    2016-01-01

    STUDY DESIGN: Cross-sectional validation study. OBJECTIVES: To examine the construct and concurrent validity of the International Spinal Cord Injury (SCI) Quality of Life (QoL) Basic Data Set. SETTING: Dutch community. PARTICIPANTS: People 28-65 years of age, who obtained their SCI between 18...... and 35 years of age, were at least 10 years post SCI and were wheelchair users in daily life.Measure(s):The International SCI QoL Basic Data Set consists of three single items on satisfaction with life as a whole, physical health and psychological health (0=complete dissatisfaction; 10=complete...... and psychological health (0.70). CONCLUSIONS: This first validity study of the International SCI QoL Basic Data Set shows that it appears valid for persons with SCI....

  4. Kohn-Sham potentials from electron densities using a matrix representation within finite atomic orbital basis sets

    Science.gov (United States)

    Zhang, Xing; Carter, Emily A.

    2018-01-01

    We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.

  5. Moving faces, looking places: validation of the Amsterdam Dynamic Facial Expression Set (ADFES)

    NARCIS (Netherlands)

    van der Schalk, J.; Hawk, S.T.; Fischer, A.H.; Doosje, B.

    2011-01-01

    We report two studies validating a new standardized set of filmed emotion expressions, the Amsterdam Dynamic Facial Expression Set (ADFES). The ADFES is distinct from existing datasets in that it includes a face-forward version and two different head-turning versions (faces turning toward and away

  6. All the mathematics in the world: logical validity and classical set theory

    Directory of Open Access Journals (Sweden)

    David Charles McCarty

    2017-12-01

    A recognizable topological model construction shows that any consistent principles of classical set theory, including the validity of the law of the excluded third, together with a standard class theory, do not suffice to demonstrate the general validity of the law of the excluded third. This result calls into question the classical mathematician's ability to offer solid justifications for the logical principles he or she favors.

  7. Evaluation of the confusion matrix method in the validation of an automated system for measuring feeding behaviour of cattle.

    Science.gov (United States)

    Ruuska, Salla; Hämäläinen, Wilhelmiina; Kajava, Sari; Mughal, Mikaela; Matilainen, Pekka; Mononen, Jaakko

    2018-03-01

    The aim of the present study was to empirically evaluate confusion matrices in device validation. We compared the confusion matrix method to linear regression and to error indices in the validation of a device measuring the feeding behaviour of dairy cattle. In addition, we studied how to extract additional information on classification errors using confusion probabilities. The data consisted of 12 h of behaviour measurements from five dairy cows; feeding and other behaviour were detected simultaneously with the device and from video recordings. The resulting 216 000 pairs of classifications were used to construct confusion matrices and calculate performance measures. In addition, hourly durations of each behaviour were calculated and the accuracy of the measurements was evaluated with linear regression and error indices. All three validation methods agreed when the behaviour was detected very accurately or very inaccurately. Otherwise, in the intermediate cases, the confusion matrix method and error indices produced relatively concordant results, but the linear regression method often disagreed with them. Our study supports the use of confusion matrix analysis in validation since it is robust to any data distribution and type of relationship, it makes a stringent evaluation of validity, and it offers extra information on the type and sources of errors. Copyright © 2018 Elsevier B.V. All rights reserved.
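    A minimal sketch of the device-vs-video comparison underlying the confusion matrix method, with a handful of invented paired classifications standing in for the 216 000 real ones:

    ```python
    import numpy as np

    # Paired per-second classifications: device vs video "gold standard".
    # 1 = feeding, 0 = other behaviour (toy data for illustration).
    video  = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
    device = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 0])

    tp = int(np.sum((device == 1) & (video == 1)))  # feeding detected correctly
    fp = int(np.sum((device == 1) & (video == 0)))  # false feeding detections
    fn = int(np.sum((device == 0) & (video == 1)))  # missed feeding
    tn = int(np.sum((device == 0) & (video == 0)))  # "other" correctly detected

    sensitivity = tp / (tp + fn)      # true feeding that the device catches
    specificity = tn / (tn + fp)      # true "other" the device leaves alone
    ppv         = tp / (tp + fp)      # device says feeding -> how often right
    accuracy    = (tp + tn) / len(video)
    print(sensitivity, specificity, ppv, accuracy)
    ```

    Confusion probabilities, as used in the study, would normalise each row of the 2×2 table to show how each true behaviour is distributed over the device's classifications.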

  8. Microscopically based energy density functionals for nuclei using the density matrix expansion. II. Full optimization and validation

    Science.gov (United States)

    Navarro Pérez, R.; Schunck, N.; Dyhdalo, A.; Furnstahl, R. J.; Bogner, S. K.

    2018-05-01

    Background: Energy density functional methods provide a generic framework to compute properties of atomic nuclei starting from models of nuclear potentials and the rules of quantum mechanics. Until now, the overwhelming majority of functionals have been constructed either from empirical nuclear potentials such as the Skyrme or Gogny forces, or from systematic gradient-like expansions in the spirit of the density functional theory for atoms. Purpose: We seek to obtain a usable form of the nuclear energy density functional that is rooted in the modern theory of nuclear forces. We thus consider a functional obtained from the density matrix expansion of local nuclear potentials from chiral effective field theory. We propose a parametrization of this functional carefully calibrated and validated on selected ground-state properties that is suitable for large-scale calculations of nuclear properties. Methods: Our energy functional comprises two main components. The first component is a non-local functional of the density and corresponds to the direct part (Hartree term) of the expectation value of local chiral potentials on a Slater determinant. Contributions to the mean field and the energy of this term are computed by expanding the spatial, finite-range components of the chiral potential onto Gaussian functions. The second component is a local functional of the density and is obtained by applying the density matrix expansion to the exchange part (Fock term) of the expectation value of the local chiral potential. We apply the UNEDF2 optimization protocol to determine the coupling constants of this energy functional. Results: We obtain a set of microscopically constrained functionals for local chiral potentials from leading order up to next-to-next-to-leading order with and without three-body forces and contributions from Δ excitations. These functionals are validated on the calculation of nuclear and neutron matter, nuclear mass tables, single-particle shell structure

  9. Construct Validity and Reliability of Structured Assessment of endoVascular Expertise in a Simulated Setting

    DEFF Research Database (Denmark)

    Bech, B; Lönn, L; Falkenberg, M

    2011-01-01

    Objectives: To study the construct validity and reliability of a novel endovascular global rating scale, Structured Assessment of endoVascular Expertise (SAVE). Design: A clinical, experimental study. Materials: Twenty physicians with endovascular experience ranging from complete novices to highly.... Validity was analysed by correlating experience with performance results. Reliability was analysed according to generalisability theory. Results: The mean score on the 29 items of the SAVE scale correlated well with clinical experience (R = 0.84, P ... with clinical experience (R = -0.53, P validity and reliability of assessment with the SAVE scale was high when applied to performances in a simulation setting with advanced realism. No ceiling effect

  10. Introducing Matrix Management within a Children's Services Setting--Personal Reflections

    Science.gov (United States)

    Brooks, Michael; Kakabadse, Nada K.

    2014-01-01

    This article reflects on the introduction of "matrix management" arrangements for an Educational Psychology Service (EPS) within a Children's Service Directorate of a Local Authority (LA). It seeks to demonstrate critical self-awareness, consider relevant literature with a view to bringing insights to processes and outcomes, and offers…

  11. Development and validation of an Argentine set of facial expressions of emotion.

    Science.gov (United States)

    Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro

    2017-02-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.

  12. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    Science.gov (United States)

    Sattarivand, Mike; Kusano, Maggie; Poon, Ian; Caldwell, Curtis

    2012-11-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or
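    The GTM framework described here reduces, in its simplest region-based form, to solving a small linear system: observed regional means are the true activities mixed through spill-over fractions. A toy two-region sketch with invented numbers (the sGTM variant differs in computing spill-over between RSFs rather than between regions):

    ```python
    import numpy as np

    # GTM partial volume correction: observed regional means t are a mixture
    # of true regional activities c through the spill-over matrix, t = G c,
    # where G[i, j] is the fraction of region j's regional spread function
    # (RSF) that falls within region i.
    G = np.array([[0.80, 0.15],
                  [0.10, 0.85]])        # toy 2-region transfer matrix
    c_true = np.array([10.0, 2.0])      # true activities (arbitrary units)
    t_obs = G @ c_true                  # PSF-blurred regional measurements

    c_recovered = np.linalg.solve(G, t_obs)
    print(c_recovered)                  # recovers the true activities
    ```

    In practice G is computed by convolving each anatomical region mask with the scanner PSF, and the robustness questions studied in this paper concern errors in exactly those PSF and registration inputs.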

  13. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    International Nuclear Information System (INIS)

    Sattarivand, Mike; Caldwell, Curtis; Kusano, Maggie; Poon, Ian

    2012-01-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or

  14. In-vessel core degradation code validation matrix update 1996-1999. Report by an OECD/NEA group of experts

    International Nuclear Information System (INIS)

    2001-02-01

    In 1991 the Committee on the Safety of Nuclear Installations (CSNI) issued a State-of-the-Art Report (SOAR) on In-Vessel Core Degradation in Light Water Reactor (LWR) Severe Accidents. Based on the recommendations of this report a Validation Matrix for severe accident modelling codes was produced. Experiments performed up to the end of 1993 were considered for this validation matrix. To include recent experiments and to enlarge the scope, an update was formally inaugurated in January 1999 by the Task Group on Degraded Core Cooling, a sub-group of Principal Working Group 2 (PWG-2) on Coolant System Behaviour, and a selection of writing group members was commissioned. The present report documents the results of this study. The objective of the Validation Matrix is to define a basic set of experiments, for which comparison of the measured and calculated parameters forms a basis for establishing the accuracy of test predictions, covering the full range of in-vessel core degradation phenomena expected in light water reactor severe accident transients. The emphasis is on integral experiments, where interactions amongst key phenomena as well as the phenomena themselves are explored; however separate-effects experiments are also considered especially where these extend the parameter ranges to cover those expected in postulated LWR severe accident transients. As well as covering PWR and BWR designs of Western origin, the scope of the review has been extended to Eastern European (VVER) types. Similarly, the coverage of phenomena has been extended, starting as before from the initial heat-up but now proceeding through the in-core stage to include introduction of melt into the lower plenum and further to core coolability and retention to the lower plenum, with possible external cooling. Items of a purely thermal hydraulic nature involving no core degradation are excluded, having been covered in other validation matrix studies. 
Concerning fission product behaviour, the effect

  15. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    Science.gov (United States)

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development-The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions-angry, fearful, sad, happy, surprised, and disgusted-and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  16. The Child Affective Facial Expression (CAFE) Set: Validity and Reliability from Untrained Adults

    Directory of Open Access Journals (Sweden)

    Vanessa LoBue

    2015-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for 6 emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  17. Analysis and classification of data sets for calibration and validation of agro-ecosystem models

    DEFF Research Database (Denmark)

    Kersebaum, K C; Boote, K J; Jorgenson, J S

    2015-01-01

    Experimental field data are used at different levels of complexity to calibrate, validate and improve agro-ecosystem models to enhance their reliability for regional impact assessment. A methodological framework and software are presented to evaluate and classify data sets into four classes regar...

  18. Review and evaluation of performance measures for survival prediction models in external validation settings

    Directory of Open Access Journals (Sweden)

    M. Shafiqur Rahman

    2017-04-01

    Background: When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. Methods: An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Results: Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell’s concordance measure, which tended to increase as censoring increased. Conclusions: We recommend that Uno’s concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller’s measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston’s D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and recommended to report routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive
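As a concrete illustration of the concordance measures weighed above, here is a minimal sketch of Harrell's concordance for right-censored data. The function name and the simple pairwise loop are illustrative assumptions, not the estimator implementations evaluated in the paper, and ties in survival time are simply skipped.

```python
def harrell_c(times, events, risks):
    """Harrell's C: among comparable pairs (the earlier observed time is
    an event), the fraction where the earlier-failing subject also has
    the higher predicted risk; risk ties count one half."""
    concordant = tied = comparable = 0
    for i in range(len(times)):
        for j in range(len(times)):
            # a pair is comparable only if i fails strictly before j is observed
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable
```

Under heavy censoring fewer pairs remain comparable, which is one route to the upward drift in Harrell's C that the abstract reports.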

  19. Development and characterization of a snapshot Mueller matrix polarimeter for the determination of cervical cancer risk in the low resource setting

    Science.gov (United States)

    Ramella-Roman, Jessica C.; Gonzalez, Mariacarla; Chue-Sang, Joseph; Montejo, Karla; Krup, Karl; Srinivas, Vijaya; DeHoog, Edward; Madhivanan, Purnima

    2018-04-01

    Mueller Matrix polarimetry can provide useful information about the function and structure of the extracellular matrix. Mueller Matrix systems are sophisticated and costly optical tools that have been used primarily in the laboratory or in hospital settings. Here we introduce a low-cost snapshot Mueller Matrix polarimeter that does not require external power, has no moving parts, and can acquire a full Mueller Matrix in less than 50 milliseconds. We utilized this technology in the study of cervical cancer in Mysore, India, yet the system could be translated to multiple diagnostic applications.

  20. The Outcome and Assessment Information Set (OASIS): A Review of Validity and Reliability

    Science.gov (United States)

    O'Connor, Melissa; Davitt, Joan K.

    2015-01-01

    The Outcome and Assessment Information Set (OASIS) is the patient-specific, standardized assessment used in Medicare home health care to plan care, determine reimbursement, and measure quality. Since its inception in 1999, there has been debate over the reliability and validity of the OASIS as a research tool and outcome measure. A systematic literature review of English-language articles identified 12 studies published in the last 10 years examining the validity and reliability of the OASIS. Empirical findings indicate the validity and reliability of the OASIS range from low to moderate but vary depending on the item studied. Limitations in the existing research include: nonrepresentative samples; inconsistencies in methods used, items tested, measurement, and statistical procedures; and the changes to the OASIS itself over time. The inconsistencies suggest that these results are tentative at best; additional research is needed to confirm the value of the OASIS for measuring patient outcomes, research, and quality improvement. PMID:23216513

  1. A multiple criteria decision making for ranking alternatives using preference relation matrix based on intuitionistic fuzzy sets

    Directory of Open Access Journals (Sweden)

    Mehdi Bahramloo

    2013-10-01

    Ranking various alternatives has been under investigation and there are various methods and techniques for making a decision based on multiple criteria. One of the primary concerns with ranking methodologies such as the analytical hierarchy process (AHP) is that decision makers cannot express their feelings in crisp form. Therefore, we need to use linguistic terms to obtain the relative weights for comparing various alternatives. In this paper, we discuss ranking different alternatives based on the implementation of a preference relation matrix based on intuitionistic fuzzy sets.

  2. System and method for the adaptive mapping of matrix data to sets of polygons

    Science.gov (United States)

    Burdon, David (Inventor)

    2003-01-01

    A system and method for converting bitmapped data, for example, weather data or thermal imaging data, to polygons is disclosed. The conversion of the data into polygons creates smaller data files. The invention is adaptive in that it allows for a variable degree of fidelity of the polygons. Matrix data is obtained. A color value is obtained. The color value is a variable used in the creation of the polygons. A list of cells to check is determined based on the color value. The list of cells to check is examined in order to determine a boundary list. The boundary list is then examined to determine vertices. The determination of the vertices is based on a prescribed maximum distance. When drawn, the ordered list of vertices creates polygons that depict the cell data. The data files which include the vertices for the polygons are much smaller than the corresponding cell data files. The fidelity of the polygon representation can be adjusted by repeating the logic with varying fidelity values to achieve a given maximum file size or a maximum number of vertices per polygon.
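The vertex-determination step described above, dropping boundary points that lie within a prescribed maximum distance of the simplified outline, can be sketched with a Ramer-Douglas-Peucker-style recursion. This is an illustrative reading of the patent abstract, not its actual implementation; the function names are hypothetical.

```python
import math

def _point_line_dist(p, a, b):
    # perpendicular distance from point p to the infinite line through a-b
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def simplify(boundary, max_dist):
    """Reduce a boundary point list to polygon vertices, dropping points
    that lie within max_dist of the chord between retained endpoints."""
    if len(boundary) < 3:
        return list(boundary)
    a, b = boundary[0], boundary[-1]
    # find the boundary point farthest from the chord a-b
    idx, dmax = 0, 0.0
    for i in range(1, len(boundary) - 1):
        d = _point_line_dist(boundary[i], a, b)
        if d > dmax:
            idx, dmax = i, d
    if dmax <= max_dist:
        return [a, b]  # all intermediate points are close enough to drop
    left = simplify(boundary[:idx + 1], max_dist)
    right = simplify(boundary[idx:], max_dist)
    return left[:-1] + right
```

Raising `max_dist` yields fewer vertices and smaller files, mirroring the patent's trade-off between fidelity and file size.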

  3. Validation of a global scale to assess the quality of interprofessional teamwork in mental health settings.

    Science.gov (United States)

    Tomizawa, Ryoko; Yamano, Mayumi; Osako, Mitue; Hirabayashi, Naotugu; Oshima, Nobuo; Sigeta, Masahiro; Reeves, Scott

    2017-12-01

    Few scales currently exist to assess the quality of interprofessional teamwork through team members' perceptions of working together in mental health settings. The purpose of this study was to revise and validate an interprofessional scale to assess the quality of teamwork in inpatient psychiatric units and to use it multi-nationally. A literature review was undertaken to identify evaluative teamwork tools and develop an additional 12 items to ensure a broad global focus. Focus group discussions considered adaptation to different care systems using subjective judgements from 11 participants in a pre-test of items. Data quality, construct validity, reproducibility, and internal consistency were investigated in the survey using an international comparative design. Exploratory factor analysis yielded five factors with 21 items: 'patient/community centred care', 'collaborative communication', 'interprofessional conflict', 'role clarification', and 'environment'. High overall internal consistency, reproducibility, adequate face validity, and reasonable construct validity were shown in the USA and Japan. The revised Collaborative Practice Assessment Tool (CPAT) is a valid measure to assess the quality of interprofessional teamwork in psychiatry and identifies the best strategies to improve team performance. Furthermore, the revised scale will generate more rigorous evidence for collaborative practice in psychiatry internationally.

  4. Validation of the Essentials of Magnetism II in Chinese critical care settings.

    Science.gov (United States)

    Bai, Jinbing; Hsu, Lily; Zhang, Qing

    2015-05-01

    To translate and evaluate the psychometric properties of the Essentials of Magnetism II tool (EOM II) for Chinese nurses in critical care settings. The EOM II is a reliable and valid scale for measuring the healthy work environment (HWE) for nurses in Western countries; however, it has not been validated among Chinese nurses. The translation of the EOM II followed internationally recognized guidelines. The Chinese version of the Essentials of Magnetism II tool (C-EOM II) was reviewed by an expert panel for culturally semantic equivalence and content validity. Then, 706 nurses from 28 intensive care units (ICUs) affiliated with 14 tertiary hospitals participated in this study. The reliability of the C-EOM II was assessed using the Cronbach's alpha coefficient; the content validity of this scale was assessed using the content validity index (CVI); and the construct validity was assessed using confirmatory factor analysis (CFA). The C-EOM II showed excellent content validity with a CVI of 0.92. All the subscales of the C-EOM II were significantly correlated with overall nurse job satisfaction and nurse-assessed quality of care. The CFA showed that the C-EOM II was composed of 45 items with nine factors, accounting for 46.51% of the total variance. Cronbach's alpha coefficients for these factors ranged from 0.56 to 0.89. The C-EOM II is a promising scale to assess the HWE for Chinese ICU nurses. Nursing administrators and health care policy-makers can use the C-EOM II to evaluate the clinical work environment so that a healthier work environment can be created and sustained for staff nurses. © 2013 British Association of Critical Care Nurses.

  5. Structural exploration for the refinement of anticancer matrix metalloproteinase-2 inhibitor designing approaches through robust validated multi-QSARs

    Science.gov (United States)

    Adhikari, Nilanjan; Amin, Sk. Abdul; Saha, Achintya; Jha, Tarun

    2018-03-01

    Matrix metalloproteinase-2 (MMP-2) is a promising pharmacological target for designing potential anticancer drugs. MMP-2 plays critical functions in apoptosis by cleaving the DNA repair enzyme poly (ADP-ribose) polymerase (PARP). Moreover, MMP-2 expression triggers the vascular endothelial growth factor (VEGF), which has a positive influence on tumor size, invasion, and angiogenesis. Therefore, there is an urgent need to develop potential MMP-2 inhibitors without toxicity but with better pharmacokinetic properties. In this article, robust validated multi-quantitative structure-activity relationship (QSAR) modeling approaches were applied to a dataset of 222 MMP-2 inhibitors to explore the important structural and pharmacophoric requirements for higher MMP-2 inhibition. Different validated regression and classification-based QSARs, pharmacophore mapping and 3D-QSAR techniques were performed. These results were challenged and subjected to further validation to explain 24 in-house MMP-2 inhibitors and to judge the reliability of these models further. All these models were individually validated internally as well as externally and were supported and validated by each other. These results were further justified by molecular docking analysis. The modeling techniques adopted here help not only to explore the necessary structural and pharmacophoric requirements but also to support the overall validation and refinement process for designing potential MMP-2 inhibitors.

  6. Validation of the Comprehensive ICF Core Set for obstructive pulmonary diseases from the perspective of physiotherapists.

    Science.gov (United States)

    Rauch, Alexandra; Kirchberger, Inge; Stucki, Gerold; Cieza, Alarcos

    2009-12-01

    The 'Comprehensive ICF Core Set for obstructive pulmonary diseases' (OPD) is an application of the International Classification of Functioning, Disability and Health (ICF) and represents the typical spectrum of problems in functioning of patients with OPD. To optimize a multidisciplinary and patient-oriented approach in pulmonary rehabilitation, in which physiotherapy plays an important role, the ICF offers a standardized language and understanding of functioning. For it to be a useful tool for physiotherapists in rehabilitation of patients with OPD, the objective of this study was to validate this Comprehensive ICF Core Set for OPD from the perspective of physiotherapists. A three-round Delphi survey of physiotherapists experienced in the treatment of OPD asked about the problems, resources, and environmental aspects of patients with OPD that physiotherapists address in clinical practice (physiotherapy intervention categories). Responses were linked to the ICF and compared with the existing Comprehensive ICF Core Set for OPD. Fifty-one physiotherapists from 18 countries named 904 single terms that were linked to 124 ICF categories, 9 personal factors and 16 'not classified' concepts. The identified ICF categories were mainly third-level categories compared with mainly second-level categories of the Comprehensive ICF Core Set for OPD. Seventy of the ICF categories, all personal factors and 15 'not classified' concepts gained more than 75% agreement among the physiotherapists. Of these ICF categories, 55 (78.5%) were covered by the Comprehensive ICF Core Set for OPD. The validity of the Comprehensive ICF Core Set for OPD was largely supported by the physiotherapists. Nevertheless, ICF categories that were not covered, personal factors and not classified terms offer opportunities towards the final ICF Core Set for OPD and further research to strengthen physiotherapists' perspective in pulmonary rehabilitation.

  7. An Ethical Issue Scale for Community Pharmacy Setting (EISP): Development and Validation.

    Science.gov (United States)

    Crnjanski, Tatjana; Krajnovic, Dusanka; Tadic, Ivana; Stojkov, Svetlana; Savic, Mirko

    2016-04-01

    Many problems that arise when providing pharmacy services may contain some ethical components, and the aims of this study were to develop and validate a scale that could assess the difficulty of ethical issues, as well as the frequency of those occurrences in the everyday practice of community pharmacists. Development and validation of the scale was conducted in three phases: (1) generating items for the initial survey instrument after qualitative analysis; (2) defining the design and format of the instrument; (3) validation of the instrument. The constructed Ethical Issue Scale for community pharmacy setting has two parts containing the same 16 items, for assessing the difficulty and frequency thereof. The results of the 171 completely filled out scales were analyzed (response rate 74.89%). The Cronbach's α value of the part of the instrument that examines the difficulty of the ethical situations was 0.83, and for the part of the instrument that examined the frequency of the ethical situations it was 0.84. Test-retest reliability for both parts of the instrument was satisfactory, with all intraclass correlation coefficient (ICC) values above 0.6 (for the part that examines difficulty, ICC = 0.809; for the part that examines frequency, ICC = 0.929). The 16-item scale, as a self-assessment tool, demonstrated a high degree of content, criterion, and construct validity and test-retest reliability. The results support its use as a research tool to assess the difficulty and frequency of ethical issues in the community pharmacy setting. The validated scale needs to be further employed on a larger sample of pharmacists.
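For reference, the internal-consistency statistic reported above (Cronbach's α) can be computed directly from raw item scores. This is a generic self-contained sketch, not the authors' analysis code; `items` holds one list of respondent scores per scale item.

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of respondents' total scores), using sample (n-1) variances."""
    k = len(items)      # number of items
    n = len(items[0])   # number of respondents
    def var(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    # each respondent's total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))
```

Perfectly correlated items yield α = 1, while unrelated items push α toward 0, which is why values such as the 0.83-0.84 reported above indicate good internal consistency.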

  8. The development and validation of an interprofessional scale to assess teamwork in mental health settings.

    Science.gov (United States)

    Tomizawa, Ryoko; Yamano, Mayumi; Osako, Mitue; Misawa, Takeshi; Hirabayashi, Naotugu; Oshima, Nobuo; Sigeta, Masahiro; Reeves, Scott

    2014-09-01

    Currently, no evaluative scale exists to assess the quality of interprofessional teamwork in mental health settings across the globe. As a result, little is known about the detailed process of team development within this setting. The purpose of this study is to develop and validate a global interprofessional scale that assesses teamwork in mental health settings using an international comparative study based in Japan and the United States. This report provides a description of this study and reports progress made to date. Specifically, it outlines work on literature reviews to identify evaluative teamwork tools as well as identify relevant teamwork models and theories. It also outlines plans for empirical work that will be undertaken in both Japan and the United States.

  9. The Set of Fear Inducing Pictures (SFIP): Development and validation in fearful and nonfearful individuals.

    Science.gov (United States)

    Michałowski, Jarosław M; Droździel, Dawid; Matuszewski, Jacek; Koziejowski, Wojtek; Jednoróg, Katarzyna; Marchewka, Artur

    2017-08-01

    Emotionally charged pictorial materials are frequently used in phobia research, but no existing standardized picture database is dedicated to the study of different phobias. The present work describes the results of two independent studies through which we sought to develop and validate this type of database: a Set of Fear Inducing Pictures (SFIP). In Study 1, 270 fear-relevant and 130 neutral stimuli were rated for fear, arousal, and valence by four groups of participants: small-animal (N = 34), blood/injection (N = 26), social-fearful (N = 35), and nonfearful participants (N = 22). The results from Study 1 were employed to develop the final version of the SFIP, which includes fear-relevant images of social exposure (N = 40), blood/injection (N = 80), spiders/bugs (N = 80), and angry faces (N = 30), as well as 726 neutral photographs. In Study 2, we aimed to validate the SFIP in a sample of spider, blood/injection, social-fearful, and control individuals (N = 66). The fear-relevant images were rated as being more unpleasant and led to greater fear and arousal in fearful than in nonfearful individuals. The fear images differentiated between the three fear groups in the expected directions. Overall, the present findings provide evidence for the high validity of the SFIP and confirm that the set may be successfully used in phobia research.

  10. AOAC Official Method℠ Matrix Extension Validation Study of Assurance GDS™ for the Detection of Salmonella in Selected Spices.

    Science.gov (United States)

    Feldsine, Philip; Kaur, Mandeep; Shah, Khyati; Immerman, Amy; Jucker, Markus; Lienau, Andrew

    2015-01-01

    Assurance GDS™ for Salmonella Tq has been validated according to the AOAC INTERNATIONAL Methods Committee Guidelines for Validation of Microbiological Methods for Food and Environmental Surfaces for the detection of selected foods and environmental surfaces (Official Method of Analysis℠ 2009.03, Performance Tested Method℠ No. 050602). The method also completed AFNOR validation (following the ISO 16140 standard) compared to the reference method EN ISO 6579. For AFNOR, GDS was given a scope covering all human food, animal feed stuff, and environmental surfaces (Certificate No. TRA02/12-01/09). Results showed that Assurance GDS for Salmonella (GDS) has high sensitivity and is equivalent to the reference culture methods for the detection of motile and non-motile Salmonella. As part of the aforementioned validations, inclusivity and exclusivity studies, stability, and ruggedness studies were also conducted. Assurance GDS has 100% inclusivity and exclusivity among the 100 Salmonella serovars and 35 non-Salmonella organisms analyzed. To add to the scope of the Assurance GDS for Salmonella method, a matrix extension study was conducted, following the AOAC guidelines, to validate the application of the method for selected spices, specifically curry powder, cumin powder, and chili powder, for the detection of Salmonella.

  11. Community Priority Index: utility, applicability and validation for priority setting in community-based participatory research

    Directory of Open Access Journals (Sweden)

    Hamisu M. Salihu

    2015-07-01

    Background. Providing practitioners with an intuitive measure for priority setting that can be combined with diverse data collection methods is a necessary step to foster accountability of the decision-making process in community settings. Yet, there is a lack of easy-to-use but methodologically robust measures that can be feasibly implemented for reliable decision-making in community settings. To address this important gap in community-based participatory research (CBPR), the purpose of this study was to demonstrate the utility, applicability, and validation of a community priority index in a community-based participatory research setting. Design and Methods. A mixed-method study combined focus group findings, a nominal group technique with six key informants, and the generation of a Community Priority Index (CPI) that integrated community importance, changeability, and target populations. Bootstrapping and simulation were performed for validation. Results. For pregnant mothers, the top three highly important and highly changeable priorities were: stress (CPI=0.85; 95%CI: 0.70, 1.00), lack of affection (CPI=0.87; 95%CI: 0.69, 1.00), and nutritional issues (CPI=0.78; 95%CI: 0.48, 1.00). For non-pregnant women, top priorities were: low health literacy (CPI=0.87; 95%CI: 0.69, 1.00), low educational attainment (CPI=0.78; 95%CI: 0.48, 1.00), and lack of self-esteem (CPI=0.72; 95%CI: 0.44, 1.00). For children and adolescents, the top three priorities were: obesity (CPI=0.88; 95%CI: 0.69, 1.00), low self-esteem (CPI=0.81; 95%CI: 0.69, 0.94), and negative attitudes toward education (CPI=0.75; 95%CI: 0.50, 0.94). Conclusions. This study demonstrates the applicability of the CPI as a simple and intuitive measure for priority setting in CBPR.
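The bootstrapping used above to attach 95% confidence intervals to each CPI can be sketched as a percentile bootstrap over rater-level scores. The scoring scheme assumed here, each rater contributing one 0-1 priority score per item, is a hypothetical simplification; the paper's exact CPI construction is not reproduced in the abstract.

```python
import random

def bootstrap_ci(scores, n_boot=2000, alpha=0.05, seed=1):
    """Point estimate (mean of rater scores) with a percentile
    bootstrap (1 - alpha) confidence interval."""
    rng = random.Random(seed)
    n = len(scores)
    # resample raters with replacement and record each resample's mean
    means = sorted(
        sum(rng.choice(scores) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return sum(scores) / n, lo, hi
```

With only a handful of informants, as in the nominal group above, the intervals are necessarily wide, which matches the broad CIs reported.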

  12. Use of the recognition heuristic depends on the domain's recognition validity, not on the recognition validity of selected sets of objects.

    Science.gov (United States)

    Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E

    2017-07-01

    According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, success, and thus usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity) or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between the domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.

  13. Basic Laparoscopic Skills Assessment Study: Validation and Standard Setting among Canadian Urology Trainees.

    Science.gov (United States)

    Lee, Jason Y; Andonian, Sero; Pace, Kenneth T; Grober, Ethan

    2017-06-01

    As urology training programs move to a competency-based medical education model, iterative assessments with objective standards will be required. To develop a valid set of technical skills standards we initiated a national skills assessment study focusing initially on laparoscopic skills. Between February 2014 and March 2016 the basic laparoscopic skill of Canadian urology trainees and attending urologists was assessed using 4 standardized tasks from the AUA (American Urological Association) BLUS (Basic Laparoscopic Urological Surgery) curriculum, including peg transfer, pattern cutting, suturing and knot tying, and vascular clip applying. All performances were video recorded and assessed using 3 methods, including time and error based scoring, expert global rating scores and C-SATS (Crowd-Sourced Assessments of Technical Skill Global Rating Scale), a novel, crowd sourced assessment platform. Different methods of standard setting were used to develop pass-fail cut points. Six attending urologists and 99 trainees completed testing. Reported laparoscopic experience and training level correlated with performance. Several standard setting methods were used to define pass-fail cut points for all 4 AUA BLUS tasks. The 4 AUA BLUS tasks demonstrated good construct validity evidence for use in assessing basic laparoscopic skill. Performance scores using the novel C-SATS platform correlated well with traditional time-consuming methods of assessment. Various standard setting methods were used to develop pass-fail cut points for educators to use when making formative and summative assessments of basic laparoscopic skill. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  14. Open-Switch Fault Diagnosis and Fault Tolerant for Matrix Converter with Finite Control Set-Model Predictive Control

    DEFF Research Database (Denmark)

    Peng, Tao; Dan, Hanbing; Yang, Jian

    2016-01-01

    To improve the reliability of the matrix converter (MC), a fault diagnosis method to identify single open-switch fault is proposed in this paper. The introduced fault diagnosis method is based on finite control set-model predictive control (FCS-MPC), which employs a time-discrete model of the MC topology and a cost function to select the best switching state for the next sampling period. The proposed fault diagnosis method is realized by monitoring the load currents and judging the switching state to locate the faulty switch. Compared to conventional modulation strategies such as the carrier-based modulation method, indirect space vector modulation and optimum Alesina-Venturini, the FCS-MPC has a known and unchanged switching state in a sampling period. It is simpler to diagnose the exact location of the open switch in the MC with FCS-MPC. To achieve better quality of the output current under single open
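The FCS-MPC selection loop described above, a time-discrete model plus a cost function evaluated over the finite set of switching states, can be sketched as follows. The first-order RL load model, the current-tracking cost, and all names are illustrative assumptions, not the paper's matrix-converter model; fault diagnosis then exploits the fact that the chosen switching state, and hence the predicted current, is known for each sampling period.

```python
def fcs_mpc_step(i_meas, i_ref, candidate_voltages, R, L, Ts):
    """One FCS-MPC step: predict the next-sample load current for each
    admissible switching state (represented here only by its output
    voltage) and pick the state minimizing |i_ref - i_pred|."""
    best_state, best_cost = None, float("inf")
    for state, v in enumerate(candidate_voltages):
        # forward-Euler discretization of the RL load: L di/dt = v - R*i
        i_pred = i_meas + (Ts / L) * (v - R * i_meas)
        cost = abs(i_ref - i_pred)
        if cost < best_cost:
            best_state, best_cost = state, cost
    return best_state, best_cost
```

A persistent gap between the measured current and the prediction for the selected state then flags the switch used by that state as a candidate open-switch fault.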

  15. Evaluation of the validity of job exposure matrix for psychosocial factors at work.

    Directory of Open Access Journals (Sweden)

    Svetlana Solovieva

    To study the performance of a developed job exposure matrix (JEM) for the assessment of psychosocial factors at work in terms of accuracy, possible misclassification bias and predictive ability to detect known associations with depression and low back pain (LBP). We utilized two large population surveys (the Health 2000 Study and the Finnish Work and Health Surveys), one to construct the JEM and another to test matrix performance. In the first study, information on job demands, job control, monotonous work and social support at work was collected via face-to-face interviews. Job strain was operationalized based on job demands and job control using the quadrant approach. In the second study, the sensitivity and specificity were estimated applying a Bayesian approach. The magnitude of misclassification error was examined by calculating the biased odds ratios as a function of the sensitivity and specificity of the JEM and fixed true prevalence and odds ratios. Finally, we adjusted for misclassification error the observed associations between JEM measures and selected health outcomes. The matrix showed good accuracy for job control and job strain, while its performance for other exposures was relatively low. Without correction for exposure misclassification, the JEM was able to detect the association between job strain and depression in men and between monotonous work and LBP in both genders. Our results suggest that the JEM more accurately identifies occupations with low control and high strain than those with high demands or low social support. Overall, the present JEM is a useful source of job-level psychosocial exposures in epidemiological studies lacking individual-level exposure information. Furthermore, we showed the applicability of a Bayesian approach in the evaluation of the performance of the JEM in a situation where, in practice, no gold standard of exposure assessment exists.

  16. Evaluation of the validity of job exposure matrix for psychosocial factors at work.

    Science.gov (United States)

    Solovieva, Svetlana; Pensola, Tiina; Kausto, Johanna; Shiri, Rahman; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira

    2014-01-01

    To study the performance of a developed job exposure matrix (JEM) for the assessment of psychosocial factors at work in terms of accuracy, possible misclassification bias and predictive ability to detect known associations with depression and low back pain (LBP). We utilized two large population surveys (the Health 2000 Study and the Finnish Work and Health Surveys), one to construct the JEM and another to test matrix performance. In the first study, information on job demands, job control, monotonous work and social support at work was collected via face-to-face interviews. Job strain was operationalized based on job demands and job control using quadrant approach. In the second study, the sensitivity and specificity were estimated applying a Bayesian approach. The magnitude of misclassification error was examined by calculating the biased odds ratios as a function of the sensitivity and specificity of the JEM and fixed true prevalence and odds ratios. Finally, we adjusted for misclassification error the observed associations between JEM measures and selected health outcomes. The matrix showed a good accuracy for job control and job strain, while its performance for other exposures was relatively low. Without correction for exposure misclassification, the JEM was able to detect the association between job strain and depression in men and between monotonous work and LBP in both genders. Our results suggest that JEM more accurately identifies occupations with low control and high strain than those with high demands or low social support. Overall, the present JEM is a useful source of job-level psychosocial exposures in epidemiological studies lacking individual-level exposure information. Furthermore, we showed the applicability of a Bayesian approach in the evaluation of the performance of the JEM in a situation where, in practice, no gold standard of exposure assessment exists.
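    The bias analysis described above (biased odds ratios as a function of JEM sensitivity and specificity at fixed true prevalence and true odds ratios) can be sketched in a few lines. This is a minimal illustration of nondifferential exposure misclassification, not the authors' code; the function names and example values are invented for illustration.

```python
def observed_prevalence(p, se, sp):
    """Apparent exposure prevalence after misclassification."""
    return se * p + (1.0 - sp) * (1.0 - p)

def odds(p):
    return p / (1.0 - p)

def biased_odds_ratio(p0, true_or, se, sp):
    """Observed OR given the true exposure prevalence in controls (p0),
    the true OR, and the instrument's sensitivity/specificity, assuming
    nondifferential misclassification (same se/sp in cases and controls)."""
    # true exposure prevalence among cases implied by p0 and the true OR
    o1 = odds(p0) * true_or
    p1 = o1 / (1.0 + o1)
    q1 = observed_prevalence(p1, se, sp)
    q0 = observed_prevalence(p0, se, sp)
    return odds(q1) / odds(q0)
```

    With a perfect instrument (se = sp = 1) the observed OR equals the true OR; any nondifferential error pulls the observed OR toward 1.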

  17. Development and construct validation of the Client-Centredness of Goal Setting (C-COGS) scale.

    Science.gov (United States)

    Doig, Emmah; Prescott, Sarah; Fleming, Jennifer; Cornwell, Petrea; Kuipers, Pim

    2015-07-01

    Client-centred philosophy is integral to occupational therapy practice and client-centred goal planning is considered fundamental to rehabilitation. Evaluation of whether goal-planning practices are client-centred requires an understanding of the client's perspective about goal-planning processes and practices. The Client-Centredness of Goal Setting (C-COGS) was developed for use by practitioners who seek to be more client-centred and who require a scale to guide and evaluate individually orientated practice, especially with adults with cognitive impairment related to acquired brain injury. To describe development of the C-COGS scale and examine its construct validity. The C-COGS was administered to 42 participants with acquired brain injury after multidisciplinary goal planning. C-COGS scores were correlated with the Canadian Occupational Performance Measure (COPM) importance scores, and measures of therapeutic alliance, motivation, and global functioning to establish construct validity. The C-COGS scale has three subscales evaluating goal alignment, goal planning participation, and client-centredness of goals. The C-COGS subscale items demonstrated moderately significant correlations with scales measuring similar constructs. Findings provide preliminary evidence to support the construct validity of the C-COGS scale, which is intended to be used to evaluate and reflect on client-centred goal planning in clinical practice, and to highlight factors contributing to best practice rehabilitation.

  18. Validating hierarchical verbal autopsy expert algorithms in a large data set with known causes of death.

    Science.gov (United States)

    Kalter, Henry D; Perin, Jamie; Black, Robert E

    2016-06-01

    Physician assessment historically has been the most common method of analyzing verbal autopsy (VA) data. Recently, the World Health Organization endorsed two automated methods, Tariff 2.0 and InterVA-4, which promise greater objectivity and lower cost. A disadvantage of the Tariff method is that it requires a training data set from a prior validation study, while InterVA relies on clinically specified conditional probabilities. We undertook to validate the hierarchical expert algorithm analysis of VA data, an automated, intuitive, deterministic method that does not require a training data set. Using Population Health Metrics Research Consortium study hospital source data, we compared the primary causes of 1629 neonatal and 1456 1-59 month-old child deaths from VA expert algorithms arranged in a hierarchy to their reference standard causes. The expert algorithms were held constant, while five prior and one new "compromise" neonatal hierarchy, and three former child hierarchies were tested. For each comparison, the reference standard data were resampled 1000 times within the range of cause-specific mortality fractions (CSMF) for one of three approximated community scenarios in the 2013 WHO global causes of death, plus one random mortality cause proportions scenario. We utilized CSMF accuracy to assess overall population-level validity, and the absolute difference between VA and reference standard CSMFs to examine particular causes. Chance-corrected concordance (CCC) and Cohen's kappa were used to evaluate individual-level cause assignment. Overall CSMF accuracy for the best-performing expert algorithm hierarchy was 0.80 (range 0.57-0.96) for neonatal deaths and 0.76 (0.50-0.97) for child deaths. Performance for particular causes of death varied, with fairly flat estimated CSMF over a range of reference values for several causes. Performance at the individual diagnosis level was also less favorable than that for overall CSMF (neonatal: best CCC = 0.23, range 0
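    The two metrics above, CSMF accuracy at the population level and chance-corrected concordance (CCC) at the individual level, have simple closed forms. A minimal sketch; the study computes CCC per cause and resamples CSMFs, while this version shows only the bare formulas:

```python
def csmf_accuracy(true_csmf, pred_csmf):
    """Population-level accuracy of cause-specific mortality fractions:
    1 = perfect, 0 = worst possible given the true distribution."""
    abs_err = sum(abs(pred_csmf[c] - true_csmf[c]) for c in true_csmf)
    return 1.0 - abs_err / (2.0 * (1.0 - min(true_csmf.values())))

def chance_corrected_concordance(true_causes, pred_causes, n_causes):
    """Overall individual-level agreement, corrected for the 1/N
    agreement expected by chance among N causes."""
    correct = sum(t == p for t, p in zip(true_causes, pred_causes))
    raw = correct / len(true_causes)
    return (raw - 1.0 / n_causes) / (1.0 - 1.0 / n_causes)
```

    A perfect prediction scores 1.0 on both metrics; chance-level individual assignment scores 0.0 on CCC.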

  19. Antibody Selection for Cancer Target Validation of FSH-Receptor in Immunohistochemical Settings

    Directory of Open Access Journals (Sweden)

    Nina Moeker

    2017-10-01

    Full Text Available Background: The follicle-stimulating hormone (FSH)-receptor (FSHR) has been reported to be an attractive target for antibody therapy in human cancer. However, divergent immunohistochemical (IHC) findings have been reported for FSHR expression in tumor tissues, which could be due to the specificity of the antibodies used. Methods: Three frequently used antibodies (sc-7798, sc-13935, and FSHR323) were validated for their suitability in an immunohistochemical study for FSHR expression in different tissues. As quality control, two potential therapeutic anti-hFSHR Ylanthia® antibodies (Y010913, Y010916) were used. The specificity criteria for selection of antibodies were binding to native hFSHR of different sources, and no binding to non-related proteins. The ability of antibodies to stain the paraffin-embedded Flp-In Chinese hamster ovary (CHO/FSHR) cells was tested after application of different epitope retrieval methods. Results: From the five tested anti-hFSHR antibodies, only Y010913, Y010916, and FSHR323 showed specific binding to native, cell-presented hFSHR. Since Ylanthia® antibodies were selected to specifically recognize native FSHR, as required for a potential therapeutic antibody candidate, FSHR323 was the only antibody to detect the receptor in IHC/histochemical settings on transfected cells, and at markedly lower, physiological concentrations (e.g., in Sertoli cells of human testes). The pattern of FSHR323 staining noticed for ovarian, prostatic, and renal adenocarcinomas indicated that FSHR was expressed mainly in the peripheral tumor blood vessels. Conclusion: Of all published IHC antibodies tested, only antibody FSHR323 proved suitable for target validation of hFSHR in an IHC setting for cancer. Our studies could not confirm the previously reported FSHR overexpression in ovarian and prostate cancer cells. Instead, specific overexpression in peripheral tumor blood vessels could be confirmed after thorough validation of the antibodies used.

  20. Urine specimen validity test for drug abuse testing in workplace and court settings.

    Science.gov (United States)

    Lin, Shin-Yu; Lee, Hei-Hwa; Lee, Jong-Feng; Chen, Bai-Hsiun

    2018-01-01

    In recent decades, urine drug testing in the workplace has become common in many countries in the world. There have been several studies concerning the use of the urine specimen validity test (SVT) for drug abuse testing administered in the workplace. However, very little data exists concerning the urine SVT on drug abuse tests from court specimens, including dilute, substituted, adulterated, and invalid tests. We investigated 21,696 submitted urine drug test samples for SVT from workplace and court settings in southern Taiwan over 5 years. All immunoassay screen-positive urine specimen drug tests were confirmed by gas chromatography/mass spectrometry. We found that the mean 5-year prevalence of tampering (dilute, substituted, or invalid tests) in urine specimens from the workplace and court settings were 1.09% and 3.81%, respectively. The mean 5-year percentage of dilute, substituted, and invalid urine specimens from the workplace were 89.2%, 6.8%, and 4.1%, respectively. The mean 5-year percentage of dilute, substituted, and invalid urine specimens from the court were 94.8%, 1.4%, and 3.8%, respectively. No adulterated cases were found among the workplace or court samples. The most common drug identified from the workplace specimens was amphetamine, followed by opiates. The most common drug identified from the court specimens was ketamine, followed by amphetamine. We suggest that all urine specimens taken for drug testing from both the workplace and court settings need to be tested for validity. Copyright © 2017. Published by Elsevier B.V.

  1. Imaging Matrix Metalloproteases in Spontaneous Colon Tumors: Validation by Correlation with Histopathology.

    Science.gov (United States)

    Hensley, Harvey; Cooper, Harry S; Chang, Wen-Chi L; Clapper, Margie L

    2017-01-01

    The use of fluorescent probes in conjunction with white-light colonoscopy is a promising strategy for improving the detection of precancerous colorectal lesions, in particular flat (sessile) lesions that do not protrude into the lumen of the colon. We describe a method for determining the sensitivity and specificity of an enzymatically activated near-infrared probe (MMPSense680) for the detection of colon lesions in a mouse model (APC+/Min-FCCC) of spontaneous colorectal cancer. Fluorescence intensity correlates directly with the activity of matrix metalloproteinases (MMPs). Overexpression of MMPs is an early event in the development of colorectal lesions. Although the probe employed serves as a reporter of the activity of MMPs, our method can be applied to any fluorescent probe that targets an early molecular event in the development of colorectal tumors.

  2. Signal-to-noise assessment for diffusion tensor imaging with single data set and validation using a difference image method with data from a multicenter study

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhiyue J., E-mail: jerry.wang@childrens.com [Department of Radiology, Children's Medical Center, Dallas, Texas 75235 and Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas 75390 (United States); Chia, Jonathan M. [Clinical Science, Philips Healthcare, Cleveland, Ohio 44143 (United States); Ahmed, Shaheen; Rollins, Nancy K. [Department of Radiology, Children's Medical Center, Dallas, TX 75235 and Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX 75390 (United States)

    2014-09-15

    Purpose: To describe a quantitative method for determination of SNR that extracts the local noise level using a single diffusion data set. Methods: Brain data sets came from a multicenter study (eight sites; three MR vendors). Data acquisition protocol required b = 0, 700 s/mm², fov = 256 × 256 mm², acquisition matrix size 128 × 128, reconstruction matrix size 256 × 256; 30 gradient encoding directions and voxel size 2 × 2 × 2 mm³. Regions-of-interest (ROI) were placed manually on the b = 0 image volume on transverse slices, and signal was recorded as the mean value of the ROI. The noise level from the ROI was evaluated using Fourier Transform based Butterworth high-pass filtering. Patients were divided into two groups, one for filter parameter optimization (N = 17) and one for validation (N = 10). Six white matter areas (the genu and splenium of corpus callosum, right and left centrum semiovale, right and left anterior corona radiata) were analyzed. The Bland–Altman method was used to compare the resulting SNR with that from the difference image method. The filter parameters were optimized for each brain area, and a set of “global” parameters was also obtained, which represent an average of all regions. Results: The Bland–Altman analysis on the validation group using “global” filter parameters revealed that the 95% limits of agreement of percent bias between the SNR obtained with the new and the reference methods were −15.5% (median of the lower limit, range [−24.1%, −8.9%]) and 14.5% (median of the higher limits, range [12.7%, 18.0%]) for the 6 brain areas. Conclusions: An FT-based high-pass filtering method can be used for local area SNR assessment using only one DTI data set. This method could be used to evaluate SNR for patient studies in a multicenter setting.
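    The single-image noise-extraction idea above can be sketched as follows: high-pass filter the ROI in the Fourier domain so that smooth anatomy is removed, treat the residual as noise, and rescale by the filter's noise gain. The cutoff and order below are assumed values, not the per-region optimized parameters the study derives.

```python
import numpy as np

def butterworth_highpass(shape, cutoff=0.25, order=2):
    """2-D Butterworth high-pass transfer function; `cutoff` is the
    half-power radius as a fraction of the Nyquist frequency
    (both parameter values here are illustrative assumptions)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    r = np.hypot(fx, fy) / 0.5            # radius relative to Nyquist
    return 1.0 / (1.0 + (cutoff / np.maximum(r, 1e-12)) ** (2 * order))

def roi_snr(roi, cutoff=0.25, order=2):
    """Estimate SNR of an ROI from a single image: the high-pass output
    is treated as pure noise, rescaled by the filter's noise gain."""
    h = butterworth_highpass(roi.shape, cutoff, order)
    hp = np.fft.ifft2(np.fft.fft2(roi) * h).real
    noise_gain = np.sqrt(np.mean(h ** 2))   # std scaling for white noise
    sigma = hp.std() / noise_gain
    return roi.mean() / sigma
```

    On a synthetic flat region (mean 100, Gaussian noise of standard deviation 5) this recovers an SNR close to the true value of 20.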

  3. Validity of the Perceived Health Competence Scale in a UK primary care setting.

    Science.gov (United States)

    Dempster, Martin; Donnelly, Michael

    2008-01-01

    The Perceived Health Competence Scale (PHCS) is a measure of self-efficacy regarding general health-related behaviour. This brief paper examines the psychometric properties of the PHCS in a UK context. Questionnaires containing the PHCS, the SF-36 and questions about perceived health needs were posted to 486 patients randomly selected from a GP practice list. Complete questionnaires were returned by 320 patients. Analyses of these responses provide strong evidence for the validity of the PHCS in this setting. Consequently, we conclude that the PHCS is a useful addition to measures of global self-efficacy and measures of self-efficacy regarding specific behaviours in the toolkit of health psychologists. This range of self-efficacy assessment tools will ensure that psychologists can match the level of specificity of the measure of expectancy beliefs to the level of specificity of the outcome of interest.

  4. A strategy for developing representative germplasm sets for systematic QTL validation, demonstrated for apple, peach, and sweet cherry

    NARCIS (Netherlands)

    Peace, C.P.; Luby, J.; Weg, van de W.E.; Bink, M.C.A.M.; Iezzoni, A.F.

    2014-01-01

    Horticultural crop improvement would benefit from a standardized, systematic, and statistically robust procedure for validating quantitative trait loci (QTLs) in germplasm relevant to breeding programs. Here, we describe and demonstrate a strategy for developing reference germplasm sets of

  5. Older adult mistreatment risk screening: contribution to the validation of a screening tool in a domestic setting.

    Science.gov (United States)

    Lindenbach, Jeannette M; Larocque, Sylvie; Lavoie, Anne-Marise; Garceau, Marie-Luce

    2012-06-01

    The hidden nature of older adult mistreatment renders its detection in the domestic setting particularly challenging. A validated screening instrument that can provide a systematic assessment of risk factors can facilitate this detection. One such instrument, the "expanded Indicators of Abuse" tool, has been previously validated in the Hebrew language in a hospital setting. The present study has contributed to the validation of the "e-IOA" in an English-speaking community setting in Ontario, Canada. It consisted of two phases: (a) a content validity review and adaptation of the instrument by experts throughout Ontario, and (b) an inter-rater reliability assessment by home visiting nurses. The adaptation, the "Mistreatment of Older Adult Risk Factors" tool, offers a comprehensive tool for screening in the home setting. This instrument is significant to professional practice as practitioners working with older adults will be better equipped to assess for risk of mistreatment.

  6. Setting and validating the pass/fail score for the NBDHE.

    Science.gov (United States)

    Tsai, Tsung-Hsun; Dixon, Barbara Leatherman

    2013-04-01

    This report describes the overall process used for setting the pass/fail score for the National Board Dental Hygiene Examination (NBDHE). The Objective Standard Setting (OSS) method was used for setting the pass/fail score for the NBDHE. The OSS method requires a panel of experts to determine the criterion items and proportion of these items that minimally competent candidates would answer correctly, the percentage of mastery and the confidence level of the error band. A panel of 11 experts was selected by the Joint Commission on National Dental Examinations (Joint Commission). Panel members represented geographic distribution across the U.S. and had the following characteristics: full-time dental hygiene practitioners with experience in areas of preventive, periodontal, geriatric and special needs care, and full-time dental hygiene educators with experience in areas of scientific basis for dental hygiene practice, provision of clinical dental hygiene services and community health/research principles. Utilizing the expert panel's judgments, the pass/fail score was set and then the score scale was established using the Rasch measurement model. Statistical and psychometric analysis shows the actual failure rate and the OSS failure rate are reasonably consistent (2.4% vs. 2.8%). The analysis also showed the lowest error of measurement, an index of the precision at the pass/fail score point and that the highest reliability (0.97) are achieved at the pass/fail score point. The pass/fail score is a valid guide for making decisions about candidates for dental hygiene licensure. This new standard was reviewed and approved by the Joint Commission and was implemented beginning in 2011.
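    The final step above, placing a judgment-based cut score on a scale built with the Rasch measurement model, can be illustrated by mapping a raw cut score to the latent-trait location whose expected raw score equals it. A hedged sketch with invented item difficulties; the Joint Commission's actual calibration is not given in this record.

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a correct response to an item of
    difficulty b by a candidate of ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected raw score over a set of calibrated items."""
    return sum(rasch_p(theta, b) for b in difficulties)

def theta_for_raw_score(raw, difficulties, lo=-6.0, hi=6.0, tol=1e-9):
    """Ability whose expected raw score equals the cut score; bisection
    works because the expected score is strictly increasing in theta."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expected_score(mid, difficulties) < raw:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    A candidate at an item's difficulty answers it correctly with probability 0.5, so a raw cut in the middle of a symmetric item set maps to theta = 0.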

  7. Cross-cultural validation of Lupus Impact Tracker in five European clinical practice settings.

    Science.gov (United States)

    Schneider, Matthias; Mosca, Marta; Pego-Reigosa, José-Maria; Gunnarsson, Iva; Maurel, Frédérique; Garofano, Anna; Perna, Alessandra; Porcasi, Rolando; Devilliers, Hervé

    2017-05-01

    The aim was to evaluate the cross-cultural validity of the Lupus Impact Tracker (LIT) in five European countries and to assess its acceptability and feasibility from the patient and physician perspectives. A prospective, observational, cross-sectional and multicentre validation study was conducted in clinical settings. Before the visit, patients completed LIT, Short Form 36 (SF-36) and care satisfaction questionnaires. During the visit, physicians assessed disease activity [Safety of Estrogens in Lupus Erythematosus National Assessment (SELENA)-SLEDAI], organ damage [SLICC/ACR damage index (SDI)] and flare occurrence. Cross-cultural validity was assessed using the Differential Item Functioning method. Five hundred and sixty-nine SLE patients were included by 25 specialists; 91.7% were outpatients and 89.9% female, with mean age 43.5 (13.0) years. Disease profile was as follows: 18.3% experienced flares; mean SELENA-SLEDAI score 3.4 (4.5); mean SDI score 0.8 (1.4); and SF-36 mean physical and mental component summary scores: physical component summary 42.8 (10.8) and mental component summary 43.0 (12.3). Mean LIT score was 34.2 (22.3) (median: 32.5), indicating that lupus moderately impacted patients' daily life. A cultural Differential Item Functioning of negligible magnitude was detected across countries (pseudo-R² difference of 0.01-0.04). Differences were observed between LIT scores and Physician Global Assessment, SELENA-SLEDAI, SDI scores = 0 (P cultural invariability across countries. They suggest that LIT can be used in routine clinical practice to evaluate and follow patient-reported outcomes in order to improve patient-physician interaction. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  8. Development and validation of micellar liquid chromatographic methods for the determination of antibiotics in different matrixes.

    Science.gov (United States)

    Rambla-Alegre, Maria; Esteve-Romero, Josep; Carda-Broch, Samuel

    2011-01-01

    Antibiotics are the most important bioactive and chemotherapeutic compounds to be produced by microbiological synthesis, and they have proved their worth in a variety of fields, such as medicinal chemistry, agriculture, and the food industry. Interest in antibiotics has grown in parallel with an increasingly high degree of productivity in the field of analytical applications. Therefore, it is necessary to develop chromatographic procedures capable of determining various drugs simultaneously in the shortest possible time. Micellar liquid chromatography (MLC) is an RP-HPLC technique that offers advantages over conventional HPLC as far as sample preparation, selectivity, and versatility are concerned. Its main advantage is that samples can be injected directly into the chromatographic system with no previous preparation step. This paper mainly focuses on the results of the authors' own recent research and reports the chromatographic conditions for determination of various antibiotics (penicillins, quinolones, and sulfonamides) in different matrixes (pharmaceuticals, biological fluids, and food). The work of other authors on MLC-based antibiotic determination has been included.

  9. Adaption and validation of the Safety Attitudes Questionnaire for the Danish hospital setting

    Directory of Open Access Journals (Sweden)

    Kristensen S

    2015-02-01

    Full Text Available Solvejg Kristensen,1–3 Svend Sabroe,4 Paul Bartels,1,5 Jan Mainz,3,5 Karl Bang Christensen6 1The Danish Clinical Registries, Aarhus, Denmark; 2Department of Health Science and Technology, Aalborg University, Aalborg, Denmark; 3Aalborg University Hospital, Psychiatry, Aalborg, Denmark; 4Department of Public Health, Aarhus University, Aarhus, Denmark; 5Department of Clinical Medicine, Aalborg University, Aalborg, Denmark; 6Department of Biostatistics, University of Copenhagen, Copenhagen, Denmark Purpose: Measuring and developing a safe culture in health care is a focus point in creating highly reliable organizations being successful in avoiding patient safety incidents where these could normally be expected. Questionnaires can be used to capture a snapshot of an employee's perceptions of patient safety culture. A commonly used instrument to measure safety climate is the Safety Attitudes Questionnaire (SAQ). The purpose of this study was to adapt the SAQ for use in Danish hospitals, assess its construct validity and reliability, and present benchmark data. Materials and methods: The SAQ was translated and adapted for the Danish setting (SAQ-DK). The SAQ-DK was distributed to 1,263 staff members from 31 in- and outpatient units (clinical areas) across five somatic and one psychiatric hospitals through meeting administration, hand delivery, and mailing. Construct validity and reliability were tested in a cross-sectional study. Goodness-of-fit indices from confirmatory factor analysis were reported along with inter-item correlations, Cronbach's alpha (α), and item and subscale scores. Results: Participation was 73.2% (N=925) of invited health care workers. Goodness-of-fit indices from the confirmatory factor analysis showed: χ²=1496.76, P<0.001, CFI 0.901, RMSEA (90% CI) 0.053 (0.050-0.056), Probability RMSEA (p close)=0.057. Inter-scale correlations between the factors showed moderate-to-high correlations. The scale stress recognition had significant
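    Cronbach's α, reported above for the SAQ-DK reliability analysis, is a simple function of item and total-score sample variances. A minimal pure-Python sketch, not the study's statistical software:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: one inner list of scores per item, aligned across respondents."""
    k = len(items)                 # number of items
    n = len(items[0])              # number of respondents

    def var(xs):                   # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return k / (k - 1) * (1.0 - item_var_sum / var(totals))
```

    Two perfectly correlated items give α = 1.0; uncorrelated items drive α toward 0.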

  10. Validation of Spectral Unmixing Results from Informed Non-Negative Matrix Factorization (INMF) of Hyperspectral Imagery

    Science.gov (United States)

    Wright, L.; Coddington, O.; Pilewskie, P.

    2017-12-01

    Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. We describe the development of an Informed Non-Negative Matrix Factorization (INMF) spectral unmixing method to exploit this spectral information and separate atmospheric and surface signals based on their physical sources. INMF offers marked benefits over other commonly employed techniques including non-negativity, which avoids physically impossible results; and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO), with a focus on separating surface and atmospheric signal contributions. HICO's coastal ocean focus provides a dataset with a wide range of atmospheric and surface conditions. These include atmospheres with varying aerosol optical thicknesses and cloud cover. HICO images also provide a range of surface conditions including deep ocean regions, with only minor contributions from the ocean surfaces; and more complex shallow coastal regions with contributions from the seafloor or suspended sediments. We provide extensive comparison of INMF decomposition results against independent measurements of physical properties. These include comparison against traditional model-based retrievals of water-leaving, aerosol, and molecular scattering radiances and other satellite products, such as aerosol optical thickness from
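    At its core, INMF is still a non-negative matrix factorization V ≈ WH. The sketch below shows only the plain Lee-Seung multiplicative updates for the Frobenius loss; the "informed" parts described above (physics-based constraints, library-spectra initialization) are omitted, and the random initialization is an assumption for illustration.

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Plain NMF via multiplicative updates: V (m x n, non-negative) is
    approximated by W (m x rank) @ H (rank x n), both non-negative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-12                       # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

    The multiplicative form guarantees the factors stay non-negative, which is the physical-plausibility property the abstract emphasizes.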

  11. MMPI-2 Symptom Validity (FBS) Scale: psychometric characteristics and limitations in a Veterans Affairs neuropsychological setting.

    Science.gov (United States)

    Gass, Carlton S; Odland, Anthony P

    2014-01-01

    The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) Symptom Validity (Fake Bad Scale [FBS]) Scale is widely used to assist in determining noncredible symptom reporting, despite a paucity of detailed research regarding its itemmetric characteristics. Originally designed for use in civil litigation, the FBS is often used in a variety of clinical settings. The present study explored its fundamental psychometric characteristics in a sample of 303 patients who were consecutively referred for a comprehensive examination in a Veterans Affairs (VA) neuropsychology clinic. FBS internal consistency (reliability) was .77. Its underlying factor structure consisted of three unitary dimensions (Tiredness/Distractibility, Stomach/Head Discomfort, and Claimed Virtue of Self/Others) accounting for 28.5% of the total variance. The FBS's internal structure showed factorial discordance, as Claimed Virtue was negatively related to most of the FBS and to its somatic complaint components. Scores on this 12-item FBS component reflected a denial of socially undesirable attitudes and behaviors (Antisocial Practices Scale) that is commonly expressed by the 1,138 males in the MMPI-2 normative sample. These 12 items significantly reduced FBS reliability, introducing systematic error variance. In this VA neuropsychological referral setting, scores on the FBS have ambiguous meaning because of its structural discordance.

  12. The validity of visual acuity assessment using mobile technology devices in the primary care setting.

    Science.gov (United States)

    O'Neill, Samuel; McAndrew, Darryl J

    2016-04-01

    The assessment of visual acuity is indicated in a number of clinical circumstances. It is commonly conducted through the use of a Snellen wall chart. Mobile technology developments and adoption rates by clinicians may potentially provide more convenient methods of assessing visual acuity. Limited data exist on the validity of these devices and applications. The objective of this study was to evaluate the assessment of distance visual acuity using mobile technology devices against the commonly used 3-metre Snellen chart in a primary care setting. A prospective quantitative comparative study was conducted at a regional medical practice. The visual acuity of 60 participants was assessed on a Snellen wall chart and two mobile technology devices (iPhone, iPad). Visual acuity intervals were converted to logarithm of minimum angle of resolution (logMAR) scores and subjected to intraclass correlation coefficient (ICC) assessment. The results show a high level of general agreement between testing modality (ICC 0.917 with a 95% confidence interval of 0.887-0.940). The high level of agreement of visual acuity results between the Snellen wall chart and both mobile technology devices suggests that clinicians can use this technology with confidence in the primary care setting.
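    Two computations above are standard enough to state directly: converting Snellen fractions to logMAR scores, and the two-way random, absolute-agreement, single-measures ICC(2,1) used to compare testing modalities. A minimal sketch, not the software the study used:

```python
import math

def snellen_to_logmar(numerator, denominator):
    """logMAR = log10(1 / decimal acuity); Snellen 6/6 (or 20/20) -> 0.0."""
    return math.log10(denominator / numerator)

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measures. ratings: one inner list of k rater scores per subject."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

    Identical scores from every modality give an ICC of 1.0; the study's 0.917 indicates close but not perfect agreement.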

  13. Validation of Fall Risk Assessment Specific to the Inpatient Rehabilitation Facility Setting.

    Science.gov (United States)

    Thomas, Dan; Pavic, Andrea; Bisaccia, Erin; Grotts, Jonathan

    2016-09-01

    To evaluate and compare the Morse Fall Scale (MFS) and the Casa Colina Fall Risk Assessment Scale (CCFRAS) for identification of patients at risk for falling in an acute inpatient rehabilitation facility. The primary objective of this study was to perform a retrospective validation study of the CCFRAS, specifically for use in the inpatient rehabilitation facility (IRF) setting. Retrospective validation study. The study was approved under expedited review by the local Institutional Review Board. Data were collected on all patients admitted to Cottage Rehabilitation Hospital (CRH), a 38-bed acute inpatient rehabilitation hospital, from March 2012 to August 2013. Patients were excluded from the study if they had a length of stay less than 3 days or age less than 18. The area under the receiver operating characteristic curve (AUC) and the diagnostic odds ratio were used to examine the differences between the MFS and CCFRAS. AUC between fall scales was compared using the DeLong test. There were 931 patients included in the study with 62 (6.7%) patient falls. The average age of the population was 68.8 with 503 males (51.2%). The AUC was 0.595 and 0.713 for the MFS and CCFRAS, respectively (0.006). The diagnostic odds ratio of the MFS was 2.0 and 3.6 for the CCFRAS using the recommended cutoffs of 45 for the MFS and 80 for the CCFRAS. The CCFRAS appears to be a better tool in detecting fallers vs. nonfallers specific to the IRF setting. The assessment and identification of patients at high risk for falling is important to implement specific precautions and care for these patients to reduce their risk of falling. The CCFRAS is more clinically relevant in identifying patients at high risk for falling in the IRF setting compared to other fall risk assessments. Implementation of this scale may lead to a reduction in fall rate and injuries from falls as it more appropriately identifies patients at high risk for falling. © 2015 Association of Rehabilitation Nurses.
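    The two statistics compared above, the empirical AUC and the diagnostic odds ratio at a recommended cutoff, can be computed from raw scores and observed fall outcomes as follows. A sketch with invented toy data; the ">= cutoff means high risk" convention is an assumption for illustration.

```python
def confusion_at_cutoff(scores, fell, cutoff):
    """Dichotomize a fall-risk score (score >= cutoff taken as 'high risk')."""
    tp = sum(1 for s, f in zip(scores, fell) if s >= cutoff and f)
    fp = sum(1 for s, f in zip(scores, fell) if s >= cutoff and not f)
    fn = sum(1 for s, f in zip(scores, fell) if s < cutoff and f)
    tn = sum(1 for s, f in zip(scores, fell) if s < cutoff and not f)
    return tp, fp, fn, tn

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """(TP/FN) / (FP/TN): odds of a positive test in fallers vs non-fallers."""
    return (tp / fn) / (fp / tn)

def auc(scores, labels):
    """Empirical AUC: probability that a random faller outscores a random
    non-faller, counting ties as half (the Mann-Whitney formulation)."""
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

    An AUC of 0.5 means the scale ranks fallers no better than chance; 1.0 means it separates them perfectly.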

  14. The use of questionnaires in colour research in real-life settings : In search of validity and methodological pitfalls

    NARCIS (Netherlands)

    Bakker, I.C.; van der Voordt, Theo; Vink, P.; de Boon, J

    2014-01-01

    This research discusses the validity of applying questionnaires in colour research in real-life settings. In the literature, the conclusions concerning the influence of colours on human performance and well-being are often conflicting. This can be caused by the artificial setting of the test

  15. Validity of Chinese Version of the Composite International Diagnostic Interview-3.0 in Psychiatric Settings

    Institute of Scientific and Technical Information of China (English)

    Jin Lu; Yue-Qin Huang; Zhao-Rui Liu; Xiao-Lan Cao

    2015-01-01

    Background: The Composite International Diagnostic Interview-3.0 (CIDI-3.0) is a fully structured lay-administered diagnostic interview for the assessment of mental disorders according to ICD-10 and Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria. The aim of the study was to investigate the concurrent validity of the Chinese CIDI in diagnosing mental disorders in psychiatric settings. Methods: We recruited 208 participants, of whom 148 were patients from two psychiatric hospitals and 60 were healthy people from communities. These participants were administered the CIDI by six trained lay interviewers and the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I, gold standard) by two psychiatrists. Agreement between the CIDI and SCID-I was assessed with sensitivity, specificity, positive predictive value and negative predictive value. Individual-level CIDI-SCID diagnostic concordance was evaluated using the area under the receiver operator characteristic curve and Cohen's K. Results: Substantial to excellent CIDI-to-SCID concordance was found for any substance use disorder (area under the receiver operator characteristic curve [AUC] = 0.926), any anxiety disorder (AUC = 0.807) and any mood disorder (AUC = 0.806). The concordance between the CIDI and the SCID for psychotic and eating disorders is moderate. However, for individual mental disorders, the CIDI-SCID concordance for bipolar disorders (AUC = 0.55) and anorexia nervosa (AUC = 0.50) was insufficient. Conclusions: Overall, the Chinese version of CIDI-3.0 has acceptable validity in diagnosing substance use disorder, anxiety disorder and mood disorder among the Chinese adult population. However, we should be cautious when using it for bipolar disorders and anorexia nervosa.
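All of the agreement measures named in this record (sensitivity, specificity, predictive values, Cohen's K) derive from the 2x2 cross-tabulation of index-test diagnoses against the gold standard. A minimal sketch with invented cell counts, not the study's data:

```python
def agreement_stats(tp, fp, fn, tn):
    """Diagnostic agreement of an index test against a gold standard,
    from the four cells of a 2x2 table (tp: both positive, tn: both
    negative, fp/fn: disagreements)."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)   # gold-positive cases detected
    spec = tn / (tn + fp)   # gold-negative cases cleared
    ppv = tp / (tp + fp)    # positive predictive value
    npv = tn / (tn + fn)    # negative predictive value
    po = (tp + tn) / n      # observed agreement
    # Agreement expected by chance, from the marginal totals.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

# Hypothetical counts for a single disorder (illustration only).
print(agreement_stats(tp=40, fp=10, fn=8, tn=150))
```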

  16. Validity of Chinese Version of the Composite International Diagnostic Interview-3.0 in Psychiatric Settings

    Directory of Open Access Journals (Sweden)

    Jin Lu

    2015-01-01

    Full Text Available Background: The Composite International Diagnostic Interview-3.0 (CIDI-3.0) is a fully structured lay-administered diagnostic interview for the assessment of mental disorders according to ICD-10 and Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria. The aim of the study was to investigate the concurrent validity of the Chinese CIDI in diagnosing mental disorders in psychiatric settings. Methods: We recruited 208 participants, of whom 148 were patients from two psychiatric hospitals and 60 were healthy people from communities. These participants were administered the CIDI by six trained lay interviewers and the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I, gold standard) by two psychiatrists. Agreement between the CIDI and SCID-I was assessed with sensitivity, specificity, positive predictive value and negative predictive value. Individual-level CIDI-SCID diagnostic concordance was evaluated using the area under the receiver operator characteristic curve and Cohen's K. Results: Substantial to excellent CIDI-to-SCID concordance was found for any substance use disorder (area under the receiver operator characteristic curve [AUC] = 0.926), any anxiety disorder (AUC = 0.807) and any mood disorder (AUC = 0.806). The concordance between the CIDI and the SCID for psychotic and eating disorders is moderate. However, for individual mental disorders, the CIDI-SCID concordance for bipolar disorders (AUC = 0.55) and anorexia nervosa (AUC = 0.50) was insufficient. Conclusions: Overall, the Chinese version of CIDI-3.0 has acceptable validity in diagnosing substance use disorder, anxiety disorder and mood disorder among the Chinese adult population. However, we should be cautious when using it for bipolar disorders and anorexia nervosa.

  17. Development and validation of factor analysis for dynamic in-vivo imaging data sets

    Science.gov (United States)

    Goldschmied, Lukas; Knoll, Peter; Mirzaei, Siroos; Kalchenko, Vyacheslav

    2018-02-01

    In-vivo optical imaging methods provide information about the anatomical structures and function of tissues, ranging from single cells to entire organisms. Dynamic Fluorescent Imaging (DFI) is used to examine dynamic events related to normal physiology or disease progression in real time. In this work we improve this method by using factor analysis (FA) to automatically separate overlying structures. The proposed method is based on the previously introduced Transcranial Optical Vascular Imaging (TOVI) approach, which exploits the natural and sufficient transparency of the intact cranial bones of a mouse. Fluorescent image acquisition is performed after intravenous fluorescent tracer administration. Afterwards, FA is used to extract structures with different temporal characteristics from dynamic contrast-enhanced studies without making any a priori assumptions about physiology. The method was validated with a dynamic light phantom based on the Arduino hardware platform and with dynamic fluorescent cerebral hemodynamics data sets. Using the phantom data, FA can separate the various light channels without user intervention. FA applied to an image sequence obtained after fluorescent tracer administration allows extraction of valuable information about cerebral blood vessel anatomy and function, without a priori assumptions about anatomy or physiology, while keeping the mouse cranium intact. Unsupervised color-coding based on FA enhances the visibility and discrimination of blood vessels belonging to different compartments. DFI based on FA, especially in the case of transcranial imaging, can be used to separate dynamic structures.
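The core idea, separating overlying structures by their temporal signatures, can be sketched with synthetic data: pixel time courses built as mixtures of two hidden temporal profiles, then unmixed by factor analysis. The use of scikit-learn's FactorAnalysis and the "arterial"/"venous" profile shapes below are assumptions for illustration; the paper does not specify an implementation.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

# Two hidden temporal courses: an early bolus and a delayed washout
# (purely illustrative shapes, not real hemodynamics).
t = np.linspace(0, 10, 200)
arterial = np.exp(-((t - 2.0) ** 2))
venous = np.exp(-((t - 5.0) ** 2) / 4.0)

# 500 pixels, each a random mixture of the two courses plus noise.
mix = rng.uniform(0, 1, size=(500, 2))
frames = mix @ np.vstack([arterial, venous]) + 0.02 * rng.normal(size=(500, 200))

# Treat each time point as a variable and each pixel as a sample;
# two factors should span the arterial/venous temporal subspace.
fa = FactorAnalysis(n_components=2, random_state=0)
pixel_loadings = fa.fit_transform(frames)  # (500, 2) per-pixel weights
factors = fa.components_                   # (2, 200) temporal courses
```

The per-pixel loadings then serve for unsupervised color-coding: pixels dominated by one factor can be assigned one color channel, pixels dominated by the other factor a different one.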

  18. ACE-FTS version 3.0 data set: validation and data processing update

    Directory of Open Access Journals (Sweden)

    Claire Waymark

    2014-01-01

    Full Text Available On 12 August 2003, the Canadian-led Atmospheric Chemistry Experiment (ACE) was launched into a 74° inclination orbit at 650 km with the mission objective to measure atmospheric composition using infrared and UV-visible spectroscopy (Bernath et al., 2005). The ACE mission consists of two main instruments, ACE-FTS and MAESTRO (McElroy et al., 2007), which are being used to investigate the chemistry and dynamics of the Earth's atmosphere. Here, we focus on the high resolution (0.02 cm-1) infrared Fourier Transform Spectrometer, ACE-FTS, that measures in the 750-4400 cm-1 (2.2 to 13.3 µm) spectral region. This instrument has been making regular solar occultation observations for more than nine years. The current ACE-FTS data version (version 3.0) provides profiles of temperature and volume mixing ratios (VMRs) of more than 30 atmospheric trace gas species, as well as 20 subsidiary isotopologues of the most abundant trace atmospheric constituents, over a latitude range of ~85°N to ~85°S. This letter describes the current data version and recent validation comparisons and provides a description of our planned updates for the ACE-FTS data set. [...]

  19. Validation of the Amsterdam Dynamic Facial Expression Set--Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions.

    Directory of Open Access Journals (Sweden)

    Tanja S H Wingenbach

    Full Text Available Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the

  20. Validation of the Amsterdam Dynamic Facial Expression Set--Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions.

    Science.gov (United States)

    Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark

    2016-01-01

    Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author.
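The raw and unbiased (Hu) hit rates reported above both come from the stimulus-by-response confusion matrix; the unbiased variant (after Wagner, 1993) discounts credit for over-used response categories. A sketch with an invented 3-emotion confusion matrix:

```python
import numpy as np

def hit_rates(confusion):
    """Raw and unbiased hit rates from a stimulus x response
    confusion matrix (rows: presented emotion, cols: chosen response)."""
    C = np.asarray(confusion, dtype=float)
    correct = np.diag(C)
    raw = correct / C.sum(axis=1)  # hits / times the emotion was shown
    # Wagner's (1993) unbiased hit rate: squared hits divided by
    # (row total x column total), penalising overused responses.
    hu = correct ** 2 / (C.sum(axis=1) * C.sum(axis=0))
    return raw, hu

# Toy example: the first response category is overused.
C = [[8, 2, 0],
     [4, 5, 1],
     [2, 1, 7]]
raw, hu = hit_rates(C)
```

Because Hu multiplies the raw hit rate by the precision of the response category, it can never exceed the raw rate, which is why studies typically report both.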

  1. Validation of the Amsterdam Dynamic Facial Expression Set – Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions

    Science.gov (United States)

    Wingenbach, Tanja S. H.

    2016-01-01

    Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author

  2. Revising the retrieval technique of a long-term stratospheric HNO3 data set: from a constrained matrix inversion to the optimal estimation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Fiorucci, I.; Muscari, G. [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy); De Zafra, R.L. [State Univ. of New York, Stony Brook, NY (United States). Dept. of Physics and Astronomy

    2011-07-01

    The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5°N, 68.8°W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100±20% from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15% or 0.3 ppbv, whichever is larger.
    Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles
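The sensitivity diagnostic quoted above is defined directly from the averaging kernel matrix of the optimal-estimation retrieval. A minimal numpy sketch of that quantity (here the kernel elements belonging to each retrieval level are summed; conventions for averaging-kernel sums vary between groups, and the example matrix below is invented):

```python
import numpy as np

def retrieval_sensitivity(A):
    """Per-level sensitivity of an optimal-estimation retrieval: the
    sum of averaging-kernel elements for each retrieval level. A value
    near 1 means the retrieval responds fully to the true state; near
    0 means the result is dominated by the a priori."""
    A = np.asarray(A, dtype=float)
    return A.sum(axis=1)

# A perfect retrieval (identity kernel) has sensitivity 1 at every level.
print(retrieval_sensitivity(np.eye(4)))  # [1. 1. 1. 1.]

# A smoothed retrieval still sums to ~1 where information is present;
# the last level here is mostly a priori.
A = np.array([[0.6, 0.3, 0.1, 0.0],
              [0.2, 0.5, 0.2, 0.1],
              [0.0, 0.2, 0.5, 0.2],
              [0.0, 0.0, 0.1, 0.3]])
print(retrieval_sensitivity(A))
```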

  3. Affordances in the home environment for motor development: Validity and reliability for the use in daycare setting.

    Science.gov (United States)

    Müller, Alessandra Bombarda; Valentini, Nadia Cristina; Bandeira, Paulo Felipe Ribeiro

    2017-05-01

    The range of stimuli provided by physical space, toys and care practices contributes to the motor, cognitive and social development of children. However, assessing the quality of child education environments is a challenge, and can be considered a health promotion initiative. This study investigated the criterion, content and construct validity and the reliability of the Affordances in the Home Environment for Motor Development - Infant Scale (AHEMD-IS), version 3-18 months, for use in daycare settings. Content validation was conducted with the participation of seven motor development and health care experts, and face validity was assessed by 20 specialists in health and education. The results indicate the suitability of the adapted AHEMD-IS, evidencing its validity for the daycare setting as a potential tool to assess the opportunities that the collective context offers for child development. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Atmospheric correction at AERONET locations: A new science and validation data set

    Science.gov (United States)

    Wang, Y.; Lyapustin, A.I.; Privette, J.L.; Morisette, J.T.; Holben, B.

    2009-01-01

    This paper describes an Aerosol Robotic Network (AERONET)-based Surface Reflectance Validation Network (ASRVN) and its data set of spectral surface bidirectional reflectance and albedo based on Moderate Resolution Imaging Spectroradiometer (MODIS) TERRA and AQUA data. The ASRVN is an operational data collection and processing system. It receives 50 × 50 km² subsets of MODIS level 1B (L1B) data from the MODIS adaptive processing system and AERONET aerosol and water-vapor information. Then, it performs an atmospheric correction (AC) for about 100 AERONET sites based on accurate radiative-transfer theory with complex quality control of the input data. The ASRVN processing software consists of an L1B data gridding algorithm, a new cloud-mask (CM) algorithm based on a time-series analysis, and an AC algorithm using ancillary AERONET aerosol and water-vapor data. The AC is achieved by fitting the MODIS top-of-atmosphere measurements, accumulated for a 16-day interval, with theoretical reflectance parameterized in terms of the coefficients of the Li Sparse-Ross Thick (LSRT) model of the bidirectional reflectance factor (BRF). The ASRVN takes several steps to ensure high quality of results: 1) the filtering of opaque clouds by a CM algorithm; 2) the development of an aerosol filter to filter residual semitransparent and subpixel clouds, as well as cases with high inhomogeneity of aerosols in the processing area; 3) imposing the requirement of the consistency of the new solution with previously retrieved BRF and albedo; 4) rapid adjustment of the 16-day retrieval to the surface changes using the last day of measurements; and 5) development of a seasonal backup spectral BRF database to increase data coverage. The ASRVN provides a gapless or near-gapless coverage for the processing area. The gaps, caused by clouds, are filled most naturally with the latest solution for a given pixel.
The ASRVN products include three parameters of the LSRT model (kL, kG, and kV), surface albedo
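The 16-day LSRT fit described above is, at its core, a linear least-squares problem in the three kernel coefficients (kL, kG, kV). A sketch with stand-in kernel values (the real geometric and volumetric kernels are functions of sun/view geometry, which is omitted here; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in kernel values for 16 days of observation geometries.
f_geo = rng.uniform(-1.0, 0.0, 16)
f_vol = rng.uniform(-0.1, 0.6, 16)

# True coefficients (kL: isotropic, kG: geometric, kV: volumetric).
kL, kG, kV = 0.30, 0.05, 0.10
brf = kL + kG * f_geo + kV * f_vol + 0.002 * rng.normal(size=16)

# Least-squares fit of the three LSRT coefficients to the 16-day stack.
design = np.column_stack([np.ones(16), f_geo, f_vol])
coeffs, *_ = np.linalg.lstsq(design, brf, rcond=None)
```

Because the model is linear in its coefficients, the 16-day accumulation simply adds rows to the design matrix, which is what makes the rapid last-day adjustment mentioned in the abstract tractable.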

  5. Development and Validation of a Portable Platform for Deploying Decision-Support Algorithms in Prehospital Settings

    Science.gov (United States)

    Reisner, A. T.; Khitrov, M. Y.; Chen, L.; Blood, A.; Wilkins, K.; Doyle, W.; Wilcox, S.; Denison, T.; Reifman, J.

    2013-01-01

    Background: Advanced decision-support capabilities for prehospital trauma care may prove effective at improving patient care. Such functionality would be possible if an analysis platform were connected to a transport vital-signs monitor. In practice, there are technical challenges to implementing such a system. Not only must each individual component be reliable, but, in addition, the connectivity between components must be reliable. Objective: We describe the development, validation, and deployment of the Automated Processing of Physiologic Registry for Assessment of Injury Severity (APPRAISE) platform, intended to serve as a test bed to help evaluate the performance of decision-support algorithms in a prehospital environment. Methods: We describe the hardware selected and the software implemented, and the procedures used for laboratory and field testing. Results: The APPRAISE platform met performance goals in both laboratory testing (using a vital-sign data simulator) and initial field testing. After its field testing, the platform has been in use on Boston MedFlight air ambulances since February of 2010. Conclusion: These experiences may prove informative to other technology developers and to healthcare stakeholders seeking to invest in connected electronic systems for prehospital as well as in-hospital use. Our experiences illustrate two sets of important questions: are the individual components reliable (e.g., physical integrity, power, core functionality, and end-user interaction) and is the connectivity between components reliable (e.g., communication protocols and the metadata necessary for data interpretation)? While all potential operational issues cannot be fully anticipated and eliminated during development, thoughtful design and phased testing steps can reduce, if not eliminate, technical surprises. PMID:24155791

  6. Physical validation issue of the NEPTUNE two-phase modelling: validation plan to be adopted, experimental programs to be set up and associated instrumentation techniques developed

    International Nuclear Information System (INIS)

    Pierre Peturaud; Eric Hervieu

    2005-01-01

    Full text of publication follows: A long-term joint development program for the next generation of nuclear reactors simulation tools was launched in 2001 by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique). The NEPTUNE Project constitutes the Thermal-Hydraulics part of this comprehensive program. Along with the underway development of this new two-phase flow software platform, the physical validation of the involved modelling is a crucial issue, whatever the modelling scale is, and the present paper deals with this issue. After a brief recall about the NEPTUNE platform, the general validation strategy to be adopted is first of all clarified by means of three major features: (i) physical validation in close connection with the concerned industrial applications, (ii) involving (as far as possible) a two-step process successively focusing on dominant separate models and assessing the whole modelling capability, (iii) thanks to the use of relevant data with respect to the validation aims. Based on this general validation process, a four-step generic work approach has been defined; it includes: (i) a thorough analysis of the concerned industrial applications to identify the key physical phenomena involved and associated dominant basic models, (ii) an assessment of these models against the available validation pieces of information, to specify the additional validation needs and define dedicated validation plans, (iii) an inventory and assessment of existing validation data (with respect to the requirements specified in the previous task) to identify the actual needs for new validation data, (iv) the specification of the new experimental programs to be set up to provide the needed new data. This work approach has been applied to the NEPTUNE software, focusing on 8 high-priority industrial applications, and it has resulted in the definition of (i) the validation plan and experimental programs to be set up for the open medium 3D modelling

  7. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues.

    Science.gov (United States)

    Mourya, Devendra T; Yadav, Pragya D; Khare, Ajay; Khan, Anwar H

    2017-10-01

    With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being set up in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on certification and validation of biosafety laboratories. There is no accredited government/private agency available in India to undertake validation and certification of biosafety laboratories. Therefore, the reliance is mostly on indigenous experience, talent and expertise available, which is in short supply. This article elucidates the process of certification and validation of biosafety laboratories in a concise manner for the understanding of the concerned users and suggests the important parameters and criteria that should be considered and addressed during the laboratory certification and validation process.

  8. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues

    Directory of Open Access Journals (Sweden)

    Devendra T Mourya

    2017-01-01

    Full Text Available With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being set up in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on certification and validation of biosafety laboratories. There is no accredited government/private agency available in India to undertake validation and certification of biosafety laboratories. Therefore, the reliance is mostly on indigenous experience, talent and expertise available, which is in short supply. This article elucidates the process of certification and validation of biosafety laboratories in a concise manner for the understanding of the concerned users and suggests the important parameters and criteria that should be considered and addressed during the laboratory certification and validation process.

  9. Validation and evaluation of common large-area display set (CLADS) performance specification

    Science.gov (United States)

    Hermann, David J.; Gorenflo, Ronald L.

    1998-09-01

    Battelle is under contract with Warner Robins Air Logistics Center to design a Common Large Area Display Set (CLADS) for use in multiple Command, Control, Communications, Computers, and Intelligence (C4I) applications that currently use 19-inch Cathode Ray Tubes (CRTs). Battelle engineers have built and fully tested pre-production prototypes of the CLADS design for AWACS, and are completing pre-production prototype displays for three other platforms simultaneously. With the CLADS design, any display technology that can be packaged to meet the form, fit, and function requirements defined by the Common Large Area Display Head Assembly (CLADHA) performance specification is a candidate for CLADS applications. This technology-independent feature reduced the risk of CLADS development, permits lifelong technology insertion upgrades without unnecessary redesign, and addresses many of the obsolescence problems associated with COTS technology-based acquisition. Performance and environmental testing were performed on the AWACS CLADS and continue on other platforms as a part of the performance specification validation process. A simulator assessment and flight assessment were successfully completed for the AWACS CLADS, and lessons learned from these assessments are being incorporated into the performance specifications. Draft CLADS specifications were released to potential display integrators and manufacturers for review in 1997, and the final version of the performance specifications is scheduled to be released to display integrators and manufacturers in May 1998. Initial USAF applications include replacements for the E-3 AWACS color monitor assembly, E-8 Joint STARS graphics display unit, and ABCCC airborne color display. Initial U.S. Navy applications include the E-2C ACIS display. For these applications, reliability and maintainability are key objectives. The common design will reduce the cost of operation and maintenance by an estimated 3.3M per year on E-3 AWACS

  10. Establishing the Reliability and Validity of a Computerized Assessment of Children's Working Memory for Use in Group Settings

    Science.gov (United States)

    St Clair-Thompson, Helen

    2014-01-01

    The aim of the present study was to investigate the reliability and validity of a brief standardized assessment of children's working memory; "Lucid Recall." Although there are many established assessments of working memory, "Lucid Recall" is fully automated and can therefore be administered in a group setting. It is therefore…

  11. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues

    OpenAIRE

    Devendra T Mourya; Pragya D Yadav; Ajay Khare; Anwar H Khan

    2017-01-01

    With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being set up in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on certification and validation of biosafety laboratories. There is no ac...

  12. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
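A correction algorithm of this kind is, at its simplest, a linear recalibration of the device readout against chemical reference values. The sketch below is a generic illustration with invented numbers, not the algorithm validated in the study:

```python
import numpy as np

# Paired macronutrient measurements: device readout vs. chemical
# reference method (values invented for illustration, in g/dL).
device = np.array([2.1, 2.9, 3.4, 4.2, 4.9, 5.6])
reference = np.array([1.8, 2.5, 3.0, 3.7, 4.4, 5.0])

# Fit reference = slope * device + intercept by least squares.
slope, intercept = np.polyfit(device, reference, 1)

def correct(readout):
    """Apply the linear correction to a raw device reading."""
    return slope * readout + intercept

# Validation on an independent sample set would compare corrected
# readings against reference values, e.g. via the mean bias.
bias = np.mean(correct(device) - reference)
```

Validating on an independent sample set, as the title describes, guards against a correction that merely memorizes the calibration samples.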

  13. A Text Matching Method to Facilitate the Validation of Frequent Order Sets Obtained Through Data Mining

    OpenAIRE

    Che, Chengjian; Rocha, Roberto A.

    2006-01-01

    In order to compare order sets discovered using a data mining algorithm with existing order sets, we developed an order matching tool based on Oracle Text. The tool includes both automated searching and manual review processes. The comparison between the automated process and the manual review process indicates that the sensitivity of the automated matching is 81% and the specificity is 84%.
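
    The reported sensitivity (81%) and specificity (84%) follow from the standard confusion-matrix definitions. A minimal sketch, using hypothetical counts chosen only to reproduce those rates (the actual counts are not given in the record):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts that reproduce the reported rates.
sens, spec = sensitivity_specificity(tp=81, fn=19, tn=84, fp=16)
```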

  14. Development and validation of an Argentine set of facial expressions of emotion

    NARCIS (Netherlands)

    Vaiman, M.; Wagner, M.A.; Caicedo, E.; Pereno, G.L.

    2017-01-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion

  15. Prospective Validation of the Decalogue, a Set of Doctor-Patient Communication Recommendations to Improve Patient Illness Experience and Mood States within a Hospital Cardiologic Ambulatory Setting

    Directory of Open Access Journals (Sweden)

    Piercarlo Ballo

    2017-01-01

    Strategies to improve doctor-patient communication may have a beneficial impact on patient’s illness experience and mood, with potential favorable clinical effects. We prospectively tested the psychometric and clinical validity of the Decalogue, a tool utilizing 10 communication recommendations for patients and physicians. The Decalogue was administered to 100 consecutive patients referred for a cardiologic consultation, whereas 49 patients served as controls. The POMS-2 questionnaire was used to measure the total mood disturbance at the end of the consultation. Structural equation modeling showed high internal consistency (Cronbach alpha 0.93), good test-retest reproducibility, and high validity of the psychometric construct (all > 0.80), suggesting a positive effect on patients’ illness experience. The total mood disturbance was lower in the patients exposed to the Decalogue as compared to the controls (1.4±12.1 versus 14.8±27.6, p=0.0010). In an additional questionnaire, patients in the Decalogue group showed a trend towards a better understanding of their state of health (p=0.07). In a cardiologic ambulatory setting, the Decalogue shows good validity and reliability as a tool to improve patients’ illness experience and could have a favorable impact on mood states. These effects might potentially improve patient engagement in care and adherence to therapy, as well as clinical outcome.

  16. A validated set of tool pictures with matched objects and non-objects for laterality research.

    Science.gov (United States)

    Verma, Ark; Brysbaert, Marc

    2015-01-01

    Neuropsychological and neuroimaging research has established that knowledge related to tool use and tool recognition is lateralized to the left cerebral hemisphere. Recently, behavioural studies with the visual half-field technique have confirmed the lateralization. A limitation of this research was that different sets of stimuli had to be used for the comparison of tools to other objects and objects to non-objects. Therefore, we developed a new set of stimuli containing matched triplets of tools, other objects and non-objects. With the new stimulus set, we successfully replicated the findings of no visual field advantage for objects in an object recognition task combined with a significant right visual field advantage for tools in a tool recognition task. The set of stimuli is available as supplemental data to this article.

  17. Validity of the Perceived Health Competence Scale in a UK primary care setting.

    OpenAIRE

    Dempster, Martin; Donnelly, Michael

    2008-01-01

    The Perceived Health Competence Scale (PHCS) is a measure of self-efficacy regarding general health-related behaviour. This brief paper examines the psychometric properties of the PHCS in a UK context. Questionnaires containing the PHCS, the SF-36 and questions about perceived health needs were posted to 486 patients randomly selected from a GP practice list. Complete questionnaires were returned by 320 patients. Analyses of these responses provide strong evidence for the validity of the PHCS ...

  18. Validation of KENO, ANISN and Hansen-Roach cross-section set on plutonium oxide and metal fuel system

    International Nuclear Information System (INIS)

    Matsumoto, Tadakuni; Yumoto, Ryozo; Nakano, Koh.

    1980-01-01

    In the previous report, the authors discussed the validity of KENO, ANISN and the Hansen-Roach 16 group cross-section set on the critical plutonium nitrate solution systems with various geometries, absorbers and neutron interactions. The purpose of the present report is to examine the validity of the same calculation systems on the homogeneous plutonium oxide and plutonium-uranium mixed oxide fuels with various density values. Eleven experiments adopted for validation are summarized. The first six experiments were performed at Pacific Northwest Laboratory of Battelle Memorial Institute, and the remaining five at Los Alamos Scientific Laboratory. The characteristics of core fuel are given, and the isotopic composition of plutonium, the relation between H/(Pu + U) atomic ratio and fuel density as compared with the atomic ratios of PuO 2 and mixed oxides in powder storage and pellet fabrication processes, and critical core dimensions and reflector conditions are shown. The effective multiplication factors were calculated with the KENO code. In the case of the metal fuels with simple sphere geometry, additional calculations with the ANISN code were performed. The criticality calculation system composed of KENO, ANISN and the Hansen-Roach cross-section set was found to be valid for calculating the criticality of plutonium oxide, plutonium-uranium mixed oxide, plutonium metal and uranium metal fuel systems as well as of plutonium solution systems with various geometries, absorbers and neutron interactions. Some problems seem to remain in the method for evaluating experimental corrections. Some discussions follow. (Wakatsuki, Y.)

  19. The impact of crowd noise on officiating in Muay Thai: achieving external validity in an experimental setting

    Directory of Open Access Journals (Sweden)

    Tony D Myers

    2012-09-01

    Numerous factors have been proposed to explain the home advantage in sport. Several authors have suggested that a partisan home crowd enhances home advantage and that this is at least in part a consequence of their influence on officiating. However, while experimental studies examining this phenomenon have high levels of internal validity (since only the ‘crowd noise’ intervention is allowed to vary), they suffer from a lack of external validity, with decision-making in a laboratory setting typically bearing little resemblance to decision-making in live sports settings. Conversely, observational and quasi-experimental studies with high levels of external validity suffer from low levels of internal validity as countless factors besides crowd noise vary. The present study provides a unique opportunity to address these criticisms, by conducting a controlled experiment on the impact of crowd noise on officiating in a live tournament setting. Seventeen qualified judges officiated on thirty Thai boxing bouts in a live international tournament setting featuring ‘home’ and ‘away’ boxers. In each bout, judges were randomised into a ‘noise’ (live sound) or ‘no crowd noise’ (noise cancelling headphones and white noise) condition, resulting in 59 judgements in the ‘no crowd noise’ and 61 in the ‘crowd noise’ condition. The results provide the first experimental evidence of the impact of live crowd noise on officials in sport. A cross-classified statistical model indicated that crowd noise had a statistically significant impact, equating to just over half a point per bout (in the context of five round bouts with the ‘ten point must’ scoring system shared with professional boxing). The practical significance of the findings, their implications for officiating and for the future conduct of crowd noise studies are discussed.

  20. The impact of crowd noise on officiating in muay thai: achieving external validity in an experimental setting.

    Science.gov (United States)

    Myers, Tony; Balmer, Nigel

    2012-01-01

    Numerous factors have been proposed to explain the home advantage in sport. Several authors have suggested that a partisan home crowd enhances home advantage and that this is at least in part a consequence of their influence on officiating. However, while experimental studies examining this phenomenon have high levels of internal validity (since only the "crowd noise" intervention is allowed to vary), they suffer from a lack of external validity, with decision-making in a laboratory setting typically bearing little resemblance to decision-making in live sports settings. Conversely, observational and quasi-experimental studies with high levels of external validity suffer from low levels of internal validity as countless factors besides crowd noise vary. The present study provides a unique opportunity to address these criticisms, by conducting a controlled experiment on the impact of crowd noise on officiating in a live tournament setting. Seventeen qualified judges officiated on thirty Thai boxing bouts in a live international tournament setting featuring "home" and "away" boxers. In each bout, judges were randomized into a "noise" (live sound) or "no crowd noise" (noise-canceling headphones and white noise) condition, resulting in 59 judgments in the "no crowd noise" and 61 in the "crowd noise" condition. The results provide the first experimental evidence of the impact of live crowd noise on officials in sport. A cross-classified statistical model indicated that crowd noise had a statistically significant impact, equating to just over half a point per bout (in the context of five round bouts with the "10-point must" scoring system shared with professional boxing). The practical significance of the findings, their implications for officiating and for the future conduct of crowd noise studies are discussed.

  1. Validating the WHO maternal near miss tool: comparing high- and low-resource settings.

    Science.gov (United States)

    Witteveen, Tom; Bezstarosti, Hans; de Koning, Ilona; Nelissen, Ellen; Bloemenkamp, Kitty W; van Roosmalen, Jos; van den Akker, Thomas

    2017-06-19

    WHO proposed the WHO Maternal Near Miss (MNM) tool, classifying women according to several (potentially) life-threatening conditions, to monitor and improve quality of obstetric care. The objective of this study is to analyse merged data of one high- and two low-resource settings where this tool was applied and test whether the tool may be suitable for comparing severe maternal outcome (SMO) between these settings. Using three cohort studies that included SMO cases during two-year time frames in the Netherlands, Tanzania and Malawi, we reassessed all SMO cases (as defined by the original studies) with the WHO MNM tool (five disease-, four intervention- and seven organ dysfunction-based criteria). Main outcome measures were prevalence of MNM criteria and case fatality rates (CFR). A total of 3172 women were studied; 2538 (80.0%) from the Netherlands, 248 (7.8%) from Tanzania and 386 (12.2%) from Malawi. Total SMO detection was 2767 (87.2%) for disease-based criteria, 2504 (78.9%) for intervention-based criteria and 1211 (38.2%) for organ dysfunction-based criteria. Including every woman who received ≥1 unit of blood in low-resource settings as life-threatening, as defined by organ dysfunction criteria, led to more equally distributed populations. In one third of all Dutch and Malawian maternal death cases, organ dysfunction criteria could not be identified from medical records. Applying solely organ dysfunction-based criteria may lead to underreporting of SMO. Therefore, a tool based on defining MNM only upon establishing organ failure is of limited use for comparing settings with varying resources. In low-resource settings, lowering the threshold of transfused units of blood leads to a higher detection rate of MNM. We recommend refined disease-based criteria, accompanied by a limited set of intervention- and organ dysfunction-based criteria, to set a measure of severity.

  2. Validation of the Thermo Scientific SureTect Escherichia coli O157:H7 Real-Time PCR Assay for Raw Beef and Produce Matrixes.

    Science.gov (United States)

    Cloke, Jonathan; Crowley, Erin; Bird, Patrick; Bastin, Ben; Flannery, Jonathan; Agin, James; Goins, David; Clark, Dorn; Radcliff, Roy; Wickstrand, Nina; Kauppinen, Mikko

    2015-01-01

    The Thermo Scientific™ SureTect™ Escherichia coli O157:H7 Assay is a new real-time PCR assay which has been validated through the AOAC Research Institute (RI) Performance Tested Methods(SM) program for raw beef and produce matrixes. This validation study specifically validated the assay with 375 g 1:4 and 1:5 ratios of raw ground beef and raw beef trim in comparison to the U.S. Department of Agriculture, Food Safety Inspection Service, Microbiology Laboratory Guidebook (USDA-FSIS/MLG) reference method, and 25 g bagged spinach and fresh apple juice at a ratio of 1:10, in comparison to the reference method detailed in the International Organization for Standardization 16654:2001 reference method. For raw beef matrixes, the validation of both 1:4 and 1:5 allows user flexibility with the enrichment protocol, although the ratio chosen by the laboratory should be based on specific test requirements. All matrixes were analyzed by Thermo Fisher Scientific, Microbiology Division, Vantaa, Finland, and Q Laboratories Inc, Cincinnati, Ohio, in the method developer study. Two of the matrixes (raw ground beef at both 1:4 and 1:5 ratios) and bagged spinach were additionally analyzed in the AOAC-RI controlled independent laboratory study, which was conducted by Marshfield Food Safety, Marshfield, Wisconsin. Using probability of detection statistical analysis, no significant difference was demonstrated by the SureTect kit in comparison to the USDA-FSIS reference method for raw beef matrixes, or with the ISO reference method for matrixes of bagged spinach and apple juice. Inclusivity and exclusivity testing was conducted with 58 E. coli O157:H7 and 54 non-E. coli O157:H7 isolates, respectively, which demonstrated that the SureTect assay was able to detect all isolates of E. coli O157:H7 analyzed. In addition, all but one of the nontarget isolates were correctly interpreted as negative by the SureTect Software. The single isolate giving a positive result was an E

  3. BESST (Bochum Emotional Stimulus Set)--a pilot validation study of a stimulus set containing emotional bodies and faces from frontal and averted views.

    Science.gov (United States)

    Thoma, Patrizia; Soria Bauser, Denise; Suchan, Boris

    2013-08-30

    This article introduces the freely available Bochum Emotional Stimulus Set (BESST), which contains pictures of bodies and faces depicting either a neutral expression or one of the six basic emotions (happiness, sadness, fear, anger, disgust, and surprise), presented from two different perspectives (0° frontal view vs. camera averted by 45° to the left). The set comprises 565 frontal view and 564 averted view pictures of real-life bodies with masked facial expressions and 560 frontal and 560 averted view faces which were synthetically created using the FaceGen 3.5 Modeller. All stimuli were validated in terms of categorization accuracy and the perceived naturalness of the expression. Additionally, each facial stimulus was morphed into three age versions (20/40/60 years). The results show high recognition of the intended facial expressions, even under speeded forced-choice conditions, as corresponds to common experimental settings. The average naturalness ratings for the stimuli range between medium and high. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  4. Warsaw set of emotional facial expression pictures: a validation study of facial display photographs

    NARCIS (Netherlands)

    Olszanowski, M.; Pochwatko, G.; Kuklinski, K.; Scibor-Rylski, M.; Lewinski, P.; Ohme, R.K.

    2015-01-01

    Emotional facial expressions play a critical role in theories of emotion and figure prominently in research on almost every aspect of emotion. This article provides a background for a new database of basic emotional expressions. The goal in creating this set was to provide high quality photographs

  5. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.

  6. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach.

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-05

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.
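
    The strategy the two records above describe — eigenvalue-eigenvector decomposition applied to the pair-wise dissimilarity matrix instead of the covariance matrix — can be sketched as follows. This is not the authors' code; the two-group "spectra" are synthetic stand-ins for the cumin and non-cumin preparations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical groups of "spectra" (rows = samples, columns = wavelengths);
# the true class labels are the first 10 vs. the last 10 rows.
group_a = rng.normal(0.0, 0.1, size=(10, 50))
group_b = rng.normal(1.0, 0.1, size=(10, 50))
X = np.vstack([group_a, group_b])

# Pair-wise dissimilarity matrix (Euclidean distance between samples).
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)

# Eigenvalue-eigenvector decomposition of the symmetric dissimilarity matrix;
# sample scores on the leading eigenvectors act as low-dimensional coordinates.
eigvals, eigvecs = np.linalg.eigh(D)
order = np.argsort(np.abs(eigvals))[::-1]  # sort by eigenvalue magnitude
scores = eigvecs[:, order[:2]] * eigvals[order[:2]]
```

    For block-structured dissimilarities (small within-group, large between-group distances), one of the leading eigenvectors changes sign between the groups, so the score plot separates the classes even when covariance-based decomposition would not.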

  7. Date and acquaintance rape. Development and validation of a set of scales.

    Science.gov (United States)

    Walsh, J F; Devellis, B M; Devellis, R F

    1997-02-01

    Increasing recognition of the prevalence of date/acquaintance rape (DAR) in the US, especially among college women, has led to an understanding that the techniques needed to fend off attacks from friends and acquaintances differ from those used to prevent rape by strangers. This study developed and tested the reliability and validity of the following DAR constructs: perceived vulnerability (underestimation of vulnerability discourages adequate self-protection), self-efficacy, relational priority (neglecting self-interest to save a relationship), rape myth acceptance (subscribing to myths about rape allows women to avoid facing their own vulnerability), and commitment to self-defense. These constructs were also correlated with scales measuring masculinity, self-esteem, and degree of belief in a "just world." Data were gathered to test these constructs via a questionnaire administered to 800 female undergraduate dormitory residents (47% response rate). Analysis of the data allowed refinement of 50 items into 25 items that constitute reliable scales of perceived vulnerability, self-efficacy, and self-determination and a marginally reliable scale of victim-blaming (rape myth). Support was found for 5/6 predicted correlates between DAR scales and 3/5 hypothesized correlations between DAR scales and convergent/discriminant validity scales. Research into this rape prevention tool will continue.

  8. Development of a Reference Data Set (RDS) for dental age estimation (DAE) and testing of this with a separate Validation Set (VS) in a southern Chinese population.

    Science.gov (United States)

    Jayaraman, Jayakumar; Wong, Hai Ming; King, Nigel M; Roberts, Graham J

    2016-10-01

    Many countries have recently experienced a rapid increase in the demand for forensic age estimates of unaccompanied minors. Hong Kong is a major tourist and business center where there has been an increase in the number of people intercepted with false travel documents. An accurate estimation of age is only possible with a dataset for age estimation that has been derived from the corresponding ethnic population. Thus, the aim of this study was to develop and validate a Reference Data Set (RDS) for dental age estimation for southern Chinese. A total of 2306 subjects were selected from the patient archives of a large dental hospital and the chronological age for each subject was recorded. This age was assigned to each specific stage of dental development for each tooth to create a RDS. To validate this RDS, a further 484 subjects were randomly chosen from the patient archives and their dental age was assessed based on the scores from the RDS. Dental age was estimated using a meta-analysis command corresponding to a random-effects statistical model. Chronological age (CA) and Dental Age (DA) were compared using the paired t-test. The overall difference between the chronological and dental age (CA-DA) was 0.05 years (2.6 weeks) for males and 0.03 years (1.6 weeks) for females. The paired t-test indicated that there was no statistically significant difference between the chronological and dental age (p > 0.05). The validated southern Chinese reference dataset based on dental maturation accurately estimated the chronological age. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  9. Identification and Validation of a New Set of Five Genes for Prediction of Risk in Early Breast Cancer

    Directory of Open Access Journals (Sweden)

    Giorgio Mustacchi

    2013-05-01

    Molecular tests predicting the outcome of breast cancer patients based on gene expression levels can be used to assist in making treatment decisions after consideration of conventional markers. In this study we identified a subset of 20 mRNA differentially regulated in breast cancer analyzing several publicly available array gene expression data using the R/Bioconductor package. Using RT-qPCR, we evaluated 261 consecutive invasive breast cancer cases, not selected for age, adjuvant treatment, nodal or estrogen receptor status, from paraffin embedded sections. The biological samples dataset was split into a training set (137 cases) and a validation set (124 cases). The gene signature was developed on the training set and a multivariate stepwise Cox analysis selected five genes independently associated with DFS: FGF18 (HR = 1.13, p = 0.05), BCL2 (HR = 0.57, p = 0.001), PRC1 (HR = 1.51, p = 0.001), MMP9 (HR = 1.11, p = 0.08), SERF1a (HR = 0.83, p = 0.007). These five genes were combined into a linear score (signature) weighted according to the coefficients of the Cox model, as: 0.125FGF18 − 0.560BCL2 + 0.409PRC1 + 0.104MMP9 − 0.188SERF1A (HR = 2.7, 95% CI = 1.9–4.0, p < 0.001). The signature was then evaluated on the validation set assessing the discrimination ability by a Kaplan-Meier analysis, using the same cut offs classifying patients at low, intermediate or high risk of disease relapse as defined on the training set (p < 0.001). Our signature, after a further clinical validation, could be proposed as a prognostic signature for disease free survival in breast cancer patients where the indication for adjuvant chemotherapy added to endocrine treatment is uncertain.
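
    The signature reduces to a weighted sum of the five gene expression levels using the published Cox coefficients. A minimal sketch; the patient expression values below are hypothetical and the expression scale (e.g. normalized RT-qPCR values) is an assumption:

```python
# Coefficients of the published five-gene Cox signature (gene -> weight).
WEIGHTS = {
    "FGF18": 0.125,
    "BCL2": -0.560,
    "PRC1": 0.409,
    "MMP9": 0.104,
    "SERF1A": -0.188,
}

def risk_score(expression):
    """Linear signature score: sum of weight * expression level per gene."""
    return sum(w * expression[g] for g, w in WEIGHTS.items())

# Hypothetical expression values for one patient.
patient = {"FGF18": 2.0, "BCL2": 1.5, "PRC1": 3.0, "MMP9": 1.0, "SERF1A": 2.5}
score = risk_score(patient)
```

    In the study, cut-offs on this score (established on the training set) stratify patients into low-, intermediate- and high-risk groups; the cut-off values themselves are not given in the record.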

  10. Validation Study of a Predictive Algorithm to Evaluate Opioid Use Disorder in a Primary Care Setting

    Science.gov (United States)

    Sharma, Maneesh; Lee, Chee; Kantorovich, Svetlana; Tedtaotao, Maria; Smith, Gregory A.

    2017-01-01

    Background: Opioid abuse in chronic pain patients is a major public health issue. Primary care providers are frequently the first to prescribe opioids to patients suffering from pain, yet do not always have the time or resources to adequately evaluate the risk of opioid use disorder (OUD). Purpose: This study seeks to determine the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm (“profile”) incorporating phenotypic and, more uniquely, genotypic risk factors. Methods and Results: In a validation study with 452 participants diagnosed with OUD and 1237 controls, the algorithm successfully categorized patients at high and moderate risk of OUD with 91.8% sensitivity. Regardless of changes in the prevalence of OUD, sensitivity of the algorithm remained >90%. Conclusion: The algorithm correctly stratifies primary care patients into low-, moderate-, and high-risk categories to appropriately identify patients in need for additional guidance, monitoring, or treatment changes. PMID:28890908

  11. Adaption and validation of the Safety Attitudes Questionnaire for the Danish hospital setting

    DEFF Research Database (Denmark)

    Kristensen, Solvejg; Sabroe, Svend; Bartels, Paul

    2015-01-01

    PURPOSE: Measuring and developing a safe culture in health care is a focus point in creating highly reliable organizations that succeed in avoiding patient safety incidents where these could normally be expected. Questionnaires can be used to capture a snapshot of an employee's perceptions of patient safety culture. A commonly used instrument to measure safety climate is the Safety Attitudes Questionnaire (SAQ). The purpose of this study was to adapt the SAQ for use in Danish hospitals, assess its construct validity and reliability, and present benchmark data. MATERIALS AND METHODS: The SAQ was tested in a cross-sectional study. Goodness-of-fit indices from confirmatory factor analysis were reported along with inter-item correlations, Cronbach's alpha (α), and item and subscale scores. RESULTS: Participation was 73.2% (N=925) of invited health care workers. Goodness-of-fit indices from

  12. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    Science.gov (United States)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Spadinger, Ingrid

    2016-01-01

    The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest.

  13. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    International Nuclear Information System (INIS)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J; Spadinger, Ingrid

    2016-01-01

    The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest. (paper)
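
    Of the registration methods compared in these two records, iterative closest point is the simplest to illustrate. The following is a minimal rigid 2D ICP sketch (brute-force nearest-neighbour matching alternated with a Kabsch/SVD fit), not the ICP-finite implementation used in the study; the "contour" is a hypothetical circle of points, rotated and shifted.

```python
import numpy as np

def icp(source, target, iterations=20):
    """Minimal rigid ICP: alternate nearest-neighbour matching with a
    Kabsch/SVD fit for the best rotation and translation."""
    src = source.copy()
    for _ in range(iterations):
        # Brute-force nearest-neighbour correspondences.
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # Best-fit rigid transform mapping src onto matched (Kabsch).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = src @ R.T + (mu_t - R @ mu_s)
    return src

def nn_rms(a, b):
    """RMS of each point's distance to its nearest neighbour in b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2).min(axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical "contour": points on a circle, then rotated and shifted.
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
target = np.c_[np.cos(theta), np.sin(theta)]
angle = 0.3
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle), np.cos(angle)]])
source = target @ rot.T + np.array([0.5, -0.2])

aligned = icp(source, target)
err_before, err_after = nn_rms(source, target), nn_rms(aligned, target)
```

    Rigid ICP of this kind cannot model the bladder-wall deformation that motivated the study; the deformable methods (CPD, TPS-RPM) extend the correspondence step with soft assignments and non-rigid transform models.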

  14. Development and validation of a set of German stimulus- and target words for an attachment related semantic priming paradigm.

    Directory of Open Access Journals (Sweden)

    Anke Maatz

    Experimental research in adult attachment theory is faced with the challenge to adequately activate the adult attachment system. In view of the multitude of methods employed for this purpose so far, this paper suggests to further make use of the methodological advantages of semantic priming. In order to enable the use of such a paradigm in a German speaking context, a set of German words belonging to the semantic categories 'interpersonal closeness', 'interpersonal distance' and 'neutral' were identified and their semantics were validated combining production and rating methods. 164 university students answered corresponding online questionnaires. Ratings were analysed using analysis of variance (ANOVA) and cluster analysis, from which three clearly distinct groups emerged. Beyond providing validated stimulus- and target words which can be used to activate the adult attachment system in a semantic priming paradigm, the results of this study point to important links between attachment and stress which call for further investigation in the future.

  15. The Svalbard study 1988-89: a unique setting for validation of self-reported alcohol consumption.

    Science.gov (United States)

    Høyer, G; Nilssen, O; Brenn, T; Schirmer, H

    1995-04-01

    The Norwegian island of Spitzbergen, Svalbard offers a unique setting for validation studies of self-reported alcohol consumption. No counterfeit production or illegal import exists, making complete registration of all sources of alcohol possible. In this study we recorded sales from all agencies selling alcohol on Svalbard over a 2-month period in 1988. During the same period all adults living permanently on Svalbard were invited to take part in a health screening. As part of the screening, a self-administered questionnaire on alcohol consumption was introduced to the participants. We found that the self-reported volume accounted for approximately 40 percent of the sales volume. Because of the unique situation on Svalbard, the estimate made in this study is believed to be more reliable than those of other studies using sales volume to validate self-reports.

  16. A generic validation methodology and its application to a set of multi-axial creep damage constitutive equations

    International Nuclear Information System (INIS)

    Xu Qiang

    2005-01-01

    A generic validation methodology for a set of multi-axial creep damage constitutive equations is proposed, and its use is illustrated with 0.5Cr0.5Mo0.25V ferritic steel, which exhibits brittle, intergranular rupture. The objective of this research is to develop a methodology for systematically assessing the quality of a set of multi-axial creep damage constitutive equations in order to ensure their general applicability. This work adopted a total quality assurance approach, expanded into a four-stage procedure (Theories and Fundamentals, Parameter Identification, Proportional Load, and Non-proportional Load). Its use is illustrated with 0.5Cr0.5Mo0.25V ferritic steel; this material was chosen for its industrial importance, the popular use of KRH-type constitutive equations, and the available qualitative experimental data, including the damage distribution from notched bar tests. The validation exercise clearly revealed deficiencies in the KRH formulation (in terms of the mathematics and physics of damage mechanics) and its inability to predict creep deformation accurately. Consequently, caution is warranted in its use, which is particularly important given its wide use in the literature. This work contributes to understanding the rationale for the formulation and the quality assurance of a set of constitutive equations in creep damage mechanics, as well as in damage mechanics generally. (authors)
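KRH-type equation sets of the kind assessed in this record couple the creep strain rate to a scalar damage variable that grows toward rupture. As a purely illustrative sketch (a uniaxial Kachanov-Rabotnov form with invented exponents and coefficients, not the authors' multi-axial formulation):

```python
def integrate_creep_damage(sigma, A, n, B, chi, phi, dt, t_end):
    """Forward-Euler integration of a uniaxial Kachanov-Rabotnov model:
        d(eps)/dt = A * sigma**n   / (1 - w)**n
        d(w)/dt   = B * sigma**chi / (1 - w)**phi
    Returns accumulated creep strain, damage w, and elapsed time
    (w -> 1 signals rupture)."""
    eps = w = t = 0.0
    while t < t_end and w < 0.99:
        eps += A * sigma ** n / (1.0 - w) ** n * dt
        w = min(1.0, w + B * sigma ** chi / (1.0 - w) ** phi * dt)
        t += dt
    return eps, w, t

# Invented parameters: stress 100 (MPa), coefficients chosen only to
# demonstrate tertiary-creep acceleration as damage accumulates.
eps, w, t_rupture = integrate_creep_damage(100.0, 1e-10, 4, 1e-9, 4, 4, 1.0, 1000.0)
```

The damage term in the strain-rate equation is what produces the accelerating tertiary stage; validating such a set against multi-axial and non-proportional data is exactly the point of the methodology described above.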

  17. Construct Validity of Medical Clinical Competence Measures: A Multitrait-Multimethod Matrix Study Using Confirmatory Factor Analysis.

    Science.gov (United States)

    Forsythe, George B.; And Others

    1986-01-01

    Construct validity was investigated for three tests of clinical competence in medicine: National Board of Medical Examiners examination (NBME), California Psychological Inventory (CPI), and Resident Evaluation Form (REF). Scores from 166 residents were analyzed. Results suggested low construct validity for CPI and REF scales, and moderate…

  18. Validation of secondary commercial data sources for physical activity facilities in urban and nonurban settings.

    Science.gov (United States)

    Han, Euna; Powell, Lisa; Slater, Sandy; Quinn, Christopher

    2012-11-01

    Secondary data are often necessary to assess the availability of commercial physical activity (PA) facilities and examine its association with individual behaviors and outcomes, yet the validity of such sources has been explored in only a limited number of studies. Field data were collected on the presence and attributes of commercial PA facilities in a random sample of 30 urban, 15 suburban, and 15 rural Census tracts in the Chicago metropolitan statistical area and surrounding area. Approximately 40% of PA establishments in the field data were listed for both urban and nonurban tracts in both commercial lists, Dun & Bradstreet (D&B) and InfoUSA, except for nonurban tracts in D&B (35%); coverage was significantly improved in the combined D&B and InfoUSA list. Approximately one-quarter of the PA facilities listed in D&B were found on the ground, whereas 40% to 50% of PA facilities listed in InfoUSA were found on the ground. PA establishments that offered instruction programs or lessons or that had a court or pool were less likely to be listed, particularly in the nonurban tracts. Secondary commercial business lists on PA facilities should be used with caution in assessing the built environment.

  19. Using digital photography in a clinical setting: a valid, accurate, and applicable method to assess food intake.

    Science.gov (United States)

    Winzer, Eva; Luger, Maria; Schindler, Karin

    2018-06-01

    Regular monitoring of food intake is hardly integrated in clinical routine. Therefore, the aim was to examine the validity, accuracy, and applicability of an appropriate and also quick and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method with a picture after the meal and the pre-postMeal method with a picture before and after the meal, and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods were examined, along with food waste, macronutrient composition, and energy content. Tests of validity revealed stronger correlations for the photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) compared to the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and also smaller overestimation and underestimation. This method accurately and precisely estimated portion sizes in all food items. Furthermore, the total food waste was 22% for lunch over the study period. The highest food waste was observed in salads and the lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable for monitoring food intake in a clinical setting, enabling quantitative and qualitative dietary assessment. Thus, nutritional care might be initiated earlier. This method might also be advantageous for quantitative and qualitative evaluation of food waste, with a resulting reduction in costs.
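Validity statistics of the kind reported in this record (correlation against weighed records, bias in grams) take only a few lines to compute. A sketch with invented portion weights, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between estimated and reference values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

def mean_bias(estimates, reference):
    """Mean signed difference (estimate - reference), here in grams."""
    return sum(e - r for e, r in zip(estimates, reference)) / len(estimates)

photo_g = [150.0, 80.0, 210.0, 60.0]    # hypothetical photo-based estimates
weighed_g = [155.0, 75.0, 205.0, 65.0]  # corresponding weighed records (WFR)
```

A bias near zero with high correlation is what distinguishes the pre-postMeal method in the abstract above.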

  20. Norming the odd: creation, norming, and validation of a stimulus set for the study of incongruities across music and language.

    Science.gov (United States)

    Featherstone, Cara R; Waterman, Mitch G; Morrison, Catriona M

    2012-03-01

    Research into similarities between music and language processing is currently experiencing a strong renewed interest. Recent methodological advances have led to neuroimaging studies presenting striking similarities between neural patterns associated with the processing of music and language--notably, in the study of participants' responses to elements that are incongruous with their musical or linguistic context. Responding to a call for greater systematicity by leading researchers in the field of music and language psychology, this article describes the creation, selection, and validation of a set of auditory stimuli in which both congruence and resolution were manipulated in equivalent ways across harmony, rhythm, semantics, and syntax. Three conditions were created by changing the contexts preceding and following musical and linguistic incongruities originally used for effect by authors and composers: Stimuli in the incongruous-resolved condition reproduced the original incongruity and resolution into the same context; stimuli in the incongruous-unresolved condition reproduced the incongruity but continued postincongruity with a new context dictated by the incongruity; and stimuli in the congruous condition presented the same element of interest, but the entire context was adapted to match it so that it was no longer incongruous. The manipulations described in this article rendered unrecognizable the original incongruities from which the stimuli were adapted, while maintaining ecological validity. The norming procedure and validation study resulted in a significant increase in perceived oddity from congruous to incongruous-resolved and from incongruous-resolved to incongruous-unresolved in all four components of music and language, making this set of stimuli a theoretically grounded and empirically validated resource for this growing area of research.

  1. Spanish translation and cross-language validation of a sleep habits questionnaire for use in clinical and research settings.

    Science.gov (United States)

    Baldwin, Carol M; Choi, Myunghan; McClain, Darya Bonds; Celaya, Alma; Quan, Stuart F

    2012-04-15

    To translate, back-translate and cross-language validate (English/Spanish) the Sleep Heart Health Study Sleep Habits Questionnaire for use with Spanish-speakers in clinical and research settings. Following rigorous translation and back-translation, this cross-sectional cross-language validation study recruited bilingual participants from academic, clinic, and community-based settings (N = 50; 52% women; mean age 38.8 ± 12 years; 90% of Mexican heritage). Participants completed English and Spanish versions of the Sleep Habits Questionnaire, the Epworth Sleepiness Scale, and the Acculturation Rating Scale for Mexican Americans II one week apart in randomized order. Psychometric properties were assessed, including internal consistency, convergent validity, scale equivalence, language version intercorrelations, and exploratory factor analysis using PASW (Version 18) software. Grade-level readability of the sleep measure was evaluated. All sleep categories (duration, snoring, apnea, insomnia symptoms, other sleep symptoms, sleep disruptors, restless legs syndrome) showed Cronbach α, Spearman-Brown coefficients and intercorrelations ≥ 0.700, suggesting robust internal consistency, correlation, and agreement between language versions. The Epworth correlated significantly with snoring, apnea, sleep symptoms, restless legs, and sleep disruptors on both versions, supporting convergent validity. Items loaded on 4 factors that accounted for 68% and 67% of the variance on the English and Spanish versions, respectively. The Spanish-language Sleep Habits Questionnaire demonstrates conceptual and content equivalency. It has appropriate measurement properties and should be useful for assessing sleep health in community-based clinics and intervention studies among Spanish-speaking Mexican Americans. Both language versions showed readability at the fifth-grade level. Further testing is needed with larger samples.
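Internal-consistency figures such as the Cronbach α values above follow from the standard formula (the ratio of summed item variances to total-score variance). A self-contained sketch with made-up item scores:

```python
def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of item-score columns,
    one column per questionnaire item, same respondents in each."""
    k = len(items)
    n = len(items[0])

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(pvar(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1.0 - item_var_sum / pvar(totals))

# Three perfectly parallel items -> alpha = 1.0 (illustrative scores only)
scores = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
```

Values ≥ 0.700, as reported in the abstract, are the conventional threshold for acceptable internal consistency.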

  2. Phenotypic identification of Porphyromonas gingivalis validated with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry

    NARCIS (Netherlands)

    Rams, Thomas E; Sautter, Jacqueline D; Getreu, Adam; van Winkelhoff, Arie J

    OBJECTIVE: Porphyromonas gingivalis is a major bacterial pathogen in human periodontitis. This study used matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry to assess the accuracy of a rapid phenotypic identification scheme for detection of cultivable P. gingivalis.

  3. Detection of Q-Matrix Misspecification Using Two Criteria for Validation of Cognitive Structures under the Least Squares Distance Model

    Science.gov (United States)

    Romero, Sonia J.; Ordoñez, Xavier G.; Ponsoda, Vincente; Revuelta, Javier

    2014-01-01

    Cognitive Diagnostic Models (CDMs) aim to provide information about the degree to which individuals have mastered specific attributes that underlie their success on test items. The Q-matrix is a key element in the application of CDMs, because it contains the item-attribute links representing the cognitive structure proposed for solving…

  4. A reference data set for validating vapor pressure measurement techniques: homologous series of polyethylene glycols

    Science.gov (United States)

    Krieger, Ulrich K.; Siegrist, Franziska; Marcolli, Claudia; Emanuelsson, Eva U.; Gøbel, Freya M.; Bilde, Merete; Marsh, Aleksandra; Reid, Jonathan P.; Huisman, Andrew J.; Riipinen, Ilona; Hyttinen, Noora; Myllys, Nanna; Kurtén, Theo; Bannan, Thomas; Percival, Carl J.; Topping, David

    2018-01-01

    To predict atmospheric partitioning of organic compounds between gas and aerosol particle phase based on explicit models for gas phase chemistry, saturation vapor pressures of the compounds need to be estimated. Estimation methods based on functional group contributions require training sets of compounds with well-established saturation vapor pressures. However, vapor pressures of semivolatile and low-volatility organic molecules at atmospheric temperatures reported in the literature often differ by several orders of magnitude between measurement techniques. These discrepancies exceed the stated uncertainty of each technique which is generally reported to be smaller than a factor of 2. At present, there is no general reference technique for measuring saturation vapor pressures of atmospherically relevant compounds with low vapor pressures at atmospheric temperatures. To address this problem, we measured vapor pressures with different techniques over a wide temperature range for intercomparison and to establish a reliable training set. We determined saturation vapor pressures for the homologous series of polyethylene glycols (H-(O-CH2-CH2)n-OH) for n = 3 to n = 8 ranging in vapor pressure at 298 K from 10^-7 to 5×10^-2 Pa and compare them with quantum chemistry calculations. Such a homologous series provides a reference set that covers several orders of magnitude in saturation vapor pressure, allowing a critical assessment of the lower limits of detection of vapor pressures for the different techniques as well as permitting the identification of potential sources of systematic error. Also, internal consistency within the series allows outlying data to be rejected more easily. Most of the measured vapor pressures agreed within the stated uncertainty range. Deviations mostly occurred for vapor pressure values approaching the lower detection limit of a technique. The good agreement between the measurement techniques (some of which are sensitive to the mass
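Vapor pressures measured over a temperature range are commonly extrapolated to 298 K by fitting the integrated Clausius-Clapeyron form ln p = A - B/T (assuming a temperature-independent enthalpy of vaporization). A sketch with synthetic data, not the paper's measurements:

```python
import math

def fit_clausius_clapeyron(temps_K, pressures_Pa):
    """Least-squares fit of ln p = A - B/T; returns (A, B).
    Under the constant-enthalpy assumption, B = dH_vap / R."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(p) for p in pressures_Pa]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, -slope

def vapor_pressure(T, A, B):
    """Extrapolated saturation vapor pressure (Pa) at temperature T (K)."""
    return math.exp(A - B / T)

# Synthetic "measurements" generated from A = 20, B = 8000 K
temps = [300.0, 310.0, 320.0, 330.0, 340.0]
ps = [math.exp(20.0 - 8000.0 / T) for T in temps]
A, B = fit_clausius_clapeyron(temps, ps)
p298 = vapor_pressure(298.0, A, B)
```

The constant-enthalpy assumption is itself a source of systematic error when extrapolating far below the measurement range, which is one reason the intercomparison above is valuable.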

  5. Validity of verbal autopsy method to determine causes of death among adults in the urban setting of Ethiopia

    Science.gov (United States)

    2012-01-01

    Background Verbal autopsy has been widely used to estimate causes of death in settings with inadequate vital registries, but little is known about its validity. This analysis was part of the Addis Ababa Mortality Surveillance Program to examine the validity of verbal autopsy for determining causes of death compared with hospital medical records among adults in the urban setting of Ethiopia. Methods This validation study consisted of comparison of the verbal autopsy final diagnosis with the hospital diagnosis taken as a “gold standard”. In public and private hospitals of Addis Ababa, 20,152 adult deaths (15 years and above) were recorded between 2007 and 2010. Within the same period, a verbal autopsy was conducted for 4,776 adult deaths, of which 1,356 had occurred in Addis Ababa hospitals. The verbal autopsy and hospital data sets were then merged using the variables: full name of the deceased, sex, address, age, place and date of death. We calculated sensitivity, specificity and positive predictive values with 95% confidence intervals. Results After merging, a total of 335 adult deaths were captured. For communicable diseases, the values of sensitivity, specificity and positive predictive value of the verbal autopsy diagnosis were 79%, 78% and 68%, respectively. For non-communicable diseases, sensitivity of the verbal autopsy diagnoses was 69%, specificity 78% and positive predictive value 79%. Regarding injury, sensitivity of the verbal autopsy diagnoses was 70%, specificity 98% and positive predictive value 83%. Higher sensitivity was achieved for HIV/AIDS and tuberculosis, but lower specificity with relatively more false positives. Conclusion These findings may indicate the potential of verbal autopsy to provide cost-effective information to guide policy on the double burden of communicable and non-communicable diseases among adults in Ethiopia. Thus, a well structured verbal autopsy method, followed by qualified physician reviews, could be capable of providing reasonable cause
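Sensitivity, specificity, and positive predictive value (with normal-approximation 95% confidence intervals) follow directly from a merged 2×2 table of index test versus gold standard. A sketch with hypothetical counts, not the study's actual table:

```python
import math

def proportion_ci(k, n, z=1.96):
    """Proportion k/n with a normal-approximation 95% CI."""
    p = k / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

def diagnostic_accuracy(tp, fp, fn, tn):
    """Accuracy statistics for an index test (e.g. verbal autopsy)
    against a 'gold standard' (e.g. hospital diagnosis)."""
    return {
        "sensitivity": proportion_ci(tp, tp + fn),
        "specificity": proportion_ci(tn, tn + fp),
        "ppv": proportion_ci(tp, tp + fp),
    }

# Hypothetical 2x2 counts for one cause-of-death category
stats = diagnostic_accuracy(tp=79, fp=37, fn=21, tn=133)
```

Each entry is a (point estimate, lower bound, upper bound) triple; the normal approximation is adequate for the cell counts used here but exact intervals are preferable for small cells.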

  6. Validity of verbal autopsy method to determine causes of death among adults in the urban setting of Ethiopia

    Directory of Open Access Journals (Sweden)

    Misganaw Awoke

    2012-08-01

    Full Text Available Abstract Background Verbal autopsy has been widely used to estimate causes of death in settings with inadequate vital registries, but little is known about its validity. This analysis was part of the Addis Ababa Mortality Surveillance Program to examine the validity of verbal autopsy for determining causes of death compared with hospital medical records among adults in the urban setting of Ethiopia. Methods This validation study consisted of comparison of the verbal autopsy final diagnosis with the hospital diagnosis taken as a “gold standard”. In public and private hospitals of Addis Ababa, 20,152 adult deaths (15 years and above) were recorded between 2007 and 2010. Within the same period, a verbal autopsy was conducted for 4,776 adult deaths, of which 1,356 had occurred in Addis Ababa hospitals. The verbal autopsy and hospital data sets were then merged using the variables: full name of the deceased, sex, address, age, place and date of death. We calculated sensitivity, specificity and positive predictive values with 95% confidence intervals. Results After merging, a total of 335 adult deaths were captured. For communicable diseases, the values of sensitivity, specificity and positive predictive value of the verbal autopsy diagnosis were 79%, 78% and 68%, respectively. For non-communicable diseases, sensitivity of the verbal autopsy diagnoses was 69%, specificity 78% and positive predictive value 79%. Regarding injury, sensitivity of the verbal autopsy diagnoses was 70%, specificity 98% and positive predictive value 83%. Higher sensitivity was achieved for HIV/AIDS and tuberculosis, but lower specificity with relatively more false positives. Conclusion These findings may indicate the potential of verbal autopsy to provide cost-effective information to guide policy on the double burden of communicable and non-communicable diseases among adults in Ethiopia. Thus, a well structured verbal autopsy method, followed by qualified physician reviews, could be capable of

  7. A high confidence, manually validated human blood plasma protein reference set

    DEFF Research Database (Denmark)

    Schenk, Susann; Schoenhals, Gary J; de Souza, Gustavo

    2008-01-01

    BACKGROUND: The immense diagnostic potential of human plasma has prompted great interest and effort in cataloging its contents, exemplified by the Human Proteome Organization (HUPO) Plasma Proteome Project (PPP) pilot project. Due to challenges in obtaining a reliable blood plasma protein list......-trap-Fourier transform (LTQ-FT) and a linear ion trap-Orbitrap (LTQ-Orbitrap) for mass spectrometry (MS) analysis. Both instruments allow the measurement of peptide masses in the low ppm range. Furthermore, we employed a statistical score that allows database peptide identification searching using the products of two...... consecutive stages of tandem mass spectrometry (MS3). The combination of MS3 with very high mass accuracy in the parent peptide allows peptide identification with orders of magnitude more confidence than that typically achieved. RESULTS: Herein we established a high confidence set of 697 blood plasma proteins...

  8. Validation of Ogawa passive samplers for the determination of gaseous ammonia concentrations in agricultural settings

    Science.gov (United States)

    Roadman, M. J.; Scudlark, J. R.; Meisinger, J. J.; Ullman, W. J.

    The Ogawa passive sampler (Ogawa USA, Pompano Beach, Florida) is a useful tool for monitoring atmospheric ammonia (NH3(g)) concentrations and assessing the effects of agricultural waste management practices on NH3(g) emissions. The Ogawa sampler, with filter-discs impregnated with citric acid, was used to trap and determine NH3(g) concentrations in a variety of agricultural settings. A wide range of NH3(g) concentrations can be monitored by varying the sampler exposure time, provided that no more than ~10 μg of NH3-N are adsorbed on the acid-coated filters. Concentrations less than 1 μg NH3-N m^-3 can be detected using long deployments (≤14 days), while concentrations as great as 10 mg NH3-N m^-3 may be determined in very short (e.g. 5 min) deployments. Reproducibility ranged from 5% to 10% over the range of concentrations studied and passive determinations of NH3(g) were similar to those determined using dilute-acid gas scrubbers. Background levels of NH3(g) at a non-agricultural site in southern Delaware were typically <1 μg NH3-N m^-3. The air entering a chicken house was 10 μg NH3-N m^-3, reflecting the background levels in agricultural settings in this region. Within the house, concentrations ≤8.5 mg NH3-N m^-3 were observed, reflecting the high rates of NH3(g) emission from chicken excreta. Using measured NH3(g) concentrations and poultry house ventilation rates, we estimate that each broiler grown to production size over 6 weeks contributes approximately 19±3 g of NH3-N to the atmosphere, a value consistent with other published results.
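Passive samplers convert an adsorbed analyte mass into a time-averaged concentration by dividing by an effective sampling rate and the deployment time. A sketch of that calculation (the sampling rate below is a placeholder, not the Ogawa device's calibrated value):

```python
def passive_sampler_conc(mass_ug, sampling_rate_m3_per_min, minutes):
    """Time-averaged gas concentration (ug m^-3) from the analyte mass
    (ug) adsorbed on a passive sampler over a deployment."""
    return mass_ug / (sampling_rate_m3_per_min * minutes)

# 10 ug NH3-N collected over a 14-day deployment, with a hypothetical
# effective sampling rate of 3e-5 m^3/min
conc = passive_sampler_conc(10.0, 3.0e-5, 14 * 24 * 60)
```

This inverse relationship between deployment time and detectable concentration is why the abstract pairs long deployments with sub-μg m^-3 levels and minutes-long deployments with mg m^-3 levels.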

  9. Validation and Application of Models to Predict Facemask Influenza Contamination in Healthcare Settings

    Science.gov (United States)

    Fisher, Edward M.; Noti, John D.; Lindsley, William G.; Blachere, Francoise M.; Shaffer, Ronald E.

    2015-01-01

    Facemasks are part of the hierarchy of interventions used to reduce the transmission of respiratory pathogens by providing a barrier. Two types of facemasks used by healthcare workers are N95 filtering facepiece respirators (FFRs) and surgical masks (SMs). These can become contaminated with respiratory pathogens during use, thus serving as potential sources for transmission. However, because of the lack of field studies, the hazard associated with pathogen-exposed facemasks is unknown. A mathematical model was used to calculate the potential influenza contamination of facemasks from aerosol sources in various exposure scenarios. The aerosol model was validated with data from previous laboratory studies using facemasks mounted on headforms in a simulated healthcare room. The model was then used to estimate facemask contamination levels in three scenarios generated with input parameters from the literature. A second model estimated facemask contamination from a cough. It was determined that contamination levels from a single cough (≈19 viruses) were much less than likely levels from aerosols (4,473 viruses on FFRs and 3,476 viruses on SMs). For aerosol contamination, a range of input values from the literature resulted in wide variation in estimated facemask contamination levels (13–202,549 viruses), depending on the values selected. Overall, these models and estimates for facemask contamination levels can be used to inform infection control practice and research related to the development of better facemasks, to characterize airborne contamination levels, and to assist in assessment of risk from reaerosolization and fomite transfer because of handling and reuse of contaminated facemasks. PMID:24593662
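The contamination estimates above come from the authors' own aerosol and cough models; a deliberately simplified stand-in shows the general form such a calculation takes (the well-mixed exposure model and all parameter values here are hypothetical, not the published model):

```python
def facemask_virus_load(c_air, flow_m3_per_h, hours, capture_eff):
    """Viruses deposited on a mask, estimated as airborne concentration
    (viruses/m^3) x air drawn through the mask (m^3) x the fraction the
    filter captures. A simplified well-mixed sketch, not the paper's model."""
    return c_air * flow_m3_per_h * hours * capture_eff

# Hypothetical scenario: 100 viruses/m^3, 0.5 m^3/h through the mask,
# an 8-hour shift, 50% capture efficiency
load = facemask_virus_load(100.0, 0.5, 8.0, 0.5)
```

Even this crude form makes the abstract's point visible: cumulative aerosol exposure over a shift dwarfs the deposition from a single cough.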

  10. Validation of Nurse Practitioner Primary Care Organizational Climate Questionnaire: A New Tool to Study Nurse Practitioner Practice Settings.

    Science.gov (United States)

    Poghosyan, Lusine; Chaplin, William F; Shaffer, Jonathan A

    2017-04-01

    Favorable organizational climate in primary care settings is necessary to expand the nurse practitioner (NP) workforce and promote their practice. Only one NP-specific tool, the Nurse Practitioner Primary Care Organizational Climate Questionnaire (NP-PCOCQ), measures NP organizational climate. We confirmed NP-PCOCQ's factor structure and established its predictive validity. A cross-sectional survey design was used to collect data from 314 NPs in Massachusetts in 2012. Confirmatory factor analysis and regression models were used. The 4-factor model characterized NP-PCOCQ. The NP-PCOCQ score predicted job satisfaction (beta = .36; p organizational climate in their clinics. Further testing of NP-PCOCQ is needed.

  11. The nuclear higher-order structure defined by the set of topological relationships between DNA and the nuclear matrix is species-specific in hepatocytes.

    Science.gov (United States)

    Silva-Santiago, Evangelina; Pardo, Juan Pablo; Hernández-Muñoz, Rolando; Aranda-Anzaldo, Armando

    2017-01-15

    During the interphase the nuclear DNA of metazoan cells is organized in supercoiled loops anchored to constituents of a nuclear substructure or compartment known as the nuclear matrix. The stable interactions between DNA and the nuclear matrix (NM) correspond to a set of topological relationships that define a nuclear higher-order structure (NHOS). Current evidence suggests that the NHOS is cell-type-specific. Biophysical evidence and theoretical models suggest that thermodynamic and structural constraints drive the actualization of DNA-NM interactions. However, if the topological relationships between DNA and the NM were the subject of any biological constraint with functional significance then they must be adaptive and thus be positively selected by natural selection and they should be reasonably conserved, at least within closely related species. We carried out a coarse-grained, comparative evaluation of the DNA-NM topological relationships in primary hepatocytes from two closely related mammals: rat and mouse, by determining the relative position to the NM of a limited set of target sequences corresponding to highly-conserved genomic regions that also represent a sample of distinct chromosome territories within the interphase nucleus. Our results indicate that the pattern of topological relationships between DNA and the NM is not conserved between the hepatocytes of the two closely related species, suggesting that the NHOS, like the karyotype, is species-specific. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Using affective knowledge to generate and validate a set of emotion-related, action words

    Directory of Open Access Journals (Sweden)

    Emma Portch

    2015-07-01

    Full Text Available Emotion concepts are built through situated experience. Abstract word meaning is grounded in this affective knowledge, giving words the potential to evoke emotional feelings and reactions (e.g., Vigliocco et al., 2009). In the present work we explore whether words differ in the extent to which they evoke ‘specific’ emotional knowledge. Using a categorical approach, in which an affective ‘context’ is created, it is possible to assess whether words proportionally activate knowledge relevant to different emotional states (e.g., ‘sadness’, ‘anger’; Stevenson, Mikels & James, 2007a). We argue that this method may be particularly effective when assessing the emotional meaning of action words (e.g., Schacht & Sommer, 2009). In study 1 we use a constrained feature generation task to derive a set of action words that participants associated with six, basic emotional states (see full list in Appendix S1). Generation frequencies were taken to indicate the likelihood that the word would evoke emotional knowledge relevant to the state to which it had been paired. In study 2 a rating task was used to assess the strength of association between the six most frequently generated, or ‘typical’, action words and corresponding emotion labels. Participants were presented with a series of sentences, in which action words (typical and atypical) and labels were paired e.g., “If you are feeling ‘sad’ how likely would you be to act in the following way?” … ‘cry.’ Findings suggest that typical associations were robust. Participants always gave higher ratings to typical vs. atypical action word and label pairings, even when (a) rating direction was manipulated (the label or verb appeared first in the sentence), and (b) the typical behaviours were to be performed by the rater themselves, or others. Our findings suggest that emotion-related action words vary in the extent to which they evoke knowledge relevant for different emotional states. When

  13. Using affective knowledge to generate and validate a set of emotion-related, action words.

    Science.gov (United States)

    Portch, Emma; Havelka, Jelena; Brown, Charity; Giner-Sorolla, Roger

    2015-01-01

    Emotion concepts are built through situated experience. Abstract word meaning is grounded in this affective knowledge, giving words the potential to evoke emotional feelings and reactions (e.g., Vigliocco et al., 2009). In the present work we explore whether words differ in the extent to which they evoke 'specific' emotional knowledge. Using a categorical approach, in which an affective 'context' is created, it is possible to assess whether words proportionally activate knowledge relevant to different emotional states (e.g., 'sadness', 'anger', Stevenson, Mikels & James, 2007a). We argue that this method may be particularly effective when assessing the emotional meaning of action words (e.g., Schacht & Sommer, 2009). In study 1 we use a constrained feature generation task to derive a set of action words that participants associated with six, basic emotional states (see full list in Appendix S1). Generation frequencies were taken to indicate the likelihood that the word would evoke emotional knowledge relevant to the state to which it had been paired. In study 2 a rating task was used to assess the strength of association between the six most frequently generated, or 'typical', action words and corresponding emotion labels. Participants were presented with a series of sentences, in which action words (typical and atypical) and labels were paired e.g., "If you are feeling 'sad' how likely would you be to act in the following way?" … 'cry.' Findings suggest that typical associations were robust. Participants always gave higher ratings to typical vs. atypical action word and label pairings, even when (a) rating direction was manipulated (the label or verb appeared first in the sentence), and (b) the typical behaviours were to be performed by the rater themselves, or others. Our findings suggest that emotion-related action words vary in the extent to which they evoke knowledge relevant for different emotional states. When measuring affective grounding, it may then be

  14. Predicting death from kala-azar: construction, development, and validation of a score set and accompanying software.

    Science.gov (United States)

    Costa, Dorcas Lamounier; Rocha, Regina Lunardi; Chaves, Eldo de Brito Ferreira; Batista, Vivianny Gonçalves de Vasconcelos; Costa, Henrique Lamounier; Costa, Carlos Henrique Nery

    2016-01-01

    Early identification of patients at higher risk of progressing to severe disease and death is crucial for implementing therapeutic and preventive measures; this could reduce the morbidity and mortality from kala-azar. We describe a score set composed of four scales in addition to software for quick assessment of the probability of death from kala-azar at the point of care. Data from 883 patients diagnosed between September 2005 and August 2008 were used to derive the score set, and data from 1,031 patients diagnosed between September 2008 and November 2013 were used to validate the models. Stepwise logistic regression analyses were used to derive the optimal multivariate prediction models. Model performance was assessed by its discriminatory accuracy. A computational specialist system (Kala-Cal®) was developed to speed up the calculation of the probability of death based on clinical scores. The clinical prediction score showed high discrimination (area under the curve [AUC] 0.90) for distinguishing death from survival for children ≤2 years old. Performance improved after adding laboratory variables (AUC 0.93). The clinical score showed equivalent discrimination (AUC 0.89) for older children and adults, which also improved after including laboratory data (AUC 0.92). The score set also showed a high, although lower, discrimination when applied to the validation cohort. This score set and Kala-Cal® software may help identify individuals with the greatest probability of death. The associated software may speed up the calculation of the probability of death based on clinical scores and assist physicians in decision-making.
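Discrimination figures like the AUC values reported here can be computed from predicted risks via the rank (Mann-Whitney) formulation, without building an explicit ROC curve. A sketch with invented scores and outcomes, not the study's data:

```python
def auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case (label 1, e.g. death) received a higher risk score than
    a randomly chosen negative case (label 0), counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted risks and observed outcomes (1 = death)
risk = [0.9, 0.8, 0.4, 0.3, 0.2]
died = [1, 1, 0, 1, 0]
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, so the 0.89-0.93 range in the abstract indicates strong discrimination.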

  15. Reliability and validity of a novel tool to comprehensively assess food and beverage marketing in recreational sport settings.

    Science.gov (United States)

    Prowse, Rachel J L; Naylor, Patti-Jean; Olstad, Dana Lee; Carson, Valerie; Mâsse, Louise C; Storey, Kate; Kirk, Sara F L; Raine, Kim D

    2018-05-31

    Current methods for evaluating food marketing to children often study a single marketing channel or approach. As the World Health Organization urges the removal of unhealthy food marketing from children's settings, methods are needed that comprehensively explore the exposure and power of food marketing within a setting across multiple marketing channels and approaches. The purpose of this study was to test the inter-rater reliability and the validity of a novel settings-based food marketing audit tool. The Food and beverage Marketing Assessment Tool for Settings (FoodMATS) was developed and its psychometric properties evaluated in five public recreation and sport facilities (sites); it was subsequently used in 51 sites across Canada for a cross-sectional analysis of food marketing. Raters recorded the count of food marketing occasions, the presence of child-targeted and sports-related marketing techniques, and the physical size of marketing occasions. Marketing occasions were classified by healthfulness. Inter-rater reliability was tested using Cohen's kappa (κ) and intra-class correlations (ICC). FoodMATS scores for each site were calculated using an algorithm that represented the theoretical impact of the marketing environment on food preferences, purchases, and consumption. Higher FoodMATS scores represented sites with higher exposure to, and more powerful (unhealthy, child-targeted, sports-related, large), food marketing. Validity of the scoring algorithm was tested through (1) Pearson's correlations between FoodMATS scores and facility sponsorship dollars, and (2) sequential multiple regression for predicting "Least Healthy" food sales from FoodMATS scores. Inter-rater reliability was very good to excellent (κ = 0.88-1.00). As the first tool to comprehensively assess food marketing in recreation facilities, the FoodMATS provides a novel means of tracking changes in food marketing environments that can assist in developing, and monitoring the impact of, policies and interventions.
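    For reference, Cohen's kappa, the agreement statistic used in the inter-rater reliability testing above, corrects observed agreement for the agreement expected by chance. This is an illustrative sketch, not the FoodMATS analysis code; the paired ratings are invented.

```python
# Cohen's kappa for two raters over the same items:
# (observed agreement - chance agreement) / (1 - chance agreement).
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical marketing-technique codes assigned by two auditors.
a = ["child", "child", "sport", "other", "sport", "child"]
b = ["child", "child", "sport", "sport", "sport", "child"]
print(round(cohens_kappa(a, b), 2))  # → 0.71
```

    Values of κ above roughly 0.8, as reported in the abstract, are conventionally read as very good agreement.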

  16. The Musical Emotional Bursts: A validated set of musical affect bursts to investigate auditory affective processing.

    Directory of Open Access Journals (Sweden)

    Sébastien ePaquette

    2013-08-01

    The Musical Emotional Bursts (MEB) consist of 80 brief musical executions expressing basic emotional states (happiness, sadness, and fear) and neutrality. These musical bursts were designed to be the musical analogue of the Montreal Affective Voices (MAV), a set of brief non-verbal affective vocalizations portraying different basic emotions. The MEB consist of short (mean duration: 1.6 s) improvisations on a given emotion or of imitations of a given MAV stimulus, played on a violin (n = 40) or a clarinet (n = 40). The MEB arguably represent a primitive form of musical emotional expression, just as the MAV represent a primitive form of vocal, non-linguistic emotional expression. To create the MEB, stimuli were recorded from 10 violinists and 10 clarinetists and then evaluated by 60 participants. Participants evaluated 240 stimuli (30 stimuli × 4 categories [3 emotions + neutral] × 2 instruments) by performing either a forced-choice emotion categorization task, a valence rating task, or an arousal rating task (20 subjects per task); 40 MAVs were also used in the same session with similar task instructions. Recognition accuracy of emotional categories expressed by the MEB (n = 80) was lower than for the MAVs but still very high, with an average percent correct recognition score of 80.4%. The highest recognition accuracies were obtained for happy clarinet (92.0%) and fearful or sad violin (88.0% each) MEB stimuli. The MEB can be used to compare the cerebral processing of emotional expressions in music and vocal communication, or for testing affective perception in patients with communication problems.

  17. Copernicus stratospheric ozone service, 2009–2012: validation, system intercomparison and roles of input data sets

    Directory of Open Access Journals (Sweden)

    K. Lefever

    2015-03-01

    This paper evaluates and discusses the quality of the stratospheric ozone analyses delivered in near real time by the MACC (Monitoring Atmospheric Composition and Climate) project during the 3-year period between September 2009 and September 2012. Ozone analyses produced by four different chemical data assimilation (CDA) systems are examined and compared: the Integrated Forecast System coupled to the Model for OZone And Related chemical Tracers (IFS-MOZART); the Belgian Assimilation System for Chemical ObsErvations (BASCOE); the Synoptic Analysis of Chemical Constituents by Advanced Data Assimilation (SACADA); and the Data Assimilation Model based on Transport Model version 3 (TM3DAM). The assimilated satellite ozone retrievals differed for each system; SACADA and TM3DAM assimilated only total ozone observations, BASCOE assimilated profiles for ozone and some related species, while IFS-MOZART assimilated both types of ozone observations. All analyses deliver total column values that agree well with ground-based observations. The northern spring 2011 period is studied in more detail to evaluate the ability of the analyses to represent the exceptional ozone depletion event that happened above the Arctic in March 2011. Offline sensitivity tests performed during this month indicate that the differences between the forward models or the assimilation algorithms are much less important than the characteristics of the assimilated data sets. They also show that IFS-MOZART is able to deliver realistic analyses of ozone both in the troposphere and in the stratosphere, but this requires the assimilation of observations from nadir-looking instruments as well as the assimilation of profiles that are well resolved vertically and extend into the lowermost stratosphere.

  18. Validity of a population-specific BMR predictive equation for adults from an urban tropical setting.

    Science.gov (United States)

    Wahrlich, Vivian; Teixeira, Tatiana Miliante; Anjos, Luiz Antonio Dos

    2018-02-01

    Basal metabolic rate (BMR) is an important physiologic measure in nutrition research. In many instances it is not measured but estimated by predictive equations. The purpose of this study was to compare measured BMR (BMRm) with estimated BMR (BMRe) obtained by different equations. A convenience sample of 148 subjects (89 women), 20-60 years old, from the metropolitan area of Rio de Janeiro, Brazil, participated in the study. BMRm values were measured by an indirect calorimeter and predicted by different equations (Schofield; Henry and Rees; Mifflin-St. Jeor; and Anjos). All subjects also had their body composition and anthropometric variables measured. Accuracy of the estimations was established by the percentage of BMRe falling within ±10% of BMRm, and bias was declared when the 95% CI of the difference between the BMRe and BMRm means did not include zero. Mean BMRm values were 4833.5 (SD 583.3) and 6278.8 (SD 724.0) kJ/day for women and men, respectively. BMRe values were both biased and inaccurate except for values predicted by the Anjos equation. BMR overestimation was approximately 20% for the Schofield equation, which was higher than for the Henry and Rees (14.5% and 9.6% for women and men, respectively) and the Mifflin-St. Jeor (approximately 14.0%) equations. BMR estimated by the Anjos equation was unbiased (95% CI = -78.1; 96.3 kJ/day for women and -282.6; 30.7 kJ/day for men). Population-specific BMR predictive equations yield unbiased and accurate BMR values in adults from an urban tropical setting. Copyright © 2016 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
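    The accuracy criterion used in this abstract (each BMRe counted as accurate when it falls within ±10% of the paired BMRm) reduces to a simple proportion. A minimal sketch with invented kJ/day values, not the study data:

```python
# Percentage of estimates falling within ±10% of the paired measured value.
def percent_accurate(measured, estimated, tol=0.10):
    hits = sum(abs(e - m) / m <= tol for m, e in zip(measured, estimated))
    return 100.0 * hits / len(measured)

measured = [4800.0, 5000.0, 6200.0, 6400.0]   # BMRm in kJ/day (invented)
estimated = [5100.0, 6100.0, 6300.0, 6350.0]  # BMRe; second value grossly over
print(percent_accurate(measured, estimated))  # → 75.0
```

    Bias, the abstract's other criterion, would then be checked separately by testing whether the 95% CI of the mean difference (BMRe - BMRm) contains zero.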

  19. Prediction potential of candidate biomarker sets identified and validated on gene expression data from multiple datasets

    Directory of Open Access Journals (Sweden)

    Karacali Bilge

    2007-10-01

    Background: Independently derived expression profiles of the same biological condition often have few genes in common. In this study, we created populations of expression profiles from publicly available microarray datasets of cancer (breast, lymphoma, and renal) samples linked to clinical information with an iterative machine learning algorithm. ROC curves were used to assess the prediction error of each profile for classification. We compared the prediction error of profiles correlated with molecular phenotype against profiles correlated with relapse-free status. Prediction error of profiles identified with supervised univariate feature selection algorithms was compared to that of profiles selected randomly from (a) all genes on the microarray platform and (b) a list of known disease-related genes (a priori selection). We also determined the relevance of expression profiles on test arrays from independent datasets, measured on either the same or different microarray platforms. Results: Highly discriminative expression profiles were produced on both simulated gene expression data and expression data from breast cancer and lymphoma datasets on the basis of ER and BCL-6 expression, respectively. Use of relapse-free status to identify profiles for prognosis prediction resulted in poorly discriminative decision rules. Supervised feature selection resulted in more accurate classifications than random or a priori selection; however, the difference in prediction error decreased as the number of features increased. These results held when decision rules were applied across datasets to samples profiled on the same microarray platform. Conclusion: Our results show that many gene sets predict molecular phenotypes accurately. Given this, expression profiles identified using different training datasets should be expected to show little agreement. In addition, we demonstrate the difficulty of predicting relapse directly from microarray data using supervised machine learning approaches.

  20. Validation of a simple evaporation-transpiration scheme (SETS) to estimate evaporation using micro-lysimeter measurements

    Science.gov (United States)

    Ghazanfari, Sadegh; Pande, Saket; Savenije, Hubert

    2014-05-01

    Several methods exist to estimate evaporation (E) and transpiration (T). The Penman-Monteith and Priestley-Taylor methods, along with the Jarvis scheme for estimating vegetation resistance, are commonly used to estimate these fluxes as a function of land cover, atmospheric forcing, and soil moisture content. In this study, a simple evaporation-transpiration method is developed based on the MOSAIC Land Surface Model that explicitly accounts for soil moisture. Soil evaporation and transpiration estimated by SETS are validated on a single column of soil profile with measured evaporation data from three micro-lysimeters located at the Ferdowsi University of Mashhad synoptic station, Iran, for the year 2005. SETS is run using both implicit and explicit computational schemes. Results show that the implicit scheme estimates the vapor flux close to that of the explicit scheme. The mean difference between the implicit and explicit schemes is -0.03 mm/day. The paired t-test of the mean difference (p-value = 0.042, t-value = 2.04) shows that there is no significant difference between the two methods. The sum of soil evaporation and transpiration from SETS is also compared with the P-M equation and micro-lysimeter measurements. SETS predicts the actual evaporation with a lower bias (1.24 mm/day) than P-M (1.82 mm/day) and with an R² value of 0.82.
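    The paired t-test used above compares two schemes on the same days by testing whether the mean of the per-day differences is zero. The sketch below is a from-scratch illustration with invented daily fluxes, not the study's data or code.

```python
import math

# Paired t statistic: mean of per-pair differences divided by the
# standard error of that mean (n - 1 degrees of freedom).
def paired_t(x, y):
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    return mean / math.sqrt(var / n)

implicit = [1.0, 1.2, 0.9]   # hypothetical daily vapor flux, mm/day
explicit = [0.9, 1.1, 0.7]   # same days, other scheme
print(round(paired_t(implicit, explicit), 2))  # → 4.0
```

    The resulting t value is then compared against the t distribution with n - 1 degrees of freedom to obtain the p-value reported in the abstract.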

  1. Validity of the Elite HRV Smartphone Application for Examining Heart Rate Variability in a Field-Based Setting.

    Science.gov (United States)

    Perrotta, Andrew S; Jeklin, Andrew T; Hives, Ben A; Meanwell, Leah E; Warburton, Darren E R

    2017-08-01

    Perrotta, AS, Jeklin, AT, Hives, BA, Meanwell, LE, and Warburton, DER. Validity of the Elite HRV smartphone application for examining heart rate variability in a field-based setting. J Strength Cond Res 31(8): 2296-2302, 2017 - The introduction of smartphone applications has allowed athletes and practitioners to record and store R-R intervals on smartphones for immediate heart rate variability (HRV) analysis. This user-friendly option should be validated in order to provide practitioners with confidence when monitoring their athletes before implementing such equipment. The objective of this investigation was to examine the relationship and validity between a vagal-related HRV index, rMSSD, derived from a smartphone application accessible with most operating systems and a frequently used computer software program, Kubios HRV 2.2. R-R intervals were recorded immediately upon awakening over 14 consecutive days using the Elite HRV smartphone application. R-R recordings were then exported into Kubios HRV 2.2 for analysis. The relationship and levels of agreement between rMSSDln derived from Elite HRV and Kubios HRV 2.2 were examined using a Pearson product-moment correlation and a Bland-Altman plot. An extremely large relationship was identified (r = 0.92), suggesting that this smartphone HRV application may offer a reliable platform for assessing parasympathetic modulation.
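    The two validity checks named in this abstract, a Pearson correlation and Bland-Altman limits of agreement, can both be computed directly from paired measurements. The sketch below uses invented rMSSD values and is not the authors' analysis code.

```python
import math

def pearson_r(x, y):
    # Product-moment correlation between two equal-length samples.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def bland_altman(x, y):
    # Mean difference (bias) and 95% limits of agreement (bias ± 1.96 SD).
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    bias = sum(d) / n
    sd = math.sqrt(sum((v - bias) ** 2 for v in d) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

app = [4.1, 3.8, 4.5, 4.0, 3.6]     # hypothetical ln rMSSD from the app
kubios = [4.0, 3.9, 4.4, 4.1, 3.5]  # same recordings re-analysed in software
print(round(pearson_r(app, kubios), 2))  # → 0.95
```

    A high r shows the two tools rank athletes similarly; the Bland-Altman bias and limits show whether their values are interchangeable in absolute terms, which is why both are reported together.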

  2. New set of convective heat transfer coefficients established for pools and validated against CLARA experiments for application to corium pools

    Energy Technology Data Exchange (ETDEWEB)

    Michel, B., E-mail: benedicte.michel@irsn.fr

    2015-05-15

    Highlights: • A new set of 2D convective heat transfer correlations is proposed. • It takes into account different horizontal and lateral superficial velocities. • It is based on previously established correlations. • It is validated against recent CLARA experiments. • It is to be implemented in a 0D MCCI (molten core concrete interaction) code. - Abstract: During a hypothetical Pressurized Water Reactor (PWR) or Boiling Water Reactor (BWR) severe accident with core meltdown and vessel failure, corium would fall directly onto the concrete reactor-pit basemat if no water is present. The high temperature of the corium pool, maintained by the residual power, would lead to erosion of the concrete walls and basemat of this reactor pit. The thermal decomposition of concrete releases a significant amount of gases that modify the corium pool thermal hydraulics. In particular, it affects the heat transfers between the corium pool and the concrete, which determine the reactor-pit ablation kinetics. A new set of convective heat transfer coefficients in a pool with different lateral and horizontal superficial gas velocities is modeled and validated against the recent CLARA experimental program. 155 tests of this program, covering two size configurations and a wide range of viscosities, have been used to validate the model. A method to define different lateral and horizontal superficial gas velocities in a 0D code is then proposed, together with a discussion of the possible viscosity in the reactor case when the pool is semi-solid. This model is going to be implemented in the 0D ASTEC/MEDICIS code in order to determine the impact of convective heat transfer on concrete ablation by corium.

  3. Implementing the Science Assessment Standards: Developing and validating a set of laboratory assessment tasks in high school biology

    Science.gov (United States)

    Saha, Gouranga Chandra

    Very often a number of factors, especially time, space, and money, deter many science educators from using inquiry-based, hands-on laboratory practical tasks as alternative assessment instruments in science. A shortage of valid inquiry-based laboratory tasks for high school biology has been cited. Driven by this need, this study addressed the following three research questions: (1) How can laboratory-based performance tasks be designed and developed that are doable by the students for whom they are written? (2) Do student responses to the laboratory-based performance tasks validly represent at least some of the intended process skills that new biology learning goals want students to acquire? (3) Are the laboratory-based performance tasks psychometrically consistent as individual tasks and as a set? To answer these questions, three tasks were used from the six biology tasks initially designed and developed by an iterative process of trial testing. Analyses of data from 224 students showed that performance-based laboratory tasks that are doable by all students require a careful and iterative process of development. Although the students demonstrated more skill in performing than in planning and reasoning, their performance at the item level was very poor for some items. Possible reasons for the poor performance are discussed, and suggestions are made on how to remediate the deficiencies. Empirical evidence for the validity and reliability of the instrument is presented from both the classical and the modern validity-criteria points of view. Limitations of the study are identified. Finally, implications of the study and directions for further research are discussed.

  4. Study of the validity of a job-exposure matrix for psychosocial work factors: results from the national French SUMER survey.

    Science.gov (United States)

    Niedhammer, Isabelle; Chastang, Jean-François; Levy, David; David, Simone; Degioanni, Stéphanie; Theorell, Töres

    2008-10-01

    To construct and evaluate the validity of a job-exposure matrix (JEM) for psychosocial work factors defined by Karasek's model using national representative data of the French working population. National sample of 24,486 men and women who filled in the Job Content Questionnaire (JCQ) by Karasek measuring the scores of psychological demands, decision latitude, and social support (individual scores) in 2003 (response rate 96.5%). Median values of the three scores in the total sample of men and women were used to define high demands, low latitude, and low support (individual binary exposures). Job title was defined by both occupation and economic activity that were coded using detailed national classifications (PCS and NAF/NACE). Two JEM measures were calculated from the individual scores of demands, latitude and support for each job title: JEM scores (mean of the individual score) and JEM binary exposures (JEM score dichotomized at the median). The analysis of the variance of the individual scores of demands, latitude, and support explained by occupations and economic activities, of the correlation and agreement between individual measures and JEM measures, and of the sensitivity and specificity of JEM exposures, as well as the study of the associations with self-reported health showed a low validity of JEM measures for psychological demands and social support, and a relatively higher validity for decision latitude compared with individual measures. Job-exposure matrix measure for decision latitude might be used as a complementary exposure assessment. Further research is needed to evaluate the validity of JEM for psychosocial work factors.
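    The JEM construction step described above (averaging individual questionnaire scores within each job title, then dichotomizing at the median) can be sketched in a few lines. The job titles and scores below are invented for illustration; this is not the SUMER analysis code.

```python
from statistics import mean, median

# Hypothetical job-exposure matrix for one Karasek dimension:
# job title -> individual decision-latitude scores (invented).
individual = {
    "clerk": [60, 62, 58],
    "manager": [80, 78, 82],
    "operator": [55, 50, 52],
    "teacher": [75, 70, 72],
}

# JEM score = mean of individual scores within the job title.
jem_score = {job: mean(scores) for job, scores in individual.items()}

# JEM binary exposure = job-level score dichotomized at the median.
cut = median(jem_score.values())
jem_low_latitude = {job: score < cut for job, score in jem_score.items()}
print(jem_low_latitude["operator"], jem_low_latitude["manager"])  # → True False
```

    Validity is then judged by how well these job-level exposures agree with the individual binary exposures, e.g. via sensitivity, specificity, and agreement statistics as in the abstract.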

  5. Analysis of three sets of SWIW tracer test data using a two-population complex fracture model for matrix diffusion and sorption

    International Nuclear Information System (INIS)

    Doughty, Christine; Chin-Fu Tsang

    2009-03-01

    This study was undertaken to obtain a better understanding of the processes underlying retention of radionuclides in fractured rock by using different model conceptualisations when interpreting SWIW tests. In particular, the aim is to infer diffusion and sorption parameters from the SWIW test data by matching tracer breakthrough curves (BTC) with a complex fracture model. The model employs two populations for diffusion and sorption: one population represents the semi-infinite rock matrix, and the other represents finite blocks that can become saturated, thereafter accepting no further diffusion or sorption. For the non-sorbing tracer uranine, both the finite and the semi-infinite populations play a distinct role in controlling the BTC. For the sorbing tracers Cs and Rb, the finite population does not saturate but acts essentially semi-infinite; thus the BTC behaviour is comparable to that obtained for a model containing only a semi-infinite rock matrix. The ability to match BTC for both sorbing and non-sorbing tracers for these three different SWIW data sets demonstrates that the two-population complex fracture model may be useful for analyzing SWIW tracer test data in general. One of the two populations should be the semi-infinite rock matrix and the other finite blocks that can saturate; the latter can represent rock blocks within the fracture, a fracture skin zone, or stagnation zones. Three representative SWIW tracer tests recently conducted by SKB have been analyzed with this model. The results show that, by adjusting the diffusion and sorption parameters of the model, a good match with field data is obtained for the BTC of both sorbing and non-sorbing tracers simultaneously.

  6. Development and validation of dried matrix spot sampling for the quantitative determination of amyloid β peptides in cerebrospinal fluid.

    Science.gov (United States)

    Delaby, Constance; Gabelle, Audrey; Meynier, Philippe; Loubiere, Vincent; Vialaret, Jérôme; Tiers, Laurent; Ducos, Jacques; Hirtz, Christophe; Lehmann, Sylvain

    2014-05-01

    The use of dried blood spots on filter paper is well documented as an affordable and practical alternative to classical venous sampling for various clinical needs. This technique has many advantages in terms of collection, biological safety, storage, and shipment. Amyloid β (Aβ) peptides are useful cerebrospinal fluid (CSF) biomarkers for Alzheimer's disease diagnosis. However, Aβ determination is hindered by preanalytical difficulties in terms of sample collection and stability in tubes. We compared the quantification of Aβ peptides (1-40, 1-42, and 1-38) by simplex and multiplex ELISA, following either a standard operator method (liquid direct quantification) or after spotting CSF onto a dried matrix paper card. The use of dried matrix spots (DMS) overcame preanalytical problems and allowed the determination of Aβ concentrations that were highly commutable (Bland-Altman) with those obtained using CSF in classical tubes. Moreover, we found a positive and significant correlation (r² = 0.83, Pearson coefficient, p = 0.0329) between the two approaches. This new DMS method for CSF represents an interesting alternative that increases quality and efficiency in the preanalytical phase. This should enable better exploitation of Aβ analytes for Alzheimer's diagnosis.

  7. The static response function in Kohn-Sham theory: An appropriate basis for its matrix representation in case of finite AO basis sets

    International Nuclear Information System (INIS)

    Kollmar, Christian; Neese, Frank

    2014-01-01

    The role of the static Kohn-Sham (KS) response function describing the response of the electron density to a change of the local KS potential is discussed in both the theory of the optimized effective potential (OEP) and the so-called inverse Kohn-Sham problem, which involves the task of finding the local KS potential for a given electron density. In a general discussion of the integral equation to be solved in both cases, it is argued that a unique solution of this equation can be found even in the case of finite atomic orbital basis sets. It is shown how a matrix representation of the response function can be obtained if the exchange-correlation potential is expanded in terms of a Schmidt-orthogonalized basis comprising products of occupied and virtual orbitals. The viability of this approach in both OEP theory and the inverse KS problem is illustrated by numerical examples.

  8. Experimental validation of control strategies for a microgrid test facility including a storage system and renewable generation sets

    DEFF Research Database (Denmark)

    Baccino, Francesco; Marinelli, Mattia; Silvestro, Federico

    2012-01-01

    The paper is aimed at describing and validating some control strategies in the SYSLAB experimental test facility, characterized by the presence of a low-voltage network with a 15 kW/190 kWh vanadium redox flow battery system and an 11 kW wind turbine. The generation set is connected to the local network and is fully controllable by the SCADA system. The control strategies, implemented on a local PC interfaced to the SCADA, are realized in Matlab-Simulink. The main purpose is to control the charge/discharge action of the storage system in order to present the desired power or energy profiles at the point of common coupling.

  9. Reliability and Validity of Survey Instruments to Measure Work-Related Fatigue in the Emergency Medical Services Setting: A Systematic Review

    Science.gov (United States)

    2018-01-11

    Background: This study sought to systematically search the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. Methods: A systematic review study design was ...

  10. Discriminating real victims from feigners of psychological injury in gender violence: Validating a protocol for forensic setting

    Directory of Open Access Journals (Sweden)

    Ramon Arce

    2009-07-01

    Standard clinical assessment of psychological injury does not provide valid evidence in forensic settings, and screening of genuine from feigned complaints must be undertaken prior to the diagnosis of mental state (American Psychological Association, 2002). Whereas psychological injury is post-traumatic stress disorder (PTSD), a clinical diagnosis may encompass other nosologies (e.g., depression and anxiety). The assessment of psychological injury in forensic contexts requires a multimethod approach consisting of a psychometric measure and an interview. To assess the efficacy of the multimethod approach in discriminating real from false victims, 25 real victims of gender violence and 24 feigners were assessed using (a) the Symptom Checklist-90-Revised (SCL-90-R), a recognition task, and (b) a forensic clinical interview, a knowledge task. The results revealed that feigners reported more clinical symptoms on the SCL-90-R than real victims. Moreover, the feigning indicators on the SCL-90-R (GSI, PST, and PSDI) were higher in feigners, but not sufficiently so to provide a screening test for invalidating feigning protocols. In contrast, real victims reported more clinical symptoms related to PTSD in the forensic clinical interview than feigners. Notwithstanding, in the forensic clinical interview feigners were able to feign PTSD, which was not detected by the analysis of feigning strategies. The combination of both measures and their corresponding validity controls enabled the discrimination of real victims from feigners. Hence, a protocol for discriminating the psychological sequelae of real victims from those of feigners of gender violence is described.

  11. Validity and predictive ability of the juvenile arthritis disease activity score based on CRP versus ESR in a Nordic population-based setting

    DEFF Research Database (Denmark)

    Nordal, E B; Zak, M; Aalto, K

    2012-01-01

    To compare the juvenile arthritis disease activity score (JADAS) based on C-reactive protein (CRP) (JADAS-CRP) with JADAS based on erythrocyte sedimentation rate (ESR) (JADAS-ESR), and to validate JADAS in a population-based setting.

  12. Study of the validity of a job-exposure matrix for the job strain model factors: an update and a study of changes over time.

    Science.gov (United States)

    Niedhammer, Isabelle; Milner, Allison; LaMontagne, Anthony D; Chastang, Jean-François

    2018-03-08

    The objectives of the study were to construct a job-exposure matrix (JEM) for psychosocial work factors of the job strain model, to evaluate its validity, and to compare the results over time. The study was based on nationally representative data of the French working population, with samples of 46,962 employees (2010 SUMER survey) and 24,486 employees (2003 SUMER survey). Psychosocial work factors included the job strain model factors (Job Content Questionnaire): psychological demands, decision latitude, social support, job strain, and iso-strain. Job title was defined by three variables: occupation and economic activity, coded using standard classifications, and company size. A JEM was constructed using a segmentation method (Classification and Regression Tree, CART) and cross-validation. The best-quality JEM was found using occupation and company size for social support. For decision latitude and psychological demands, there was not much difference between using occupation and company size with or without economic activity. The validity of the JEM estimates was higher for decision latitude, job strain, and iso-strain, and lower for social support and psychological demands. Differential changes over time were observed for psychosocial work factors according to occupation, economic activity, and company size. This study demonstrated that company size, in addition to occupation, may improve the validity of JEMs for psychosocial work factors. These matrices may be time-dependent and may need to be updated over time. More research is needed to assess the validity of JEMs, given that these matrices may be able to provide exposure assessments for studying a range of health outcomes.

  13. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set

    Directory of Open Access Journals (Sweden)

    Jinshui Zhang

    2017-04-01

    This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set includes target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels located adjacent to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window-size scales. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, trade-off coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.
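    To convey the geometric idea behind the abstract, a data description encloses the target class in a hypersphere that nearby outlier pixels from the validation set help tighten. The toy sketch below is not SVDD itself (no kernel, no optimisation); it is a simplified mean-and-radius analogue with invented 2-band "pixel" values.

```python
import math

# Toy "data description": fit a sphere around target samples, then shrink
# its radius so the nearest known outlier stays outside. Not the authors'
# SVDD; names and data are invented for illustration.
def fit_sphere(target, outliers):
    dim = len(target[0])
    center = [sum(p[i] for p in target) / len(target) for i in range(dim)]
    dist = lambda p: math.dist(center, p)
    radius = max(dist(p) for p in target)            # loosest boundary
    nearest_out = min(dist(p) for p in outliers)     # tighten toward outliers
    return center, min(radius, 0.99 * nearest_out)

target = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (1.1, 1.2)]  # "wheat" pixels
outliers = [(1.6, 1.0), (2.0, 2.0)]                        # neighbouring classes
center, radius = fit_sphere(target, outliers)
inside = lambda p: math.dist(center, p) <= radius
print(inside((1.05, 1.05)), inside((1.8, 1.7)))  # → True False
```

    In real SVDD the boundary is fit in a kernel-induced feature space with a trade-off coefficient C and kernel width s, the two parameters the WVS-SVDD procedure tunes.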

  14. Matrix thermalization

    International Nuclear Information System (INIS)

    Craps, Ben; Evnin, Oleg; Nguyen, Kévin

    2017-01-01

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  15. Matrix thermalization

    Science.gov (United States)

    Craps, Ben; Evnin, Oleg; Nguyen, Kévin

    2017-02-01

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  16. Matrix thermalization

    Energy Technology Data Exchange (ETDEWEB)

    Craps, Ben [Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB), and International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium); Evnin, Oleg [Department of Physics, Faculty of Science, Chulalongkorn University, Thanon Phayathai, Pathumwan, Bangkok 10330 (Thailand); Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB), and International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium); Nguyen, Kévin [Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB), and International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium)

    2017-02-08

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  17. Association of Protein Translation and Extracellular Matrix Gene Sets with Breast Cancer Metastasis: Findings Uncovered on Analysis of Multiple Publicly Available Datasets Using Individual Patient Data Approach.

    Directory of Open Access Journals (Sweden)

    Nilotpal Chowdhury

Full Text Available Microarray analysis has revolutionized the role of genomic prognostication in breast cancer. However, most studies are single series studies, and suffer from methodological problems. We sought to use a meta-analytic approach in combining multiple publicly available datasets, while correcting for batch effects, to reach a more robust oncogenomic analysis. The aim of the present study was to find gene sets associated with distant metastasis free survival (DMFS) in systemically untreated, node-negative breast cancer patients, from publicly available genomic microarray datasets. Four microarray series (having 742 patients) were selected after a systematic search and combined. Cox regression for each gene was done for the combined dataset (univariate, as well as multivariate - adjusted for expression of Cell cycle related genes) and for the 4 major molecular subtypes. The centre and microarray batch effects were adjusted by including them as random effects variables. The Cox regression coefficients for each analysis were then ranked and subjected to a Gene Set Enrichment Analysis (GSEA). Gene sets representing protein translation were independently negatively associated with metastasis in the Luminal A and Luminal B subtypes, but positively associated with metastasis in Basal tumors. Proteinaceous extracellular matrix (ECM) gene set expression was positively associated with metastasis, after adjustment for expression of cell cycle related genes on the combined dataset. Finally, the positive association of the proliferation-related genes with metastases was confirmed. To the best of our knowledge, the results depicting mixed prognostic significance of protein translation in breast cancer subtypes are being reported for the first time. We attribute this to our study combining multiple series and performing a more robust meta-analytic Cox regression modeling on the combined dataset, thus discovering 'hidden' associations. 
This methodology seems to yield new and

  18. Association of Protein Translation and Extracellular Matrix Gene Sets with Breast Cancer Metastasis: Findings Uncovered on Analysis of Multiple Publicly Available Datasets Using Individual Patient Data Approach.

    Science.gov (United States)

    Chowdhury, Nilotpal; Sapru, Shantanu

    2015-01-01

    Microarray analysis has revolutionized the role of genomic prognostication in breast cancer. However, most studies are single series studies, and suffer from methodological problems. We sought to use a meta-analytic approach in combining multiple publicly available datasets, while correcting for batch effects, to reach a more robust oncogenomic analysis. The aim of the present study was to find gene sets associated with distant metastasis free survival (DMFS) in systemically untreated, node-negative breast cancer patients, from publicly available genomic microarray datasets. Four microarray series (having 742 patients) were selected after a systematic search and combined. Cox regression for each gene was done for the combined dataset (univariate, as well as multivariate - adjusted for expression of Cell cycle related genes) and for the 4 major molecular subtypes. The centre and microarray batch effects were adjusted by including them as random effects variables. The Cox regression coefficients for each analysis were then ranked and subjected to a Gene Set Enrichment Analysis (GSEA). Gene sets representing protein translation were independently negatively associated with metastasis in the Luminal A and Luminal B subtypes, but positively associated with metastasis in Basal tumors. Proteinaceous extracellular matrix (ECM) gene set expression was positively associated with metastasis, after adjustment for expression of cell cycle related genes on the combined dataset. Finally, the positive association of the proliferation-related genes with metastases was confirmed. To the best of our knowledge, the results depicting mixed prognostic significance of protein translation in breast cancer subtypes are being reported for the first time. We attribute this to our study combining multiple series and performing a more robust meta-analytic Cox regression modeling on the combined dataset, thus discovering 'hidden' associations. 
This methodology seems to yield new and interesting
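The ranking-plus-GSEA step can be illustrated with the classic running-sum enrichment score. The sketch below is the simplified, unweighted form (the published GSEA weights hits by the ranking metric), and the gene names are arbitrary examples:

```python
# Simplified, unweighted sketch of the GSEA enrichment score: walk down the
# list of genes ranked by their Cox regression coefficients, stepping up at
# genes in the set and down otherwise; the extreme of the running sum is the
# enrichment score.

def enrichment_score(ranked_genes, gene_set):
    hits = sum(1 for g in ranked_genes if g in gene_set)
    misses = len(ranked_genes) - hits
    score, best = 0.0, 0.0
    for g in ranked_genes:
        score += 1 / hits if g in gene_set else -1 / misses
        if abs(score) > abs(best):
            best = score
    return best

# Toy ranking: two ribosomal (protein translation) genes at the top.
ranked = ["RPL5", "RPS6", "COL1A1", "MKI67", "FN1", "TP53"]
es = enrichment_score(ranked, {"RPL5", "RPS6"})
```

A set concentrated at one end of the ranking, as here, yields an extreme score; a set scattered uniformly would hover near zero.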

  19. RELAP-7 Software Verification and Validation Plan - Requirements Traceability Matrix (RTM) Part 2: Code Assessment Strategy, Procedure, and RTM Update

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Jun Soo [Idaho National Lab. (INL), Idaho Falls, ID (United States); Choi, Yong Joon [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

This document addresses two subjects involved with the RELAP-7 Software Verification and Validation Plan (SVVP): (i) the principles and plan to assure the independence of RELAP-7 assessment through the code development process, and (ii) the work performed to establish the RELAP-7 assessment plan, i.e., the assessment strategy, literature review, and identification of RELAP-7 requirements. Then, the Requirements Traceability Matrices (RTMs) proposed in the previous document (INL-EXT-15-36684) are updated. These RTMs provide an efficient way to evaluate the RELAP-7 development status as well as the maturity of RELAP-7 assessment through the development process.
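An RTM of this kind can be viewed as a mapping from code requirements to the assessment cases that exercise them, with coverage summarizing assessment maturity. A schematic sketch (requirement and case identifiers are invented, not RELAP-7's actual entries):

```python
# Schematic RTM: map each (invented) requirement ID to the assessment cases
# that trace to it; uncovered requirements flag gaps in the assessment plan.

rtm = {
    "REQ-TH-001": ["sep_effects_case_A", "integral_case_B"],
    "REQ-TH-002": ["sep_effects_case_C"],
    "REQ-TH-003": [],          # not yet covered by any assessment case
}

covered = [req for req, cases in rtm.items() if cases]
coverage = len(covered) / len(rtm)   # fraction of requirements traced
```

Tracking this fraction over the development process is one concrete way an RTM measures assessment maturity.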

  20. Initial validation of the prekindergarten Classroom Observation Tool and goal setting system for data-based coaching.

    Science.gov (United States)

    Crawford, April D; Zucker, Tricia A; Williams, Jeffrey M; Bhavsar, Vibhuti; Landry, Susan H

    2013-12-01

Although coaching is a popular approach for enhancing the quality of Tier 1 instruction, limited research has addressed observational measures specifically designed to focus coaching on evidence-based practices. This study explains the development of the prekindergarten (pre-k) Classroom Observation Tool (COT) designed for use in a data-based coaching model. We examined psychometric characteristics of the COT and explored how coaches and teachers used the COT goal-setting system. The study included 193 coaches working with 3,909 pre-k teachers in a statewide professional development program. Classrooms served 3- and 4-year-olds (n = 56,390) enrolled mostly in Title I, Head Start, and other need-based pre-k programs. Coaches used the COT during a 2-hr observation at the beginning of the academic year. Teachers collected progress-monitoring data on children's language, literacy, and math outcomes three times during the year. Results indicated a theoretically supported eight-factor structure of the COT across language, literacy, and math instructional domains. Overall interrater reliability among coaches was good (.75). Although correlations with an established teacher observation measure were small, significant positive relations between COT scores and children's literacy outcomes indicate promising predictive validity. Patterns of goal-setting behaviors indicate teachers and coaches set an average of 43.17 goals during the academic year, and coaches reported that 80.62% of goals were met. Both coaches and teachers reported the COT was a helpful measure for enhancing the quality of Tier 1 instruction. Limitations of the current study and implications for research and data-based coaching efforts are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  1. European validation of The Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis from the perspective of patients with osteoarthritis of the knee or hip.

    Science.gov (United States)

    Weigl, Martin; Wild, Heike

    2017-09-15

To validate the International Classification of Functioning, Disability and Health Comprehensive Core Set for Osteoarthritis from the patient perspective in Europe. This multicenter cross-sectional study involved 375 patients with knee or hip osteoarthritis. Trained health professionals completed the Comprehensive Core Set, and patients completed the Short-Form 36 questionnaire. Content validity was evaluated by calculating prevalences of impairments in body function and structures, limitations in activities and participation and environmental factors, which were either barriers or facilitators. Convergent construct validity was evaluated by correlating the International Classification of Functioning, Disability and Health categories with the Short-Form 36 Physical Component Score and the SF-36 Mental Component Score in a subgroup of 259 patients. The prevalences of all body function, body structure and activities and participation categories were >40%, >32% and >20%, respectively, and all environmental factors were relevant for >16% of patients. Few categories showed relevant differences between knee and hip osteoarthritis. All body function categories and all but two activities and participation categories showed significant correlations with the Physical Component Score. Body functions from the ICF chapter Mental Functions showed higher correlations with the Mental Component Score than with the Physical Component Score. This study supports the validity of the International Classification of Functioning, Disability and Health Comprehensive Core Set for Osteoarthritis. Implications for Rehabilitation: Comprehensive International Classification of Functioning, Disability and Health Core Sets were developed as practical tools for application in multidisciplinary assessments. 
The validity of the Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis in this study supports its application in European patients with

  2. Method development and validation for the simultaneous determination of organochlorine and organophosphorus pesticides in a complex sediment matrix.

    Science.gov (United States)

    Alcántara-Concepción, Victor; Cram, Silke; Gibson, Richard; Ponce de León, Claudia; Mazari-Hiriart, Marisa

    2013-01-01

    The Xochimilco area in the southeastern part of Mexico City has a variety of socioeconomic activities, such as periurban agriculture, which is of great importance in the Mexico City metropolitan area. Pesticides are used extensively, some being legal, mostly chlorpyrifos and malathion, and some illegal, mostly DDT. Sediments are a common sink for pesticides in aquatic systems near agricultural areas, and Xochimilco sediments have a complex composition with high contents of organic matter and clay that are ideal adsorption sites for organochlorine (OC) and organophosphorus (OP) pesticides. Therefore, it is important to have a quick, affordable, and reliable method to determine these pesticides. Conventional methods for the determination of OC and OP pesticides are long, laborious, and costly owing to the high volume of solvents and adsorbents. The present study developed and validated a method for determining 18 OC and five OP pesticides in sediments with high organic and clay contents. In contrast with other methods described in the literature, this method allows isolation of the 23 pesticides with a 12 min microwave-assisted extraction (MAE) and one-step cleanup of pesticides. The method developed is a simpler, time-saving procedure that uses only 3.5 g of dry sediment. The use of MAE eliminates excessive handling and the possible loss of analytes. It was shown that the use of LC-Si cartridges with hexane-ethyl acetate (75+25, v/v) in the cleanup procedure recovered all pesticides with rates between 70 and 120%. The validation parameters demonstrated good performance of the method, with intermediate precision ranging from 7.3 to 17.0%, HorRat indexes all below 0.5, and tests of accuracy with the 23 pesticides at three concentration levels demonstrating recoveries ranging from 74 to 114% and RSDs from 3.3 to 12.7%.
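The HorRat indexes cited above compare observed precision against the precision predicted by the Horwitz equation; values near or below 0.5 indicate better-than-expected reproducibility. A minimal sketch (the concentration and RSD below are illustrative, not the study's values):

```python
# Sketch of a HorRat check: predicted RSD comes from the Horwitz equation
# with the analyte concentration expressed as a mass fraction.
import math

def horwitz_prsd(mass_fraction):
    """Predicted RSD (%) from the Horwitz equation."""
    return 2 ** (1 - 0.5 * math.log10(mass_fraction))

def horrat(observed_rsd_percent, mass_fraction):
    """Ratio of observed to Horwitz-predicted RSD."""
    return observed_rsd_percent / horwitz_prsd(mass_fraction)

# e.g. 100 ng/g of a pesticide = 1e-7 mass fraction, observed RSD 10%
value = horrat(10.0, 1e-7)
```

At this trace level the Horwitz equation predicts an RSD above 22%, so a 10% observed RSD gives a HorRat well under 0.5, consistent with the performance the method reports.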

  3. Validation of a pediatric early warning system for hospitalized pediatric oncology patients in a resource-limited setting.

    Science.gov (United States)

    Agulnik, Asya; Méndez Aceituno, Alejandra; Mora Robles, Lupe Nataly; Forbes, Peter W; Soberanis Vasquez, Dora Judith; Mack, Ricardo; Antillon-Klussmann, Federico; Kleinman, Monica; Rodriguez-Galindo, Carlos

    2017-12-15

Pediatric oncology patients are at high risk of clinical deterioration, particularly in hospitals with resource limitations. The performance of pediatric early warning systems (PEWS) in identifying deterioration has not been assessed in these settings. This study evaluates the validity of PEWS to predict the need for unplanned transfer to the pediatric intensive care unit (PICU) among pediatric oncology patients in a resource-limited hospital. A retrospective case-control study was performed comparing the highest documented and corrected PEWS scores before unplanned PICU transfer in pediatric oncology patients (129 cases) with matched controls (those not requiring PICU care). Documented and corrected PEWS scores were found to be highly correlated with the need for PICU transfer (area under the receiver operating characteristic curve, 0.940 and 0.930, respectively). PEWS scores increased 24 hours prior to unplanned transfer (P = .0006). In cases, organ dysfunction at the time of PICU admission correlated with the maximum PEWS score (correlation coefficient, 0.26; P = .003), patients with PEWS scores ≥4 had a higher Pediatric Index of Mortality 2 (PIM2) score (P = .028), and PEWS scores were higher in patients with septic shock (P = .01). The PICU mortality rate was 17.1%; nonsurvivors had higher mean PEWS scores before PICU transfer (P = .0009). A single-point increase in the PEWS score increased the odds of mechanical ventilation or vasopressor use within the first 24 hours and during PICU admission (odds ratio 1.3-1.4). PEWS accurately predicted the need for unplanned PICU transfer in pediatric oncology patients in this resource-limited setting, with abnormal scores beginning 24 hours before PICU admission and higher scores predicting the severity of illness at the time of PICU admission, the need for PICU interventions, and mortality. These results demonstrate that PEWS aid in the identification of clinical deterioration in this high-risk population, regardless of a hospital
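The reported areas under the receiver operating characteristic curve (0.940 and 0.930) have a direct interpretation: the AUC equals the probability that a randomly chosen case outscores a randomly chosen control. A minimal sketch of that computation (toy scores, not the study's data):

```python
# AUC via the Mann-Whitney identity: fraction of (case, control) pairs where
# the case scores higher, counting ties as half a win.

def auc(case_scores, control_scores):
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

cases = [6, 5, 4, 4]       # e.g. maximum PEWS before unplanned PICU transfer
controls = [1, 2, 2, 4]    # e.g. matched controls
a = auc(cases, controls)
```

With these toy scores the AUC is 0.9375; an AUC of 0.94, as in the study, means almost every case-control pair is ordered correctly by the score.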

  4. Considerations for design and use of container challenge sets for qualification and validation of visible particulate inspection.

    Science.gov (United States)

    Melchore, James A; Berdovich, Dan

    2012-01-01

    The major compendia require sterile injectable and ophthalmic drugs, to be prepared in a manner that is designed to exclude particulate matter. This requirement is satisfied by testing for subvisual particles in the laboratory and 100% inspection of all containers for the presence of visible particles. Inspection for visible particles is performed in the operations area using one of three methods. Manual inspection is based on human visual acuity, the ability of the inspector to discern between conforming and nonconforming containers, and the ability to remove nonconforming units. Semi-automated inspection is a variation of manual inspection, in which a roller conveyor handles and presents the containers to the human inspector. Fully automated inspection systems perform handling, inspection, and rejection of defective containers. All inspection methods must meet the compendial requirement for sterile drug product to be "essentially free" of visible particulates. Given the random occurrence of particles within the batch, visual detection of a particle in an individual container is probabilistic. The probability of detection for a specific particle is affected by many variables that include product attributes, container size and shape, particle composition and size, and inspection capability. The challenge set is a useful tool to assess the particle detection in a product, and it may also be used to evaluate detection of container/closure defects. While the importance of a well-designed challenge set is not always recognized or understood, it serves as the cornerstone for qualification and/or validation of all inspection methods. This article is intended to provide useful information for the design, composition, and use of container challenge sets for particulate inspection studies. Regulations require drug products intended for injection or ophthalmic use to be sterile and free of particles that could harm the patient. 
This requirement is met by 100% inspection of

  5. Development and validation of an in vitro–in vivo correlation (IVIVC) model for propranolol hydrochloride extended-release matrix formulations

    Directory of Open Access Journals (Sweden)

    Chinhwa Cheng

    2014-06-01

Full Text Available The objective of this study was to develop an in vitro–in vivo correlation (IVIVC) model for hydrophilic matrix extended-release (ER) propranolol dosage formulations. The in vitro release characteristics of the drug were determined using USP apparatus I at 100 rpm, in a medium of varying pH (from pH 1.2 to pH 6.8). In vivo plasma concentrations and pharmacokinetic parameters in male beagle dogs were obtained after administering oral ER formulations and immediate-release (IR) commercial products. The similarity factor f2 was used to compare the dissolution data. The IVIVC model was developed using pooled fraction dissolved and fraction absorbed of propranolol ER formulations, ER-F and ER-S, with different release rates. An additional formulation ER-V, with a different release rate of propranolol, was prepared for evaluating the external predictability. The results showed that the percentage prediction error (%PE) values of Cmax and AUC0–∞ were 0.86% and 5.95%, respectively, for the external validation study. The observed low prediction errors for Cmax and AUC0–∞ demonstrated that the propranolol IVIVC model was valid.
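The similarity factor f2 used to compare dissolution profiles has a standard closed form, f2 = 50*log10(100 / sqrt(1 + (1/n)*sum((Rt - Tt)^2))), and profiles with f2 >= 50 are conventionally regarded as similar. A direct implementation (the profiles below are illustrative, not the study's data):

```python
# Dissolution similarity factor f2 for reference vs. test profiles measured
# as % dissolved at the same time points.
import math

def f2(reference, test):
    n = len(reference)
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50 * math.log10(100 / math.sqrt(1 + msd))

ref_profile  = [20, 45, 70, 90]   # % dissolved (illustrative)
test_profile = [18, 42, 68, 91]
value = f2(ref_profile, test_profile)
```

Identical profiles give f2 = 100, and similarity decreases as the mean squared difference grows; the small differences above still yield f2 well over the conventional 50 cutoff.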

  6. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution.

    Science.gov (United States)

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). To extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained by iteratively applying a simple region-growing algorithm, which merges centerline images with the contents of images resulting from a modified top-hat transform followed by bit-plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image, using multi-structure-element morphology and modification of FDCT coefficients. Then, a Canny edge detector and Hough transform are applied to the reconstructed image to extract the boundary of candidate regions. In the next step, the information of the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, containing 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction.
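The Hessian eigenvalue analysis used for vessel centerlines can be sketched in a few lines: for a 2-D image the Hessian at each pixel is a symmetric 2x2 matrix [[Ixx, Ixy], [Ixy, Iyy]], and a bright tubular structure yields one eigenvalue near zero (along the vessel) and one large negative eigenvalue (across it). The sketch below uses a Frangi-style ratio test; the threshold and second-derivative values are illustrative, not from the paper:

```python
# Eigenvalues of a symmetric 2x2 Hessian, and a simple tubularity test.
import math

def hessian_eigenvalues(ixx, ixy, iyy):
    """Closed-form eigenvalues of [[ixx, ixy], [ixy, iyy]]."""
    mean = (ixx + iyy) / 2
    radius = math.sqrt(((ixx - iyy) / 2) ** 2 + ixy ** 2)
    return mean - radius, mean + radius      # (smaller, larger)

def looks_like_vessel(ixx, ixy, iyy, ratio_threshold=0.5):
    lo, hi = hessian_eigenvalues(ixx, ixy, iyy)
    lam1, lam2 = sorted((lo, hi), key=abs)   # |lam1| <= |lam2|
    # Bright ridge: dominant eigenvalue negative, the other comparatively small.
    return lam2 < 0 and abs(lam1) / abs(lam2) < ratio_threshold

# Strong negative curvature across the vessel, almost none along it:
flag = looks_like_vessel(ixx=-4.0, ixy=0.0, iyy=-0.2)
```

A blob would give two comparably large eigenvalues and fail the ratio test, which is why this criterion isolates elongated vessel-like structures.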

  7. The Geriatric ICF Core Set reflecting health-related problems in community-living older adults aged 75 years and older without dementia: development and validation.

    Science.gov (United States)

    Spoorenberg, Sophie L W; Reijneveld, Sijmen A; Middel, Berrie; Uittenbroek, Ronald J; Kremer, Hubertus P H; Wynia, Klaske

    2015-01-01

    The aim of the present study was to develop a valid Geriatric ICF Core Set reflecting relevant health-related problems of community-living older adults without dementia. A Delphi study was performed in order to reach consensus (≥70% agreement) on second-level categories from the International Classification of Functioning, Disability and Health (ICF). The Delphi panel comprised 41 older adults, medical and non-medical experts. Content validity of the set was tested in a cross-sectional study including 267 older adults identified as frail or having complex care needs. Consensus was reached for 30 ICF categories in the Delphi study (fourteen Body functions, ten Activities and Participation and six Environmental Factors categories). Content validity of the set was high: the prevalence of all the problems was >10%, except for d530 Toileting. The most frequently reported problems were b710 Mobility of joint functions (70%), b152 Emotional functions (65%) and b455 Exercise tolerance functions (62%). No categories had missing values. The final Geriatric ICF Core Set is a comprehensive and valid set of 29 ICF categories, reflecting the most relevant health-related problems among community-living older adults without dementia. This Core Set may contribute to optimal care provision and support of the older population. Implications for Rehabilitation The Geriatric ICF Core Set may provide a practical tool for gaining an understanding of the relevant health-related problems of community-living older adults without dementia. The Geriatric ICF Core Set may be used in primary care practice as an assessment tool in order to tailor care and support to the needs of older adults. The Geriatric ICF Core Set may be suitable for use in multidisciplinary teams in integrated care settings, since it is based on a broad range of problems in functioning. Professionals should pay special attention to health problems related to mobility and emotional functioning since these are the most

  8. Improved diagnostic accuracy of Alzheimer's disease by combining regional cortical thickness and default mode network functional connectivity: Validated in the Alzheimer's disease neuroimaging initiative set

    International Nuclear Information System (INIS)

    Park, Ji Eun; Park, Bum Woo; Kim, Sang Joon; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Jung; Oh, Joo Young; Shim, Woo Hyun; Lee, Jae Hong; Roh, Jee Hoon

    2017-01-01

    To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity and to validate this model's diagnostic accuracy in a validation set. Data from 98 subjects was retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network was extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions were more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Combining functional information with CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease

  9. Studies Related to the Oregon State University High Temperature Test Facility: Scaling, the Validation Matrix, and Similarities to the Modular High Temperature Gas-Cooled Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Richard R. Schultz; Paul D. Bayless; Richard W. Johnson; William T. Taitano; James R. Wolf; Glenn E. McCreery

    2010-09-01

The Oregon State University (OSU) High Temperature Test Facility (HTTF) is an integral experimental facility that will be constructed on the OSU campus in Corvallis, Oregon. The HTTF project was initiated by the U.S. Nuclear Regulatory Commission (NRC) on September 5, 2008, as Task 4 of the 5-year High Temperature Gas Reactor Cooperative Agreement via NRC Contract 04-08-138. Until August 2010, when a DOE contract was initiated to fund additional capabilities for the HTTF project, all of the funding support for the HTTF was provided by the NRC via their cooperative agreement. The U.S. Department of Energy (DOE) began its involvement with the HTTF project in late 2009 via the Next Generation Nuclear Plant (NGNP) project. Because the NRC's interests in HTTF experiments centered only on the depressurized conduction cooldown (DCC) scenario, NGNP involvement focused on expanding the experimental envelope of the HTTF to include steady-state operations and also the pressurized conduction cooldown (PCC). Since DOE has incorporated the HTTF as an ingredient in the NGNP thermal-fluids validation program, several important outcomes should be noted: 1. The reference prismatic reactor design, which serves as the basis for scaling the HTTF, became the modular high temperature gas-cooled reactor (MHTGR). The MHTGR has also been chosen as the reference design for all of the other NGNP thermal-fluid experiments. 2. The NGNP validation matrix is being planned using the same scaling strategy that has been implemented to design the HTTF, i.e., the hierarchical two-tiered scaling methodology developed by Zuber in 1991. Using this approach, a preliminary validation matrix has been designed that integrates the HTTF experiments with the other experiments planned for the NGNP thermal-fluids verification and validation project. 3. 
Initial analyses showed that the inherent power capability of the OSU infrastructure, which only allowed a total operational facility power capability of 0.6 MW, is

  10. Analysis of anti-neoplastic drug in bacterial ghost matrix, w/o/w double nanoemulsion and w/o nanoemulsion by a validated 'green' liquid chromatographic method.

    Science.gov (United States)

    Youssof, Abdullah M E; Salem-Bekhit, Mounir M; Shakeel, Faiyaz; Alanazi, Fars K; Haq, Nazrul

    2016-07-01

The objective of the present investigation was to develop and validate a 'green' reversed-phase high-performance liquid chromatography (RP-HPLC) method for rapid analysis of the cytotoxic drug 5-fluorouracil (5-FU) in bulk drug, marketed injection, water-in-oil (w/o) nanoemulsion, double water-in-oil-in-water (w/o/w) nanoemulsion and bacterial ghost (BG) matrix. The chromatography study was carried out at room temperature (25 ± 1°C) using an HPLC system with an ultraviolet (UV)-visible detector. The chromatographic separation was achieved with a Nucleodur 150 mm × 4.6 mm RP C8 column packed with 5 µm particles as the stationary phase. The mobile phase consisted of ethyl acetate:methanol (7:3% v/v), delivered at a flow rate of 1.0 mL/min, and the drug was detected in UV mode at 254 nm. The developed method was validated in terms of linearity (r² = 0.998), accuracy (98.19-102.09%), precision (% RSD = 0.58-1.17), robustness (% RSD = 0.12-0.53) and sensitivity, with satisfactory results. The efficiency of the method was demonstrated by the assay of the drug in marketed injection, w/o nanoemulsion, w/o/w nanoemulsion and BG with satisfactory results. The successful resolution of the drug along with its degradation products clearly established the stability-indicating nature of the proposed method. Overall, these results suggested that the proposed analytical method could be effectively applied to the routine analysis of 5-FU in bulk drug, various pharmaceutical dosage forms and BG. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Validity of the M-3Y force equivalent G-matrix element for the calculations of nuclear structure in the s-d shell

    International Nuclear Information System (INIS)

    Song Hong-qiu; Wang Zixing; Cai Yanhuang; Huang Weizhi

    1987-01-01

    The matrix elements of the M-3Y force are adopted as the equivalent G-matrix elements and the folded diagram method is used to calculate the spectra of ¹⁸O and ¹⁸F. The results show that the matrix elements of the M-3Y force as the equivalent G-matrix elements are suitable for microscopic calculations of the nuclei in the s-d shell

  12. Validity of M-3Y force equivalent G-matrix elements for calculations of the nuclear structure in heavy mass region

    International Nuclear Information System (INIS)

    Cheng Lan; Huang Weizhi; Zhou Baosen

    1996-01-01

    Using the matrix elements of the M-3Y force as the equivalent G-matrix elements, the spectra of ²¹⁰Pb, ²⁰⁶Pb, ²⁰⁶Hg and ²¹⁰Po are calculated in the framework of the Folded Diagram Method. The results show that such equivalent matrix elements are suitable for microscopic calculations of nuclear structure in the heavy mass region

  13. Validated RP-HPLC/DAD Method for the Quantification of Insect Repellent Ethyl 2-Aminobenzoate in Membrane-Moderated Matrix Type Monolithic Polymeric Device.

    Science.gov (United States)

    Islam, Johirul; Zaman, Kamaruz; Chakrabarti, Srijita; Sharma Bora, Nilutpal; Mandal, Santa; Pratim Pathak, Manash; Srinivas Raju, Pakalapati; Chattopadhyay, Pronobesh

    2017-07-01

    A simple, accurate and sensitive reversed-phase high-performance liquid chromatographic (RP-HPLC) method has been developed for the estimation of ethyl 2-aminobenzoate (EAB) in a matrix-type monolithic polymeric device and validated as per the International Conference on Harmonization guidelines. The analysis was performed isocratically on a ZORBAX Eclipse Plus C18 analytical column (250 × 4.4 mm, 5 μm) with a diode array detector (DAD), using acetonitrile and water (75:25, v/v) as the mobile phase at a constant flow rate of 1.0 mL/min. Determination of EAB was free from interference by the excipients. Inter- and intra-day relative standard deviations were not higher than 2%. Mean recovery was between 98.7 and 101.3%. The calibration curve was linear in the concentration range of 0.5-10 µg/mL. Limits of detection and quantification were 0.19 and 0.60 µg/mL, respectively. Thus, the present report puts forward a novel method for the estimation of EAB, an emerging insect repellent, using the RP-HPLC technique. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
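The validation figures quoted in these HPLC abstracts (linearity r², LOD, LOQ) follow from an ordinary least-squares calibration fit; the ICH Q2(R1) guideline estimates LOD and LOQ as 3.3σ/S and 10σ/S, where σ is the residual standard deviation and S the calibration slope. A minimal sketch with invented calibration data (the concentrations and peak areas below are illustrative, not values from either paper):

```python
import numpy as np

# Hypothetical calibration series (concentration in ug/mL vs. peak area);
# the numbers are invented for illustration only.
conc = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
area = np.array([12.1, 24.3, 48.2, 97.0, 145.8, 193.9, 242.5])

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept

# Coefficient of determination (linearity criterion)
ss_res = ((area - pred) ** 2).sum()
ss_tot = ((area - area.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot

# ICH Q2(R1)-style sensitivity estimates from the residual SD of the fit
sigma = np.sqrt(ss_res / (len(conc) - 2))   # residual standard deviation
lod = 3.3 * sigma / slope                   # limit of detection
loq = 10.0 * sigma / slope                  # limit of quantification
```

With a real calibration series, these three numbers (r², LOD, LOQ) are exactly what the linearity and sensitivity acceptance criteria are checked against.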

  14. Utility of the MMPI-2-RF (Restructured Form) Validity Scales in Detecting Malingering in a Criminal Forensic Setting: A Known-Groups Design

    Science.gov (United States)

    Sellbom, Martin; Toomey, Joseph A.; Wygant, Dustin B.; Kucharski, L. Thomas; Duncan, Scott

    2010-01-01

    The current study examined the utility of the recently released Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) validity scales to detect feigned psychopathology in a criminal forensic setting. We used a known-groups design with the Structured Interview of Reported Symptoms (SIRS;…

  15. The Geriatric ICF Core Set reflecting health-related problems in community-living older adults aged 75 years and older without dementia: development and validation

    NARCIS (Netherlands)

    Spoorenberg, Sophie L. W.; Reijneveld, Sijmen A.; Middel, Berrie; Uittenbroek, Ronald J.; Kremer, Hubertus P. H.; Wynia, Klaske

    2015-01-01

    Purpose: The aim of the present study was to develop a valid Geriatric ICF Core Set reflecting relevant health-related problems of community-living older adults without dementia. Methods: A Delphi study was performed in order to reach consensus (70% agreement) on second-level categories from the

  16. The validity and reliability of value-added and target-setting procedures with special reference to Key Stage 3

    OpenAIRE

    Moody, Ian Robin

    2003-01-01

    The validity of value-added systems of measurement is crucially dependent upon there being a demonstrably unambiguous relationship between the so-called baseline, or intake measures, and any subsequent measure of performance at a later stage. The reliability of such procedures is dependent on the relationships between these two measures being relatively stable over time. A number of questions arise with regard to both the validity and reliability of value-added procedures at any level in educ...

  17. Validation of the Care-Related Quality of Life Instrument in different study settings: findings from The Older Persons and Informal Caregivers Survey Minimum DataSet (TOPICS-MDS).

    Science.gov (United States)

    Lutomski, J E; van Exel, N J A; Kempen, G I J M; Moll van Charante, E P; den Elzen, W P J; Jansen, A P D; Krabbe, P F M; Steunenberg, B; Steyerberg, E W; Olde Rikkert, M G M; Melis, R J F

    2015-05-01

    Validity is a contextual aspect of a scale which may differ across sample populations and study protocols. The objective of our study was to validate the Care-Related Quality of Life Instrument (CarerQol) across two different study design features, sampling framework (general population vs. different care settings) and survey mode (interview vs. written questionnaire). Data were extracted from The Older Persons and Informal Caregivers Minimum DataSet (TOPICS-MDS, www.topics-mds.eu ), a pooled public-access data set with information on >3,000 informal caregivers throughout the Netherlands. Meta-correlations and linear mixed models between the CarerQol's seven dimensions (CarerQol-7D) and caregiver's level of happiness (CarerQol-VAS) and self-rated burden (SRB) were performed. The CarerQol-7D dimensions were correlated to the CarerQol-VAS and SRB in the pooled data set and the subgroups. The strength of correlations between CarerQol-7D dimensions and SRB was weaker among caregivers who were interviewed versus those who completed a written questionnaire. The directionality of associations between the CarerQol-VAS, SRB and the CarerQol-7D dimensions in the multivariate model supported the construct validity of the CarerQol in the pooled population. Significant interaction terms were observed in several dimensions of the CarerQol-7D across sampling frame and survey mode, suggesting meaningful differences in reporting levels. Although good scientific practice emphasises the importance of re-evaluating instrument properties in individual research studies, our findings support the validity and applicability of the CarerQol instrument in a variety of settings. Due to minor differential reporting, pooling CarerQol data collected using mixed administration modes should be interpreted with caution; for TOPICS-MDS, meta-analytic techniques may be warranted.

  18. 11th GCC Closed Forum: cumulative stability; matrix stability; immunogenicity assays; laboratory manuals; biosimilars; chiral methods; hybrid LBA/LCMS assays; fit-for-purpose validation; China Food and Drug Administration bioanalytical method validation.

    Science.gov (United States)

    Islam, Rafiq; Briscoe, Chad; Bower, Joseph; Cape, Stephanie; Arnold, Mark; Hayes, Roger; Warren, Mark; Karnik, Shane; Stouffer, Bruce; Xiao, Yi Qun; van der Strate, Barry; Sikkema, Daniel; Fang, Xinping; Tudoroniu, Ariana; Tayyem, Rabab; Brant, Ashley; Spriggs, Franklin; Barry, Colin; Khan, Masood; Keyhani, Anahita; Zimmer, Jennifer; Caturla, Maria Cruz; Couerbe, Philippe; Khadang, Ardeshir; Bourdage, James; Datin, Jim; Zemo, Jennifer; Hughes, Nicola; Fatmi, Saadya; Sheldon, Curtis; Fountain, Scott; Satterwhite, Christina; Colletti, Kelly; Vija, Jenifer; Yu, Mathilde; Stamatopoulos, John; Lin, Jenny; Wilfahrt, Jim; Dinan, Andrew; Ohorodnik, Susan; Hulse, James; Patel, Vimal; Garofolo, Wei; Savoie, Natasha; Brown, Michael; Papac, Damon; Buonarati, Mike; Hristopoulos, George; Beaver, Chris; Boudreau, Nadine; Williard, Clark; Liu, Yansheng; Ray, Gene; Warrino, Dominic; Xu, Allan; Green, Rachel; Hayward-Sewell, Joanne; Marcelletti, John; Sanchez, Christina; Kennedy, Michael; Charles, Jessica St; Bouhajib, Mohammed; Nehls, Corey; Tabler, Edward; Tu, Jing; Joyce, Philip; Iordachescu, Adriana; DuBey, Ira; Lindsay, John; Yamashita, Jim; Wells, Edward

    2018-04-01

    The 11th Global CRO Council Closed Forum was held in Universal City, CA, USA on 3 April 2017. Representatives from international CRO members offering bioanalytical services were in attendance in order to discuss scientific and regulatory issues specific to bioanalysis. The second CRO-Pharma Scientific Interchange Meeting was held on 7 April 2017, which included Pharma representatives' sharing perspectives on the topics discussed earlier in the week with the CRO members. The issues discussed at the meetings included cumulative stability evaluations, matrix stability evaluations, the 2016 US FDA Immunogenicity Guidance and recent and unexpected FDA Form 483s on immunogenicity assays, the bioanalytical laboratory's role in writing PK sample collection instructions, biosimilars, CRO perspectives on the use of chiral versus achiral methods, hybrid LBA/LCMS assays, applications of fit-for-purpose validation and, at the Global CRO Council Closed Forum only, the status and trend of current regulated bioanalytical practice in China under CFDA's new BMV policy. Conclusions from discussions of these topics at both meetings are included in this report.

  19. Development and validation of a casemix classification to predict costs of specialist palliative care provision across inpatient hospice, hospital and community settings in the UK: a study protocol.

    Science.gov (United States)

    Guo, Ping; Dzingina, Mendwas; Firth, Alice M; Davies, Joanna M; Douiri, Abdel; O'Brien, Suzanne M; Pinto, Cathryn; Pask, Sophie; Higginson, Irene J; Eagar, Kathy; Murtagh, Fliss E M

    2018-03-17

    Provision of palliative care is inequitable with wide variations across conditions and settings in the UK. Lack of a standard way to classify by case complexity is one of the principal obstacles to addressing this. We aim to develop and validate a casemix classification to support the prediction of costs of specialist palliative care provision. Phase I: A cohort study to determine the variables and potential classes to be included in a casemix classification. Data are collected from clinicians in palliative care services across inpatient hospice, hospital and community settings on: patient demographics, potential complexity/casemix criteria and patient-level resource use. Cost predictors are derived using multivariate regression and then incorporated into a classification using classification and regression trees. Internal validation will be conducted by bootstrapping to quantify any optimism in the predictive performance (calibration and discrimination) of the developed classification. Phase II: A mixed-methods cohort study across settings for external validation of the classification developed in phase I. Patient and family caregiver data will be collected longitudinally on demographics, potential complexity/casemix criteria and patient-level resource use. This will be triangulated with data collected from clinicians on potential complexity/casemix criteria and patient-level resource use, and with qualitative interviews with patients and caregivers about care provision across different settings. The classification will be refined on the basis of its performance in the validation data set. The study has been approved by the National Health Service Health Research Authority Research Ethics Committee. The results are expected to be disseminated in 2018 through papers for publication in major palliative care journals; policy briefs for clinicians, commissioning leads and policy makers; and lay summaries for patients and public. ISRCTN90752212. © Article author

  20. Further Validation of the MMPI-2 And MMPI-2-RF Response Bias Scale: Findings from Disability and Criminal Forensic Settings

    Science.gov (United States)

    Wygant, Dustin B.; Sellbom, Martin; Gervais, Roger O.; Ben-Porath, Yossef S.; Stafford, Kathleen P.; Freeman, David B.; Heilbronner, Robert L.

    2010-01-01

    The present study extends the validation of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) Response Bias Scale (RBS; R. O. Gervais, Y. S. Ben-Porath, D. B. Wygant, & P. Green, 2007) in separate forensic samples composed of disability claimants and…

  1. Psychometric properties and longitudinal validation of the self-reporting questionnaire (SRQ-20) in a Rwandan community setting: a validation study

    Directory of Open Access Journals (Sweden)

    van Lammeren Anouk

    2011-08-01

    Full Text Available Abstract Background This study took place to enable the measurement of the effects on mental health of a psychosocial intervention in Rwanda. It aimed to establish the capacities of the Self-Reporting Questionnaire (SRQ-20) to screen for mental disorder and to assess symptom change over time in a Rwandan community setting. Methods The SRQ-20 was translated into Kinyarwanda in a process of forward and back-translation. SRQ-20 data were collected in a Rwandan setting on 418 respondents; a random subsample of 230 respondents was assessed a second time with a three-month time interval. Internal reliability was tested using Cronbach's alpha. The optimal cut-off point was determined by calculating Receiver Operating Curves, using semi-structured clinical interviews as the standard in a random subsample of 99 respondents. Subsequently, predictive value, likelihood ratio, and interrater agreement were calculated. The factor structure of the SRQ-20 was determined through exploratory factor analysis. Factorial invariance over time was tested in a multigroup confirmatory factor analysis. Results The reliability of the SRQ-20 in women (α = 0.85) and men (α = 0.81) could be considered good. The instrument performed moderately well in detecting common mental disorders, with an area under the curve (AUC) of 0.76 for women and 0.74 for men. Cut-off scores were different for women (10) and men (8). Factor analysis yielded five factors, explaining 38% of the total variance. The factor structure proved to be time invariant. Conclusions The SRQ-20 can be used as a screener to detect mental disorder in a Rwandan community setting, but cut-off scores need to be adjusted for women and men separately. The instrument also shows longitudinal factorial invariance, which is an important prerequisite for assessing changes in symptom severity. This is a significant finding as in non-western post-conflict settings the relevance of diagnostic categories is questionable.
The use of the
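Two of the statistics this abstract relies on, Cronbach's alpha for internal reliability and the area under the ROC curve for screening accuracy, are short computations. The functions below are generic illustrations (not the study's analysis code), using only numpy; the score matrices in the usage are invented:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

def roc_auc(scores, labels) -> float:
    """AUC via the rank-sum (Mann-Whitney) identity; labels are 0/1."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):                      # average ranks for ties
        mask = scores == s
        ranks[mask] = ranks[mask].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Sweeping the cut-off score along the ROC curve (as the study does separately for women and men) then reduces to thresholding `scores` and reading off sensitivity and specificity at each candidate cut-off.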

  2. Development of a tool to measure person-centered maternity care in developing settings: validation in a rural and urban Kenyan population.

    Science.gov (United States)

    Afulani, Patience A; Diamond-Smith, Nadia; Golub, Ginger; Sudhinaraset, May

    2017-09-22

    Person-centered reproductive health care is recognized as critical to improving reproductive health outcomes. Yet, little research exists on how to operationalize it. We extend the literature in this area by developing and validating a tool to measure person-centered maternity care. We describe the process of developing the tool and present the results of psychometric analyses to assess its validity and reliability in a rural and urban setting in Kenya. We followed standard procedures for scale development. First, we reviewed the literature to define our construct and identify domains, and developed items to measure each domain. Next, we conducted expert reviews to assess content validity; and cognitive interviews with potential respondents to assess clarity, appropriateness, and relevance of the questions. The questions were then refined and administered in surveys; and survey results used to assess construct and criterion validity and reliability. The exploratory factor analysis yielded one dominant factor in both the rural and urban settings. Three factors with eigenvalues greater than one were identified for the rural sample and four factors identified for the urban sample. Thirty of the 38 items administered in the survey were retained based on the factors loadings and correlation between the items. Twenty-five items load very well onto a single factor in both the rural and urban sample, with five items loading well in either the rural or urban sample, but not in both samples. These 30 items also load on three sub-scales that we created to measure dignified and respectful care, communication and autonomy, and supportive care. The Chronbach alpha for the main scale is greater than 0.8 in both samples, and that for the sub-scales are between 0.6 and 0.8. The main scale and sub-scales are correlated with global measures of satisfaction with maternity services, suggesting criterion validity. We present a 30-item scale with three sub-scales to measure person

  3. Reliability and validity of the International Spinal Cord Injury Basic Pain Data Set items as self-report measures

    DEFF Research Database (Denmark)

    Jensen, M P; Widerström-Noga, E; Richards, J S

    2010-01-01

    To evaluate the psychometric properties of a subset of International Spinal Cord Injury Basic Pain Data Set (ISCIBPDS) items that could be used as self-report measures in surveys, longitudinal studies and clinical trials....

  4. Validation of five minimally obstructive methods to estimate physical activity energy expenditure in young adults in semi-standardized settings

    DEFF Research Database (Denmark)

    Schneller, Mikkel Bo; Pedersen, Mogens Theisen; Gupta, Nidhi

    2015-01-01

    We compared the accuracy of five objective methods, including two newly developed methods combining accelerometry and activity type recognition (Acti4), against indirect calorimetry, to estimate total energy expenditure (EE) of different activities in semi-standardized settings. Fourteen particip...

  5. Content validation of the international classification of functioning, disability and health core set for stroke from a gender perspective using a qualitative approach.

    Science.gov (United States)

    Glässel, A; Coenen, M; Kollerits, B; Cieza, A

    2014-06-01

    The extended ICF Core Set for stroke is an application of the International Classification of Functioning, Disability and Health (ICF) of the World Health Organisation (WHO) intended to represent the typical spectrum of functioning of persons with stroke. The objective of the study is to add evidence to the content validity of the extended ICF Core Set for stroke from persons after stroke, taking the gender perspective into account. A qualitative study design was used, with individual interviews of women and men after stroke in in- and outpatient rehabilitation settings. The sampling followed the maximum variation strategy. Sample size was determined by saturation. Concepts from the qualitative data analysis were linked to ICF categories and compared to the extended ICF Core Set for stroke. Twelve women and 12 men participated in 24 individual interviews. In total, 143 out of 166 ICF categories included in the extended ICF Core Set for stroke were confirmed (women: N.=13; men: N.=17; both genders: N.=113). Thirty-eight additional categories that are not yet included in the extended ICF Core Set for stroke were raised by women and men. This study confirms that the experience of functioning and disability after stroke shows commonalities and differences for women and men. The validity of the extended ICF Core Set for stroke was mostly confirmed, since it includes not only those areas of functioning and disability relevant to both genders but also those exclusively relevant to either women or men. Further research is needed on ICF categories not yet included in the extended ICF Core Set for stroke.

  6. Validity and Interrater Reliability of the Visual Quarter-Waste Method for Assessing Food Waste in Middle School and High School Cafeteria Settings.

    Science.gov (United States)

    Getts, Katherine M; Quinn, Emilee L; Johnson, Donna B; Otten, Jennifer J

    2017-11-01

    Measuring food waste (ie, plate waste) in school cafeterias is an important tool to evaluate the effectiveness of school nutrition policies and interventions aimed at increasing consumption of healthier meals. Visual assessment methods are frequently applied in plate waste studies because they are more convenient than weighing. The visual quarter-waste method has become a common tool in studies of school meal waste and consumption, but previous studies of its validity and reliability have used correlation coefficients, which measure association but not necessarily agreement. The aims of this study were to determine, using a statistic measuring interrater agreement, whether the visual quarter-waste method is valid and reliable for assessing food waste in a school cafeteria setting when compared with the gold standard of weighed plate waste. To evaluate validity, researchers used the visual quarter-waste method and weighed food waste from 748 trays at four middle schools and five high schools in one school district in Washington State during May 2014. To assess interrater reliability, researcher pairs independently assessed 59 of the same trays using the visual quarter-waste method. Both validity and reliability were assessed using a weighted κ coefficient. For validity, as compared with the measured weight, 45% of foods assessed using the visual quarter-waste method were in almost perfect agreement, 42% of foods were in substantial agreement, 10% were in moderate agreement, and 3% were in slight agreement. For interrater reliability between pairs of visual assessors, 46% of foods were in perfect agreement, 31% were in almost perfect agreement, 15% were in substantial agreement, and 8% were in moderate agreement. These results suggest that the visual quarter-waste method is a valid and reliable tool for measuring plate waste in school cafeteria settings. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
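The weighted κ coefficient used in this study to quantify validity and interrater reliability penalizes disagreements by their distance on the ordinal waste scale. A generic sketch, not the authors' analysis code; categories are assumed to be coded as integers 0..n_cat-1:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="linear"):
    """Cohen's weighted kappa for two raters on an ordinal scale 0..n_cat-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)

    # Observed joint distribution of the two raters' codes
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()

    # Expected distribution under independence (product of marginals)
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))

    # Disagreement weights: linear |i-j| or quadratic (i-j)^2
    i, j = np.indices((n_cat, n_cat))
    w = np.abs(i - j) if weights == "linear" else (i - j) ** 2

    return 1.0 - (w * obs).sum() / (w * exp).sum()
```

With a 5-level quarter-waste scale (`n_cat=5`), disagreeing by one quarter costs far less than disagreeing by the whole tray, which is why weighted κ measures agreement rather than mere association, the distinction the abstract draws against correlation coefficients.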

  7. Political Representation and Gender Inequalities: Testing the Validity of a Model Developed for Pakistan Using a Data Set of Malaysia

    OpenAIRE

    Najeebullah Khan; Adnan Hussein; Zahid Awan; Bakhtiar Khan

    2012-01-01

    This study measured the impacts of six independent variables (political rights, election system type, political quota, literacy rate, labor force participation and GDP per capita at current prices in US dollars) on the dependent variable (percentage of women's representation in the national legislature) using multiple linear regression models. As a first step, we developed and tested the model with out-of-sample data of Pakistan. For model construction and validation, ten years' data from the year 1999 a...
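The model described, six predictors regressed on the percentage of women in the legislature, is ordinary multiple linear regression. A sketch with synthetic data (the coefficients, sample size and values are invented, purely to show the fit-and-score mechanics with numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = rng.normal(size=(n, 6))               # six predictors, as in the abstract
beta_true = np.array([2.0, -1.0, 0.5, 0.0, 3.0, 1.5])   # invented coefficients
y = 4.0 + X @ beta_true                   # noise-free response for illustration

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination
resid = y - A @ coef
r2 = 1.0 - resid.var() / y.var()
```

Out-of-sample validation of the kind the title describes amounts to computing the same `r2` on a second data set (e.g. Malaysia) using `coef` fitted on the first (Pakistan).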

  8. Examination of the MMPI-2 restructured form (MMPI-2-RF) validity scales in civil forensic settings: findings from simulation and known group samples.

    Science.gov (United States)

    Wygant, Dustin B; Ben-Porath, Yossef S; Arbisi, Paul A; Berry, David T R; Freeman, David B; Heilbronner, Robert L

    2009-11-01

    The current study examined the effectiveness of the MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath and Tellegen, 2008) over-reporting indicators in civil forensic settings. The MMPI-2-RF includes three revised MMPI-2 over-reporting validity scales and a new scale to detect over-reported somatic complaints. Participants dissimulated medical and neuropsychological complaints in two simulation samples, and a known-groups sample used symptom validity tests as a response bias criterion. Results indicated large effect sizes for the MMPI-2-RF validity scales, including a Cohen's d of .90 for Fs in a head injury simulation sample, 2.31 for FBS-r, 2.01 for F-r, and 1.97 for Fs in a medical simulation sample, and 1.45 for FBS-r and 1.30 for F-r in identifying poor effort on SVTs. Classification results indicated good sensitivity and specificity for the scales across the samples. This study indicates that the MMPI-2-RF over-reporting validity scales are effective at detecting symptom over-reporting in civil forensic settings.
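Effect sizes like the Cohen's d values reported above are group mean differences scaled by the pooled standard deviation. A stdlib-only sketch (generic formula, not the authors' code; the groups in the test are invented):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d with pooled standard deviation (sample variances, ddof=1)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

By the conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), the d values of 0.90 to 2.31 reported for the MMPI-2-RF validity scales are all large effects.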

  9. A Validated Set of MIDAS V5 Task Network Model Scenarios to Evaluate Nextgen Closely Spaced Parallel Operations Concepts

    Science.gov (United States)

    Gore, Brian Francis; Hooey, Becky Lee; Haan, Nancy; Socash, Connie; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The Closely Spaced Parallel Operations (CSPO) scenario is a complex human performance model scenario that tested alternate operator roles and responsibilities in a series of off-nominal operations on approach and landing (see Gore, Hooey, Mahlstedt, & Foyle, 2013). The model links together the procedures, equipment, crewstation, and external environment to produce predictions of operator performance in response to Next Generation system designs, like those expected in the National Airspace's NextGen concepts. The task analysis contained in the present report comes from the task analysis window in the MIDAS software. These tasks link definitions and states for equipment components, environmental features, and operational contexts. The current task analysis culminated in 3300 tasks that included over 1000 Subject Matter Expert (SME)-vetted, re-usable procedural sets for three critical phases of flight: the Descent, Approach, and Land procedural sets (see Gore et al., 2011 for a description of the development of the tasks included in the model; Gore, Hooey, Mahlstedt, & Foyle, 2013 for a description of the model and its results; Hooey, Gore, Mahlstedt, & Foyle, 2013 for a description of the guidelines generated from the model's results; and Gore, Hooey, & Foyle, 2012 for a description of the model's implementation and its settings). The rollout, after-landing checks, taxi-to-gate, and arrive-at-gate procedural sets illustrated in Figure 1 were not used in the approach and divert scenarios exercised. The other networks in Figure 1 set up appropriate context settings for the flight deck. The current report presents the model's task decomposition from the top, highest level down to finer-grained levels. The first task completed by the model is to set all of the initial settings for the scenario runs included in the model (network 75 in Figure 1). This initialization process also resets the CAD graphic files contained within MIDAS, as well as the embedded

  10. Performance of Matrix-Assisted Laser Desorption Ionization−Time of Flight Mass Spectrometry for Identification of Aspergillus, Scedosporium, and Fusarium spp. in the Australian Clinical Setting

    Science.gov (United States)

    Sleiman, Sue; Halliday, Catriona L.; Chapman, Belinda; Brown, Mitchell; Nitschke, Joanne; Lau, Anna F.

    2016-01-01

    We developed an Australian database for the identification of Aspergillus, Scedosporium, and Fusarium species (n = 28) by matrix-assisted laser desorption ionization−time of flight mass spectrometry (MALDI-TOF MS). In a challenge against 117 isolates, species identification significantly improved when the in-house-built database was combined with the Bruker Filamentous Fungi Library compared with that for the Bruker library alone (Aspergillus, 93% versus 69%; Fusarium, 84% versus 42%; and Scedosporium, 94% versus 18%, respectively). PMID:27252460

  11. Data set for the proteomic inventory and quantitative analysis of chicken eggshell matrix proteins during the primary events of eggshell mineralization and the active growth phase of calcification.

    Science.gov (United States)

    Marie, Pauline; Labas, Valérie; Brionne, Aurélien; Harichaux, Grégoire; Hennequet-Antier, Christelle; Rodriguez-Navarro, Alejandro B; Nys, Yves; Gautron, Joël

    2015-09-01

    Chicken eggshell is a biomineral composed of 95% calcite calcium carbonate mineral and of 3.5% organic matrix proteins. The assembly of mineral and its structural organization is controlled by its organic matrix. In a recent study [1], we used quantitative proteomic, bioinformatic and functional analyses to explore the distribution of 216 eggshell matrix proteins at four key stages of shell mineralization, defined as: (1) widespread deposition of amorphous calcium carbonate (ACC), (2) ACC transformation into crystalline calcite aggregates, (3) formation of larger calcite crystal units and (4) rapid growth of calcite as a columnar structure with preferential crystal orientation. The current article details the quantitative analysis performed at the four stages of shell mineralization to determine which proteins are the most abundant. Additionally, we report the enriched GO terms and describe the presence of 35 antimicrobial proteins equally distributed at all stages to keep the egg free of bacteria, and of 81 proteins whose function could not be ascribed.

  12. Data set for the proteomic inventory and quantitative analysis of chicken eggshell matrix proteins during the primary events of eggshell mineralization and the active growth phase of calcification

    Directory of Open Access Journals (Sweden)

    Pauline Marie

    2015-09-01

    Full Text Available Chicken eggshell is a biomineral composed of 95% calcite calcium carbonate mineral and of 3.5% organic matrix proteins. The assembly of mineral and its structural organization is controlled by its organic matrix. In a recent study [1], we used quantitative proteomic, bioinformatic and functional analyses to explore the distribution of 216 eggshell matrix proteins at four key stages of shell mineralization, defined as: (1) widespread deposition of amorphous calcium carbonate (ACC), (2) ACC transformation into crystalline calcite aggregates, (3) formation of larger calcite crystal units and (4) rapid growth of calcite as a columnar structure with preferential crystal orientation. The current article details the quantitative analysis performed at the four stages of shell mineralization to determine which proteins are the most abundant. Additionally, we report the enriched GO terms and describe the presence of 35 antimicrobial proteins equally distributed at all stages to keep the egg free of bacteria, and of 81 proteins whose function could not be ascribed.

  13. Validation of Accelerometer-Based Energy Expenditure Prediction Models in Structured and Simulated Free-Living Settings

    Science.gov (United States)

    Montoye, Alexander H. K.; Conger, Scott A.; Connolly, Christopher P.; Imboden, Mary T.; Nelson, M. Benjamin; Bock, Josh M.; Kaminsky, Leonard A.

    2017-01-01

    This study compared accuracy of energy expenditure (EE) prediction models from accelerometer data collected in structured and simulated free-living settings. Twenty-four adults (mean age 45.8 years, 50% female) performed two sessions of 11 to 21 activities, wearing four ActiGraph GT9X Link activity monitors (right hip, ankle, both wrists) and a…

  14. An ancestry informative marker set for determining continental origin: validation and extension using human genome diversity panels

    Directory of Open Access Journals (Sweden)

    Gregersen Peter K

    2009-07-01

    Full Text Available Abstract Background Case-control genetic studies of complex human diseases can be confounded by population stratification. This issue can be addressed using panels of ancestry informative markers (AIMs) that can provide substantial population substructure information. Previously, we described a panel of 128 SNP AIMs that were designed as a tool for ascertaining the origins of subjects from Europe, Sub-Saharan Africa, the Americas, and East Asia. Results In this study, genotypes from Human Genome Diversity Panel populations were used to further evaluate a 93 SNP AIM panel, a subset of the 128 AIM set, for distinguishing continental origins. Using both model-based and relatively model-independent methods, we here confirm the ability of this AIM set to distinguish diverse population groups that were not previously evaluated. This study included multiple population groups from Oceania, South Asia, East Asia, Sub-Saharan Africa, North and South America, and Europe. In addition, the 93 AIM set provides population substructure information that can, for example, distinguish Arab and Ashkenazi from Northern European population groups and Pygmy from other Sub-Saharan African population groups. Conclusion These data provide additional support for using the 93 AIM set to efficiently identify continental subject groups for genetic studies, to identify study population outliers, and to control for admixture in association studies.

  15. Clinical validation of the LKB model and parameter sets for predicting radiation-induced pneumonitis from breast cancer radiotherapy

    International Nuclear Information System (INIS)

    Tsougos, Ioannis; Mavroidis, Panayiotis; Theodorou, Kyriaki; Rajala, J; Pitkaenen, M A; Holli, K; Ojala, A T; Hyoedynmaa, S; Jaervenpaeae, Ritva; Lind, Bengt K; Kappas, Constantin

    2006-01-01

    The choice of the appropriate model and parameter set in determining the relation between the incidence of radiation pneumonitis and dose distribution in the lung is of great importance, especially in the case of breast radiotherapy where the observed incidence is fairly low. From our previous study based on 150 breast cancer patients, where the fits of dose-volume models to clinical data were estimated (Tsougos et al 2005 Evaluation of dose-response models and parameters predicting radiation induced pneumonitis using clinical data from breast cancer radiotherapy Phys. Med. Biol. 50 3535-54), one could get the impression that the relative seriality model is significantly better than the LKB NTCP model. However, the estimation of the different NTCP models was based on their goodness-of-fit to clinical data, using various sets of published parameters from other groups, and this fact may provisionally justify the results. Hence, we sought to investigate the LKB model further, by applying different published parameter sets to the very same group of patients, in order to be able to compare the results. It was shown that, depending on the parameter set applied, the LKB model is able to predict the incidence of radiation pneumonitis with acceptable accuracy, especially when implemented on a sub-group of patients (120) receiving a mean dose (D̄)/EUD higher than 8 Gy. In conclusion, the goodness-of-fit of a certain radiobiological model on a given clinical case is closely related to the selection of the proper scoring criteria and parameter set, as well as to the compatibility of the clinical case from which the data were derived. (letter to the editor)

  16. The ToMenovela – A photograph-based stimulus set for the study of social cognition with high ecological validity

    Directory of Open Access Journals (Sweden)

    Maike C. Herbort

    2016-12-01

    Full Text Available We present the ToMenovela, a stimulus set that has been developed to provide a set of normatively rated socio-emotional stimuli showing varying numbers of characters in emotionally laden interactions for experimental investigations of (i) cognitive and (ii) affective ToM, (iii) emotional reactivity, and (iv) complex emotion judgment with respect to Ekman's basic emotions (happiness, sadness, anger, fear, surprise, and disgust; Ekman & Friesen, 1975). Stimuli were generated with a focus on ecological validity and consist of 190 scenes depicting daily-life situations. Two or more of eight main characters with distinct biographies and personalities are depicted in each scene picture. To obtain an initial evaluation of the stimulus set and to pave the way for future studies in clinical populations, normative data on each stimulus of the set were obtained from a sample of 61 neurologically and psychiatrically healthy participants (31 female, 30 male; mean age 26.74 ± 5.84 years), including a visual analog scale rating of Ekman's basic emotions (happiness, sadness, anger, fear, surprise, and disgust) and free-text descriptions of the content. The ToMenovela is being developed to provide standardized material of social scenes that is available to researchers in the study of social cognition. It should facilitate experimental control while keeping ecological validity high.

  17. Development and Validation of a Simple Risk Score for Undiagnosed Type 2 Diabetes in a Resource-Constrained Setting

    Science.gov (United States)

    Gilman, Robert H.; Sanchez-Abanto, Jose R.; Study Group, CRONICAS Cohort

    2016-01-01

    Objective. To develop and validate a risk score for detecting cases of undiagnosed diabetes in a resource-constrained country. Methods. Two population-based studies in the Peruvian population aged ≥35 years were used in the analysis: the ENINBSC survey (n = 2,472) and the CRONICAS Cohort Study (n = 2,945). Fasting plasma glucose ≥7.0 mmol/L was used to diagnose diabetes in both studies. Coefficients for the risk score were derived from the ENINBSC data, and performance was then validated using both baseline and follow-up data of the CRONICAS Cohort Study. Results. The prevalence of undiagnosed diabetes was 2.0% in the ENINBSC survey and 2.9% in the CRONICAS Cohort Study. Predictors of undiagnosed diabetes were age, diabetes in first-degree relatives, and waist circumference. Score values ranged from 0 to 4, with an optimal cutoff of ≥2, and showed moderate performance when applied to the CRONICAS baseline data (AUC = 0.68; 95% CI: 0.62–0.73; sensitivity 70%; specificity 59%). When predicting incident cases, the AUC was 0.66 (95% CI: 0.61–0.71), with a sensitivity of 69% and specificity of 59%. Conclusions. A simple non-blood-based risk score based on age, diabetes in first-degree relatives, and waist circumference can be used as a screening tool for undiagnosed and incident cases of diabetes in Peru. PMID:27689096
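The score's optimal cutoff (≥2 here) is the kind of choice usually made by scanning every candidate cutoff and maximizing Youden's J (sensitivity + specificity − 1). A minimal sketch of that procedure, not the study's code, using invented scores and labels:

```python
# Hedged sketch: cutoff selection for an additive risk score by maximizing
# Youden's J. The data below are illustrative, not taken from the study.

def sens_spec(scores, labels, cutoff):
    """Sensitivity/specificity of the rule 'score >= cutoff predicts disease'."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, labels):
    """Cutoff with the largest Youden's J = sensitivity + specificity - 1."""
    j = {c: sum(sens_spec(scores, labels, c)) - 1 for c in sorted(set(scores))}
    return max(j, key=j.get)

# Illustrative data: integer risk scores 0-4; label 1 = undiagnosed diabetes.
scores = [0, 1, 1, 2, 2, 3, 3, 4, 0, 1, 2, 4]
labels = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1]
cutoff = best_cutoff(scores, labels)
```

With real survey data the same scan would also report the AUC; here only the cutoff search is shown.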

  18. The Basel Face Database: A validated set of photographs reflecting systematic differences in Big Two and Big Five personality dimensions.

    Science.gov (United States)

    Walker, Mirella; Schönborn, Sandro; Greifeneder, Rainer; Vetter, Thomas

    2018-01-01

    Upon a first encounter, individuals spontaneously associate faces with certain personality dimensions. Such first impressions can strongly impact judgments and decisions and may prove highly consequential. Researchers investigating the impact of facial information often rely on (a) real photographs that have been selected to vary on the dimension of interest, (b) morphed photographs, or (c) computer-generated faces (avatars). All three approaches have distinct advantages. Here we present the Basel Face Database, which combines these advantages. In particular, the Basel Face Database consists of real photographs that are subtly, but systematically manipulated to show variations in the perception of the Big Two and the Big Five personality dimensions. To this end, the information specific to each psychological dimension is isolated and modeled in new photographs. Two studies serve as systematic validation of the Basel Face Database. The Basel Face Database opens a new pathway for researchers across psychological disciplines to investigate effects of perceived personality.

  19. Reliability and Validity of Survey Instruments to Measure Work-Related Fatigue in the Emergency Medical Services Setting: A Systematic Review.

    Science.gov (United States)

    Patterson, P Daniel; Weaver, Matthew D; Fabio, Anthony; Teasley, Ellen M; Renn, Megan L; Curtis, Brett R; Matthews, Margaret E; Kroemer, Andrew J; Xun, Xiaoshuang; Bizhanova, Zhadyra; Weiss, Patricia M; Sequeira, Denisse J; Coppler, Patrick J; Lang, Eddy S; Higgins, J Stephen

    2018-02-15

    This study sought to systematically search the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. A systematic review design was used, searching six databases and one website. The research question guiding the search was developed a priori and registered with the PROSPERO database of systematic reviews: "Are there reliable and valid instruments for measuring fatigue among EMS personnel?" (2016:CRD42016040097). The primary outcome of interest was criterion-related validity. Important outcomes of interest included reliability (e.g., internal consistency) and indicators of sensitivity and specificity. Members of the research team independently screened records from the databases. Full-text articles were evaluated by adapting the Bolster and Rourke system for categorizing findings of systematic reviews, and data abstracted from the body of literature were rated as favorable, unfavorable, mixed/inconclusive, or no impact. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) methodology was used to evaluate the quality of evidence. The search strategy yielded 1,257 unique records. Thirty-four unique experimental and non-experimental studies were determined relevant following full-text review. Nineteen studies reported on the reliability and/or validity of ten different fatigue survey instruments. Eighteen different studies evaluated the reliability and/or validity of four different sleepiness survey instruments. None of the retained studies reported sensitivity or specificity. Evidence quality was rated as very low across all outcomes. In summary, this systematic review identified limited evidence of the reliability and validity of 14 different survey instruments for assessing the fatigue and/or sleepiness status of EMS personnel and related shift worker groups.

  20. Development of Reliable and Validated Tools to Evaluate Technical Resuscitation Skills in a Pediatric Simulation Setting: Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics.

    Science.gov (United States)

    Faudeux, Camille; Tran, Antoine; Dupont, Audrey; Desmontils, Jonathan; Montaudié, Isabelle; Bréaud, Jean; Braun, Marc; Fournier, Jean-Paul; Bérard, Etienne; Berlengi, Noémie; Schweitzer, Cyril; Haas, Hervé; Caci, Hervé; Gatin, Amélie; Giovannini-Chami, Lisa

    2017-09-01

    To develop a reliable and validated tool to evaluate technical resuscitation skills in a pediatric simulation setting. Four Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics (RESCAPE) evaluation tools were created, following international guidelines: intraosseous needle insertion, bag mask ventilation, endotracheal intubation, and cardiac massage. We applied a modified Delphi methodology evaluation to binary rating items. Reliability was assessed comparing the ratings of 2 observers (1 in real time and 1 after a video-recorded review). The tools were assessed for content, construct, and criterion validity, and for sensitivity to change. Inter-rater reliability, evaluated with Cohen kappa coefficients, was perfect or near-perfect (>0.8) for 92.5% of items and each Cronbach alpha coefficient was ≥0.91. Principal component analyses showed that all 4 tools were unidimensional. Significant increases in median scores with increasing levels of medical expertise were demonstrated for RESCAPE-intraosseous needle insertion (P = .0002), RESCAPE-bag mask ventilation (P = .0002), RESCAPE-endotracheal intubation (P = .0001), and RESCAPE-cardiac massage (P = .0037). Significantly increased median scores over time were also demonstrated during a simulation-based educational program. RESCAPE tools are reliable and validated tools for the evaluation of technical resuscitation skills in pediatric settings during simulation-based educational programs. They might also be used for medical practice performance evaluations. Copyright © 2017 Elsevier Inc. All rights reserved.
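The inter-rater reliability reported here rests on Cohen's kappa, which corrects raw agreement between two observers for the agreement expected by chance. A minimal sketch of the computation for two raters' binary checklist items; the ratings below are invented for illustration:

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    n = len(r1)
    categories = set(r1) | set(r2)
    p_obs = sum(1 for a, b in zip(r1, r2) if a == b) / n   # raw agreement
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n)      # chance agreement
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Invented binary checklist ratings (1 = item performed) from two observers,
# mimicking the real-time vs. video-review comparison described above.
live_rater = [1, 1, 1, 0, 1, 0, 1, 1]
video_rater = [1, 1, 1, 0, 1, 1, 1, 1]
kappa = cohen_kappa(live_rater, video_rater)
```

Values above 0.8, as reported for most RESCAPE items, are conventionally read as near-perfect agreement.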

  1. Development of the Human Factors Skills for Healthcare Instrument: a valid and reliable tool for assessing interprofessional learning across healthcare practice settings.

    Science.gov (United States)

    Reedy, Gabriel B; Lavelle, Mary; Simpson, Thomas; Anderson, Janet E

    2017-10-01

    A central feature of clinical simulation training is human factors skills, providing staff with the social and cognitive skills to cope with demanding clinical situations. Although these skills are critical to safe patient care, assessing their learning is challenging. This study aimed to develop, pilot and evaluate a valid and reliable structured instrument to assess human factors skills, which can be used pre- and post-simulation training, and is relevant across a range of healthcare professions. Through consultation with a multi-professional expert group, we developed and piloted a 39-item survey with 272 healthcare professionals attending training courses across two large simulation centres in London, one specialising in acute care and one in mental health, both serving healthcare professionals working across acute and community settings. Following psychometric evaluation, the final 12-item instrument was evaluated with a second sample of 711 trainees. Exploratory factor analysis revealed a 12-item, one-factor solution with good internal consistency (α = 0.92). The instrument had discriminant validity, with newly qualified trainees scoring significantly lower than experienced trainees (t(98) = 4.88, p < .001). The Human Factors Skills for Healthcare Instrument provides a reliable and valid method of assessing trainees' human factors skills self-efficacy across acute and mental health settings. This instrument has the potential to improve the assessment and evaluation of human factors skills learning in both uniprofessional and interprofessional clinical simulation training.
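The internal consistency figure (α = 0.92) is Cronbach's alpha, computed from the per-item variances and the variance of each respondent's total score. A small sketch with hypothetical item data, not the survey's:

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items[i] holds item i's scores over the same respondents."""
    k = len(items)      # number of items
    n = len(items[0])   # number of respondents

    def var(xs):        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[r] for item in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item, 3-respondent example (not data from the study).
alpha = cronbach_alpha([[2, 4, 6], [1, 2, 3], [3, 5, 7]])
```

When every item tracks the total score closely, the item-variance sum is small relative to the total-score variance and alpha approaches 1.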

  2. Validity of measures of pain and symptoms in HIV/AIDS infected households in resources poor settings: results from the Dominican Republic and Cambodia

    Directory of Open Access Journals (Sweden)

    Morineau Guy

    2006-03-01

    Full Text Available Abstract Background HIV/AIDS treatment programs that include palliative care services are currently being mounted in many developing nations. While measures of palliative care have been developed and validated for resource-rich settings, very little work exists to support an understanding of measurement for Africa, Latin America or Asia. Methods This study investigates the construct validity of measures of reported pain, pain control, symptoms and symptom control in areas with high HIV prevalence in the Dominican Republic and Cambodia. Measures were adapted from the POS (Palliative Outcome Scale). Households were selected through purposive sampling from networks of people living with HIV/AIDS. Consistencies in patterns in the data were tested using Chi-square and Mantel-Haenszel tests. Results Sample persons who reported chronic illness were much more likely to report pain and symptoms compared to those not chronically ill. When controlling for the degree of pain, pain control did not differ between the chronically ill and non-chronically ill using a Mantel-Haenszel test in both countries. Similar results were found for reported symptoms and symptom control in the Dominican Republic. These findings broadly support the construct validity of an adapted version of the POS in these two less developed countries. Conclusion The results of the study suggest that the selected measures can usefully be incorporated into population-based surveys and evaluation tools needed to monitor palliative care, and used in settings with high HIV/AIDS prevalence.

  3. De-MetaST-BLAST: a tool for the validation of degenerate primer sets and data mining of publicly available metagenomes.

    Directory of Open Access Journals (Sweden)

    Christopher A Gulvik

    Full Text Available Development and use of primer sets to amplify nucleic acid sequences of interest is fundamental to studies spanning many life science disciplines. As such, the validation of primer sets is essential. Several computer programs have been created to aid in the initial selection of primer sequences that may or may not require multiple nucleotide combinations (i.e., degeneracies). Conversely, validation of primer specificity has remained largely unchanged for several decades, and there are currently few available programs that allow for an evaluation of primers containing degenerate nucleotide bases. To address this gap, we developed the program De-MetaST, which performs an in silico amplification using user-defined nucleotide sequence dataset(s) and primer sequences that may contain degenerate bases. The program returns an output file that contains the in silico amplicons. When De-MetaST is paired with NCBI's BLAST (De-MetaST-BLAST), the program also returns the top 10 nr NCBI database hits for each recovered in silico amplicon. While the original motivation for development of this search tool was degenerate primer validation using the wealth of nucleotide sequences available in environmental metagenome and metatranscriptome databases, this search tool has potential utility in many data mining applications.

  4. Performance of Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry for Identification of Aspergillus, Scedosporium, and Fusarium spp. in the Australian Clinical Setting.

    Science.gov (United States)

    Sleiman, Sue; Halliday, Catriona L; Chapman, Belinda; Brown, Mitchell; Nitschke, Joanne; Lau, Anna F; Chen, Sharon C-A

    2016-08-01

    We developed an Australian database for the identification of Aspergillus, Scedosporium, and Fusarium species (n = 28) by matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS). In a challenge against 117 isolates, species identification significantly improved when the in-house-built database was combined with the Bruker Filamentous Fungi Library compared with that for the Bruker library alone (Aspergillus, 93% versus 69%; Fusarium, 84% versus 42%; and Scedosporium, 94% versus 18%, respectively). Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  5. The Child Behaviour Assessment Instrument: development and validation of a measure to screen for externalising child behavioural problems in community setting

    Directory of Open Access Journals (Sweden)

    Perera Hemamali

    2010-06-01

    Full Text Available Abstract Background In Sri Lanka, behavioural problems have grown to epidemic proportions accounting second highest category of mental health problems among children. Early identification of behavioural problems in children is an important pre-requisite of the implementation of interventions to prevent long term psychiatric outcomes. The objectives of the study were to develop and validate a screening instrument for use in the community setting to identify behavioural problems in children aged 4-6 years. Methods An initial 54 item questionnaire was developed following an extensive review of the literature. A three round Delphi process involving a panel of experts from six relevant fields was then undertaken to refine the nature and number of items and created the 15 item community screening instrument, Child Behaviour Assessment Instrument (CBAI. Validation study was conducted in the Medical Officer of Health area Kaduwela, Sri Lanka and a community sample of 332 children aged 4-6 years were recruited by two stage randomization process. The behaviour status of the participants was assessed by an interviewer using the CBAI and a clinical psychologist following clinical assessment concurrently. Criterion validity was appraised by assessing the sensitivity, specificity and predictive values at the optimum screen cut off value. Construct validity of the instrument was quantified by testing whether the data of validation study fits to a hypothetical model. Face and content validity of the CBAI were qualitatively assessed by a panel of experts. The reliability of the instrument was assessed by internal consistency analysis and test-retest methods in a 15% subset of the community sample. 
Results Using Receiver Operating Characteristic analysis, a CBAI score of >16 was identified as the cutoff point that optimally differentiated children having behavioural problems, with a sensitivity of 0.88 (95% CI = 0.80-0.96) and specificity of 0.81 (95% CI = 0

  6. Dynamic Matrix Rank

    DEFF Research Database (Denmark)

    Frandsen, Gudmund Skovbjerg; Frandsen, Peter Frands

    2009-01-01

    We consider maintaining information about the rank of a matrix under changes of the entries. For n×n matrices, we show an upper bound of O(n^1.575) arithmetic operations and a lower bound of Ω(n) arithmetic operations per element change. The upper bound is valid when changing up to O(n^0.575) entries in a single column of the matrix. We also give an algorithm that maintains the rank using O(n^2) arithmetic operations per rank-one update. These bounds appear to be the first nontrivial bounds for the problem. The upper bounds are valid for arbitrary fields, whereas the lower bound is valid for algebraically closed fields. The upper bound for element updates uses fast rectangular matrix multiplication, and the lower bound involves further development of an earlier technique for proving lower bounds for dynamic computation of rational functions.
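For contrast with the paper's O(n^1.575)-per-update bound, the naive dynamic baseline simply recomputes the rank from scratch by Gaussian elimination after each entry change, at O(n^3) arithmetic operations per update. A sketch (not the paper's algorithm) over the rationals, where exact arithmetic avoids any numerical rank tolerance:

```python
from fractions import Fraction

def rank(mat):
    """Matrix rank via Gaussian elimination over the rationals (exact)."""
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        # Find a pivot in column c at or below row r.
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            for j in range(c, cols):
                m[i][j] -= f * m[r][j]
        r += 1
    return r

def update_and_rank(mat, i, j, value):
    """Naive dynamic step: change one entry, then recompute the rank (O(n^3))."""
    mat[i][j] = value
    return rank(mat)
```

The paper's contribution is precisely to beat this per-update cost while supporting the same single-entry (and rank-one) updates.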

  7. Development and validation of an app-based cell counter for use in the clinical laboratory setting

    Directory of Open Access Journals (Sweden)

    Alexander C Thurman

    2015-01-01

    Full Text Available Introduction: For decades, cellular differentials have been generated exclusively on analog tabletop cell counters. With the advent of tablet computers, digital cell counters - in the form of mobile applications ("apps") - now represent an alternative to analog devices. However, app-based counters have not been widely adopted by clinical laboratories, perhaps owing to a presumed decrease in count accuracy related to the lack of tactile feedback inherent in a touchscreen interface. We herein provide the first systematic evidence that digital cell counters function similarly to standard tabletop units. Methods: We developed an app-based cell counter optimized for use in the clinical laboratory setting. Paired counts of 188 peripheral blood smears and 62 bone marrow aspirate smears were performed using our app-based counter and a standard analog device. Differences between paired data sets were analyzed using the correlation coefficient, Student's t-test for paired samples and Bland-Altman plots. Results: All counts showed excellent agreement across all users and touchscreen devices. With the exception of peripheral blood basophils (r = 0.684), differentials generated for the measured cell categories within the paired data sets were highly correlated (all r ≥ 0.899). Results of paired t-tests did not reach statistical significance for any cell type (all P > 0.05), and Bland-Altman plots showed a narrow spread of the differences about the mean without evidence of significant outliers. Conclusions: Our analysis suggests that no systematic differences exist between cellular differentials obtained via app-based or tabletop counters and that agreement between these two methods is excellent.
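The Bland-Altman analysis used here summarizes paired-method agreement by the mean difference (bias) and the 95% limits of agreement, bias ± 1.96 × SD of the differences. A minimal sketch with invented paired counts, not the study's data:

```python
import math

def bland_altman(a, b):
    """Bias and 95% limits of agreement for two paired measurement methods."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))  # sample SD
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented paired neutrophil percentages: app-based vs. tabletop counter.
app = [62, 55, 70, 48, 66, 59]
tabletop = [61, 56, 69, 48, 65, 60]
bias, (lo, hi) = bland_altman(app, tabletop)
```

A bias near zero with narrow limits that straddle zero is the pattern the abstract describes as "a narrow spread of the differences about the mean."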

  8. Predictive validity of the identification of seniors at risk screening tool in a German emergency department setting.

    Science.gov (United States)

    Singler, Katrin; Heppner, Hans Jürgen; Skutetzky, Andreas; Sieber, Cornel; Christ, Michael; Thiem, Ulrich

    2014-01-01

    The identification of patients at high risk for adverse outcomes [death, unplanned readmission to the emergency department (ED)/hospital, functional decline] plays an important role in emergency medicine. The Identification of Seniors at Risk (ISAR) instrument is one of the most commonly used and best-validated screening tools. To the authors' knowledge, there are so far no data on any screening tool for identifying older patients at risk of a negative outcome in Germany. The aim was to evaluate the validity of the ISAR screening tool in a German ED. This was a prospective single-center observational cohort study in the ED of an urban university-affiliated hospital. Participants were 520 patients aged ≥75 years consecutively admitted to the ED. The German version of the ISAR screening tool was administered directly after triage of the patients. Follow-up telephone interviews to assess outcome variables were conducted 28 and 180 days after the index visit in the ED. The primary end point was death from any cause, or hospitalization, or recurrent ED visit, or change of residency into a long-term care facility on day 28 after the index ED visit. The mean age ± SD was 82.8 ± 5.0 years. According to ISAR, 425 patients (81.7%) scored ≥2 points, and 315 patients (60.5%) scored ≥3 points. The combined primary end point was observed in 250 of 520 patients (48.1%) on day 28 and in 260 patients (50.0%) on day 180. Using a continuous ISAR score, the area under the curve on day 28 was 0.621 (95% confidence interval, CI 0.573-0.669) and 0.661 (95% CI 0.615-0.708) on day 180, respectively. The German version of the ISAR screening tool acceptably identified elderly patients in the ED with an increased risk of a negative outcome. Using the cutoff ≥3 points instead of ≥2 points yielded better overall results.

  9. Hope Matters: Developing and Validating a Measure of Future Expectations Among Young Women in a High HIV Prevalence Setting in Rural South Africa (HPTN 068).

    Science.gov (United States)

    Abler, Laurie; Hill, Lauren; Maman, Suzanne; DeVellis, Robert; Twine, Rhian; Kahn, Kathleen; MacPhail, Catherine; Pettifor, Audrey

    2017-07-01

    Hope is a future expectancy characterized by an individual's perception that a desirable future outcome can be achieved. Though scales exist to measure hope, they may have limited relevance in low-resource, high HIV prevalence settings. We developed and validated a hope scale among young women living in rural South Africa. We conducted formative interviews to identify the key elements of hope. Using items developed from these interviews, we administered the hope scale to 2533 young women enrolled in an HIV-prevention trial. Women endorsed scale items highly and the scale proved to be unidimensional in the sample. Hope scores were significantly correlated with hypothesized psychosocial correlates, with the exception of life stressors. Overall, our hope measure was found to have excellent reliability and to show encouraging preliminary indications of validity in this population. This study presents a promising measure to assess hope among young women in South Africa.

  10. Validation of the Self Reporting Questionnaire 20-Item (SRQ-20) for Use in a Low- and Middle-Income Country Emergency Centre Setting

    Science.gov (United States)

    Wyatt, Gail; Williams, John K.; Stein, Dan J.; Sorsdahl, Katherine

    2015-01-01

    Common mental disorders are highly prevalent in emergency centre (EC) patients, yet few brief screening tools have been validated for low- and middle-income country (LMIC) ECs. This study explored the psychometric properties of the SRQ-20 screening tool in South African ECs using the Mini Neuropsychiatric Interview (MINI) as the gold-standard comparison tool. Patients (n=200) from two ECs in Cape Town, South Africa were interviewed using the SRQ-20 and the MINI. Internal consistency, screening properties and factorial validity were examined. The SRQ-20 was effective in identifying participants with major depression, anxiety disorders or suicidality and displayed good internal consistency. The optimal cutoff scores were 4/5 and 6/7 for men and women, respectively. The factor structure differed by gender. The SRQ-20 is a useful tool for EC settings in South Africa and holds promise for task-shifted approaches to decreasing the LMIC burden of mental disorders. PMID:26957953

  11. Validating the use of colouration patterns for individual recognition in the worm pipefish using a novel set of microsatellite markers.

    Science.gov (United States)

    Monteiro, N M; Silva, R M; Cunha, M; Antunes, A; Jones, A G; Vieira, M N

    2014-01-01

    In studies of behaviour, ecology and evolution, identification of individual organisms can be an invaluable tool, capable of unravelling otherwise cryptic information regarding group structure, movement patterns, population size and mating strategies. The use of natural markings is arguably the least invasive method for identification. However, to be truly useful, natural markings must be sufficiently variable to allow for unique identification, while being stable enough to permit long-term studies. Non-invasive marking techniques are especially important in fishes of the Family Syngnathidae (pipefishes, seahorses and seadragons), as many of these taxa are of conservation concern or used extensively in studies of sexual selection. Here, we assessed the reliability of natural markings as a character for individual identification in a wild population of Nerophis lumbriciformis by comparing results from natural markings to individual genetic assignments based on eight novel microsatellite loci. We also established a minimally invasive method based on epithelial cell swabbing to sample DNA. All pipefish used in the validation of natural markings, independently of sex or time between recaptures, were individually recognized through facial colouration patterns. Their identities were verified by the observation of the same multilocus genotype at every sampling event for each individual that was identified on the basis of natural markings. Successful recaptures of previously swabbed pipefish indicated that this process probably did not induce an elevated rate of mortality. Also, the recapture of newly pregnant males showed that swabbing did not affect reproductive behaviour. © 2013 John Wiley & Sons Ltd.

  12. Matrix theory

    CERN Document Server

    Franklin, Joel N

    2003-01-01

    Mathematically rigorous introduction covers vector and matrix norms, the condition-number of a matrix, positive and irreducible matrices, much more. Only elementary algebra and calculus required. Includes problem-solving exercises. 1968 edition.

  13. Influence of different process settings conditions on the accuracy of micro injection molding simulations: an experimental validation

    DEFF Research Database (Denmark)

    Tosello, Guido; Gava, Alberto; Hansen, Hans Nørgaard

    2009-01-01

    Currently available software packages exhibit poor accuracy of results when performing micro injection molding (µIM) simulations. However, with an appropriate set-up of the processing conditions, the quality of the results can be improved. The effects on the simulation results of different and alternative process conditions are investigated, namely the nominal injection speed, as well as the cavity filling time and the evolution of the cavity injection pressure as experimental data. In addition, the sensitivity of the results to the quality of the rheological data is analyzed. Simulated results are compared with experiments in terms of flow front position at part and micro-feature levels, as well as cavity injection filling time measurements.

  14. Measuring the Value of New Drugs: Validity and Reliability of 4 Value Assessment Frameworks in the Oncology Setting.

    Science.gov (United States)

    Bentley, Tanya G K; Cohen, Joshua T; Elkin, Elena B; Huynh, Julie; Mukherjea, Arnab; Neville, Thanh H; Mei, Matthew; Copher, Ronda; Knoth, Russell; Popescu, Ioana; Lee, Jackie; Zambrano, Jenelle M; Broder, Michael S

    2017-06-01

    Several organizations have developed frameworks to systematically assess the value of new drugs. To evaluate the convergent validity and interrater reliability of 4 value frameworks and understand the extent to which these tools can facilitate value-based treatment decisions in oncology. Eight panelists used the American Society of Clinical Oncology (ASCO), European Society for Medical Oncology (ESMO), Institute for Clinical and Economic Review (ICER), and National Comprehensive Cancer Network (NCCN) frameworks to conduct value assessments of 15 drugs for advanced lung and breast cancers and castration-refractory prostate cancer. Panelists received instructions and the published clinical data required to complete the assessments, assigning each drug a numeric or letter score. Kendall's Coefficient of Concordance for Ranks (Kendall's W) was used to measure convergent validity by cancer type among the 4 frameworks. Intraclass correlation coefficients (ICCs) were used to measure interrater reliability for each framework across cancers. Panelists were surveyed on their experiences. Kendall's W across all 4 frameworks for breast, lung, and prostate cancer drugs was 0.560 (P = 0.010), 0.562 (P = 0.010), and 0.920, respectively. ICCs ranged from fair to excellent, increasing with clinical benefit subdomain concordance and simplicity of drug trial data. Interrater reliability, highest for ASCO and ESMO, improved with clarity of instructions and specificity of score definitions. Continued use, analyses, and refinements of these frameworks will bring us closer to the ultimate goal of using value-based treatment decisions to improve patient care and outcomes. This work was funded by Eisai Inc. Copher and Knoth are employees of Eisai Inc. Bentley, Lee, Zambrano, and Broder are employees of Partnership for Health Analytic Research, a health services research company paid by Eisai Inc. to conduct this research. For this study, Cohen, Huynh, and Neville report fees from Partnership for Health Analytic Research.
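Kendall's W, the concordance statistic used for convergent validity here, is computed from rank sums: W = 12S / (m²(n³ − n)) for m raters ranking n items, where S is the sum of squared deviations of the per-item rank sums from their mean. A small sketch assuming complete rankings with no ties (tied scores would need a correction term); the rankings below are invented, not the panel's:

```python
def kendalls_w(rankings):
    """Kendall's W for m raters each assigning ranks 1..n to the same n items."""
    m, n = len(rankings), len(rankings[0])
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean = m * (n + 1) / 2                       # expected rank sum per item
    s = sum((x - mean) ** 2 for x in rank_sums)  # squared deviations
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Invented example: 3 frameworks ranking 4 drugs (1 = highest value).
w = kendalls_w([[1, 2, 3, 4], [1, 3, 2, 4], [2, 1, 3, 4]])
```

W runs from 0 (no agreement) to 1 (identical rankings), so the 0.920 reported for prostate cancer drugs indicates near-unanimous ordering across the four frameworks.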

  15. Creation and validation of a novel body condition scoring method for the magellanic penguin (Spheniscus magellanicus) in the zoo setting.

    Science.gov (United States)

    Clements, Julie; Sanchez, Jessica N

    2015-11-01

    This research aims to validate a novel, visual body scoring system created for the Magellanic penguin (Spheniscus magellanicus) suitable for the zoo practitioner. Magellanics go through marked seasonal fluctuations in body mass gains and losses. A standardized multi-variable visual body condition guide may provide a more sensitive and objective assessment tool compared to the previously used single-variable method. Accurate body condition scores paired with seasonal weight variation measurements give veterinary and keeper staff a clearer understanding of an individual's nutritional status. San Francisco Zoo staff previously used a nine-point body condition scale based on the classic bird standard of a single point of keel palpation with the bird restrained in hand, with no standard measure of reference assigned to each scoring category. We created a novel, visual body condition scoring system that does not require restraint to assess subcutaneous fat and muscle at seven body landmarks using illustrations and descriptive terms. The scores range from one, the least robust or under-conditioned, to five, the most robust or over-conditioned. The ratio of body weight to wing length was used as a "gold standard" index of body condition and compared to both the novel multi-variable and previously used single-variable body condition scores. The novel multi-variable scale showed improved agreement with the weight:wing ratio compared to the single-variable scale, demonstrating greater accuracy and reliability when a trained assessor uses the multi-variable body condition scoring system. Zoo staff may use this tool to manage both the colony and the individual to assist in seasonally appropriate Magellanic penguin nutrition assessment. © 2015 Wiley Periodicals, Inc.

  16. Quantitative co-localization and pattern analysis of endo-lysosomal cargo in subcellular image cytometry and validation on synthetic image sets

    DEFF Research Database (Denmark)

    Lund, Frederik W.; Wüstner, Daniel

    2017-01-01

    /LYSs. Analysis of endocytic trafficking relies heavily on quantitative fluorescence microscopy, but evaluation of the huge image data sets is challenging and demands computer-assisted statistical tools. Here, we describe how to use SpatTrack (www.sdu.dk/bmb/spattrack), an imaging toolbox, which we developed … such synthetic vesicle patterns as "ground truth" for validation of two-channel analysis tools in SpatTrack, revealing their high reliability. An improved version of SpatTrack for microscopy-based quantification of cargo transport through the endo-lysosomal system accompanies this protocol.

  17. Growth and setting of gas bubbles in a viscoelastic matrix imaged by X-ray microtomography: the evolution of cellular structures in fermenting wheat flour dough.

    Science.gov (United States)

    Turbin-Orger, A; Babin, P; Boller, E; Chaunier, L; Chiron, H; Della Valle, G; Dendievel, R; Réguerre, A L; Salvo, L

    2015-05-07

    X-ray tomography is a relevant technique for the dynamic follow-up of gas bubbles in an opaque viscoelastic matrix, especially using image analysis. It has been applied here to pieces of fermenting wheat flour dough of various compositions, at two different voxel sizes (15 and 5 μm). The resulting evolution of the main cellular features shows that the creation of cellular structures follows two regimes that are defined by a characteristic time of connectivity, tc [30 and 80 min]: first (t ≤ tc), bubbles grow freely and then (t ≥ tc) they become connected since the percolation of the gas phase is limited by liquid films. During the first regime, bubbles can be tracked and the local strain rate can be measured. Its values (10(-4)-5 × 10(-4) s(-1)) are in agreement with those computed from dough viscosity and internal gas pressure, both of which depend on the composition. For higher porosity, P = 0.64 in our case, and thus occurring in the second regime, different cellular structures are obtained and XRT images show deformed gas cells that display complex shapes. The comparison of these images with confocal laser scanning microscopy images suggests the presence of liquid films that separate these cells. The dough can therefore be seen as a three-phase medium: viscoelastic matrix/gas cell/liquid phase. The contributions of the different levels of matter organization can be integrated by defining a capillary number (C = 0.1-1) that makes it possible to predict the macroscopic dough behavior.

  18. Validation of Doloplus-2 among nonverbal nursing home patients - an evaluation of Doloplus-2 in a clinical setting

    Directory of Open Access Journals (Sweden)

    Kirkevold Øyvind

    2010-02-01

    Full Text Available Abstract Background Pain measurement in nonverbal older adults is best based on behavioural observation, e.g. using an observational measurement tool such as Doloplus-2. The purposes of this study were to examine the use of Doloplus-2 in a nonverbal nursing home population, and to evaluate its reliability and validity by comparing registered nurses' estimation of pain with Doloplus-2 scores. Method In this cross-sectional study, Doloplus-2 was used to observe the pain behaviour of patients aged above 65 years who were unable to self-report their pain. Nurses also recorded their perceptions of patient pain (yes, no, don't know) before they used Doloplus-2. Data on demographics, medical diagnoses, and prescribed pain treatment were collected from patient records. Daily life functioning was measured and participants were screened using the Mini Mental State Examination. Results In total, 77 nursing home patients were included; 75% were women and the mean age was 86 years (SD 6.6, range 68-100). Over 50% were dependent on nursing care to a high or medium degree, and all were severely cognitively impaired. The percentage of zero scores on Doloplus-2 ranged from 17% (somatic reactions) to 40% (psychosocial reactions). Cronbach's alpha was 0.71 for the total scale. In total, 52% of the patients were judged by nurses to be experiencing pain, compared with 68% when using Doloplus-2 (p = 0.01). For 29% of the sample, nurses were unable to report whether the patients were in pain. Conclusions In the present study, more patients were categorized as having pain when using Doloplus-2 than by nurses' estimation of pain without any tools. The fact that nurses could not say whether the patients were in pain for one third of the sample supports the claim that Doloplus-2 is a useful supplement for estimating pain in this population.
However, nurses must use their clinical experience in addition to Doloplus-2, as the same behaviour can have different meanings.

  19. Validation of a coupled wave-flow model in a high-energy setting: the mouth of the Columbia River

    Science.gov (United States)

    Elias, Edwin P.L.; Gelfenbaum, Guy R.; van der Westhuysen, André J.

    2012-01-01

     A monthlong time series of wave, current, salinity, and suspended-sediment measurements was made at five sites on a transect across the Mouth of Columbia River (MCR). These data were used to calibrate and evaluate the performance of a coupled hydrodynamic and wave model for the MCR based on the Delft3D modeling system. The MCR is a dynamic estuary inlet in which tidal currents, river discharge, and wave-driven currents are all important. Model tuning consisted primarily of spatial adjustments to bottom drag coefficients. In combination with (near-) default parameter settings, the MCR model application is able to simulate the dominant features in the tidal flow, salinity and wavefields observed in field measurements. The wave-orbital averaged method for representing the current velocity profile in the wave model is considered the most realistic for the MCR. The hydrodynamic model is particularly effective in reproducing the observed vertical residual and temporal variations in current structure. Density gradients introduce the observed and modeled reversal of the mean flow at the bed and augment mean and peak flow in the upper half of the water column. This implies that sediment transport during calmer summer conditions is controlled by density stratification and is likely net landward due to the reversal of flow near the bed. The correspondence between observed and modeled hydrodynamics makes this application a tool to investigate hydrodynamics and associated sediment transport.

  20. Selection and validation of a set of reliable reference genes for quantitative sod gene expression analysis in C. elegans

    Directory of Open Access Journals (Sweden)

    Vandesompele Jo

    2008-01-01

    Full Text Available Abstract Background In the nematode Caenorhabditis elegans the conserved Ins/IGF-1 signaling pathway regulates many biological processes including life span, stress response, dauer diapause and metabolism. Detection of differentially expressed genes may contribute to a better understanding of the mechanism by which the Ins/IGF-1 signaling pathway regulates these processes. Appropriate normalization is an essential prerequisite for obtaining accurate and reproducible quantification of gene expression levels. The aim of this study was to establish a reliable set of reference genes for gene expression analysis in C. elegans. Results Real-time quantitative PCR was used to evaluate the expression stability of 12 candidate reference genes (act-1, ama-1, cdc-42, csq-1, eif-3.C, mdh-1, gpd-2, pmp-3, tba-1, Y45F10D.4, rgs-6 and unc-16 in wild-type, three Ins/IGF-1 pathway mutants, dauers and L3 stage larvae. After geNorm analysis, cdc-42, pmp-3 and Y45F10D.4 showed the most stable expression pattern and were used to normalize 5 sod expression levels. Significant differences in mRNA levels were observed for sod-1 and sod-3 in daf-2 relative to wild-type animals, whereas in dauers sod-1, sod-3, sod-4 and sod-5 are differentially expressed relative to third stage larvae. Conclusion Our findings emphasize the importance of accurate normalization using stably expressed reference genes. The methodology used in this study is generally applicable to reliably quantify gene expression levels in the nematode C. elegans using quantitative PCR.
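The normalization step described above — dividing a target gene's relative quantity by the geometric mean of the quantities of the stable reference genes (cdc-42, pmp-3 and Y45F10D.4) — can be sketched as below. The Ct values and the assumption of equal 100% amplification efficiency (E = 2) are illustrative, not data from the study:

```python
import math

def rel_q(ct, efficiency=2.0):
    # relative quantity on an arbitrary scale, assuming equal efficiency E
    return efficiency ** (-ct)

def norm_factor(ref_cts):
    # geNorm-style normalization factor: geometric mean of reference quantities
    qs = [rel_q(ct) for ct in ref_cts]
    return math.prod(qs) ** (1.0 / len(qs))

# hypothetical Ct values for sod-3 and the three reference genes
samples = {
    "wild-type": {"sod-3": 27.0, "refs": [18.0, 22.0, 20.0]},
    "daf-2": {"sod-3": 24.0, "refs": [18.1, 21.9, 20.1]},
}
norm = {name: rel_q(s["sod-3"]) / norm_factor(s["refs"])
        for name, s in samples.items()}
fold_change = norm["daf-2"] / norm["wild-type"]
print(fold_change)  # roughly 8-fold up-regulation in this invented example
```

Using the geometric mean damps the effect of any single drifting reference gene, which is the rationale behind multi-gene normalization.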

  1. Cross-validation of biomarkers for the early differential diagnosis and prognosis of dementia in a clinical setting

    International Nuclear Information System (INIS)

    Perani, Daniela; Cerami, Chiara; Caminiti, Silvia Paola; Santangelo, Roberto; Coppi, Elisabetta; Ferrari, Laura; Magnani, Giuseppe; Pinto, Patrizia; Passerini, Gabriella; Falini, Andrea; Iannaccone, Sandro; Cappa, Stefano Francesco; Comi, Giancarlo; Gianolli, Luigi

    2016-01-01

    The aim of this study was to evaluate the supportive role of molecular and structural biomarkers (CSF protein levels, FDG PET and MRI) in the early differential diagnosis of dementia in a large sample of patients with neurodegenerative dementia, and in determining the risk of disease progression in subjects with mild cognitive impairment (MCI). We evaluated the supportive role of CSF Aβ42, t-Tau, p-Tau levels, conventional brain MRI and visual assessment of FDG PET SPM t-maps in the early diagnosis of dementia and the evaluation of MCI progression. Diagnosis based on molecular biomarkers showed the best fit with the final diagnosis at a long follow-up. FDG PET SPM t-maps had the highest diagnostic accuracy in Alzheimer's disease and in the differential diagnosis of non-Alzheimer's disease dementias. The p-tau/Aβ42 ratio was the only CSF biomarker providing a significant classification rate for Alzheimer's disease. An Alzheimer's disease-positive metabolic pattern as shown by FDG PET SPM in MCI was the best predictor of conversion to Alzheimer's disease. In this clinical setting, FDG PET SPM t-maps and the p-tau/Aβ42 ratio improved clinical diagnostic accuracy, supporting the importance of these biomarkers in the emerging diagnostic criteria for Alzheimer's disease dementia. FDG PET using SPM t-maps had the highest predictive value by identifying hypometabolic patterns in different neurodegenerative dementias and normal brain metabolism in MCI, confirming its additional crucial exclusionary role. (orig.)

  2. Cross-validation of biomarkers for the early differential diagnosis and prognosis of dementia in a clinical setting

    Energy Technology Data Exchange (ETDEWEB)

    Perani, Daniela [Vita-Salute San Raffaele University, Milan (Italy); San Raffaele Scientific Institute, Division of Neuroscience, Milan (Italy); San Raffaele Hospital, Nuclear Medicine Unit, Milan (Italy); Cerami, Chiara [Vita-Salute San Raffaele University, Milan (Italy); San Raffaele Scientific Institute, Division of Neuroscience, Milan (Italy); San Raffaele Hospital, Clinical Neuroscience Department, Milan (Italy); Caminiti, Silvia Paola [Vita-Salute San Raffaele University, Milan (Italy); San Raffaele Scientific Institute, Division of Neuroscience, Milan (Italy); Santangelo, Roberto; Coppi, Elisabetta; Ferrari, Laura; Magnani, Giuseppe [San Raffaele Hospital, Department of Neurology, Milan (Italy); Pinto, Patrizia [Papa Giovanni XXIII Hospital, Department of Neurology, Bergamo (Italy); Passerini, Gabriella [Servizio di Medicina di Laboratorio OSR, Milan (Italy); Falini, Andrea [Vita-Salute San Raffaele University, Milan (Italy); San Raffaele Scientific Institute, Division of Neuroscience, Milan (Italy); San Raffaele Hospital, CERMAC - Department of Neuroradiology, Milan (Italy); Iannaccone, Sandro [San Raffaele Hospital, Clinical Neuroscience Department, Milan (Italy); Cappa, Stefano Francesco [San Raffaele Scientific Institute, Division of Neuroscience, Milan (Italy); IUSS Pavia, Pavia (Italy); Comi, Giancarlo [Vita-Salute San Raffaele University, Milan (Italy); San Raffaele Hospital, Department of Neurology, Milan (Italy); Gianolli, Luigi [San Raffaele Hospital, Nuclear Medicine Unit, Milan (Italy)

    2016-03-15

    The aim of this study was to evaluate the supportive role of molecular and structural biomarkers (CSF protein levels, FDG PET and MRI) in the early differential diagnosis of dementia in a large sample of patients with neurodegenerative dementia, and in determining the risk of disease progression in subjects with mild cognitive impairment (MCI). We evaluated the supportive role of CSF Aβ42, t-Tau, p-Tau levels, conventional brain MRI and visual assessment of FDG PET SPM t-maps in the early diagnosis of dementia and the evaluation of MCI progression. Diagnosis based on molecular biomarkers showed the best fit with the final diagnosis at a long follow-up. FDG PET SPM t-maps had the highest diagnostic accuracy in Alzheimer's disease and in the differential diagnosis of non-Alzheimer's disease dementias. The p-tau/Aβ42 ratio was the only CSF biomarker providing a significant classification rate for Alzheimer's disease. An Alzheimer's disease-positive metabolic pattern as shown by FDG PET SPM in MCI was the best predictor of conversion to Alzheimer's disease. In this clinical setting, FDG PET SPM t-maps and the p-tau/Aβ42 ratio improved clinical diagnostic accuracy, supporting the importance of these biomarkers in the emerging diagnostic criteria for Alzheimer's disease dementia. FDG PET using SPM t-maps had the highest predictive value by identifying hypometabolic patterns in different neurodegenerative dementias and normal brain metabolism in MCI, confirming its additional crucial exclusionary role. (orig.)

  3. Development and Validation of the Nursing Home Minimum Data Set 3.0 Mortality Risk Score (MRS3).

    Science.gov (United States)

    Thomas, Kali S; Ogarek, Jessica A; Teno, Joan M; Gozalo, Pedro L; Mor, Vincent

    2018-03-05

    To develop a score to predict mortality using the Minimum Data Set 3.0 (MDS 3.0) that can be readily calculated from items collected during nursing home (NH) residents' admission assessments. We developed a training cohort of Medicare beneficiaries newly admitted to U.S. NHs during 2012 (N=1,426,815) and a testing cohort from 2013 (N=1,160,964). Data came from the MDS 3.0 assessments linked to the Medicare Beneficiary Summary File. Using the training dataset, we developed a composite MDS 3.0 Mortality Risk Score (MRS3) consisting of 17 clinical items and patients' age groups based on their relation to 30-day mortality. We assessed the calibration and discrimination of the MRS3 in predicting 30-day and 60-day mortality and compared its performance to the Charlson Comorbidity Index and the clinician's assessment of 6-month prognosis measured at admission. The 30-day and 60-day mortality rates for the testing population were 2.8% and 5.6%, respectively. Results from logistic regression models suggest that the MRS3 performed well in predicting death within 30 and 60 days (C-statistics of 0.744 (95%CL = 0.741, 0.747) and 0.709 (95%CL = 0.706, 0.711), respectively). The MRS3 was a superior predictor of mortality compared to the Charlson Comorbidity Index (C-statistics of 0.611 (95%CL = 0.607, 0.615) and 0.608 (95%CL = 0.605, 0.610)) and the clinicians' assessments of patients' 6-month prognoses (C-statistics of 0.543 (95%CL = 0.542, 0.545) and 0.528 (95%CL = 0.527, 0.529)). The MRS3 is a good predictor of mortality and can be useful in guiding decision-making, informing plans of care, and adjusting for patients' risk of mortality.
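The C-statistic reported above is the probability that a randomly chosen resident who died receives a higher risk score than a randomly chosen survivor (ties counted as half). A minimal sketch with invented scores and outcomes, not MDS 3.0 data:

```python
def c_statistic(scores, events):
    # concordance over all (event, non-event) pairs; ties count 1/2
    cases = [s for s, e in zip(scores, events) if e == 1]
    controls = [s for s, e in zip(scores, events) if e == 0]
    pairs = 0
    concordant = 0.0
    for c in cases:
        for k in controls:
            pairs += 1
            if c > k:
                concordant += 1.0
            elif c == k:
                concordant += 0.5
    return concordant / pairs

risk_scores = [3, 1, 4, 1, 5, 9, 2, 6]   # hypothetical MRS3-like scores
died_30d = [1, 0, 1, 0, 0, 1, 0, 1]      # hypothetical 30-day outcomes
print(c_statistic(risk_scores, died_30d))
```

A value of 0.5 means the score discriminates no better than chance; 1.0 is perfect discrimination.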

  4. Validating the Copenhagen Psychosocial Questionnaire (COPSOQ-II) Using Set-ESEM: Identifying Psychosocial Risk Factors in a Sample of School Principals.

    Science.gov (United States)

    Dicke, Theresa; Marsh, Herbert W; Riley, Philip; Parker, Philip D; Guo, Jiesi; Horwood, Marcus

    2018-01-01

    School principals world-wide report high levels of strain and attrition, resulting in a shortage of qualified principals. It is thus crucial to identify psychosocial risk factors that reflect principals' occupational wellbeing. For this purpose, we used the Copenhagen Psychosocial Questionnaire (COPSOQ-II), a widely used self-report measure covering multiple psychosocial factors identified by leading occupational stress theories. We evaluated the COPSOQ-II regarding factor structure and longitudinal, discriminant, and convergent validity using latent structural equation modeling in a large sample of Australian school principals (N = 2,049). Results reveal that confirmatory factor analysis produced marginally acceptable model fit. A novel approach we call set exploratory structural equation modeling (set-ESEM), where cross-loadings were only allowed within a priori defined sets of factors, fit well and was more parsimonious than a full ESEM. Further multitrait-multimethod models based on the set-ESEM confirm the importance of a principal's psychosocial risk factors; stressors and depression were related to demands and ill-being, while confidence and autonomy were related to wellbeing. We also show that working in the private sector was beneficial for showing a low psychosocial risk, while other demographics had little effect. Finally, we identify five latent risk profiles (high risk to no risk) of school principals based on all psychosocial factors. Overall, the research presented here closes the theory-application gap of a strong multi-dimensional measure of psychosocial risk factors.

  5. Estimation of influential points in any data set from coefficient of determination and its leave-one-out cross-validated counterpart.

    Science.gov (United States)

    Tóth, Gergely; Bodai, Zsolt; Héberger, Károly

    2013-10-01

    The coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted by Q² or Rcv²) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reversed aspect to determine uncommon points, i.e. influential points, in any data set. The term (1 - Q²)/(1 - R²) corresponds to the ratio of the predictive residual sum of squares and the residual sum of squares. This ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 - Q²)/(1 - R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns model builders to verify the training set, to perform influence analysis, or even to switch to robust modeling.
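For ordinary least squares, the leave-one-out residuals that define Q² follow directly from the hat matrix (e_i / (1 − h_ii)), so the (1 − Q²)/(1 − R²) ratio can be computed without refitting the model n times. A sketch on simulated data with one deliberately planted influential point; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)
y[0] += 8.0                                  # plant one influential point

X = np.column_stack([np.ones(n), x])         # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix
loo_resid = resid / (1.0 - np.diag(H))       # leave-one-out residuals (OLS identity)

ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - np.sum(resid ** 2) / ss_tot       # R^2
q2 = 1.0 - np.sum(loo_resid ** 2) / ss_tot   # leave-one-out Q^2
ratio = (1.0 - q2) / (1.0 - r2)              # = PRESS / RSS, always > 1
print(r2, q2, ratio)
```

The further the ratio exceeds 1, the stronger the hint that influential points are present; the abstract's F test is built on this quantity.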

  6. The accuracy of SST retrievals from AATSR: An initial assessment through geophysical validation against in situ radiometers, buoys and other SST data sets

    Science.gov (United States)

    Corlett, G. K.; Barton, I. J.; Donlon, C. J.; Edwards, M. C.; Good, S. A.; Horrocks, L. A.; Llewellyn-Jones, D. T.; Merchant, C. J.; Minnett, P. J.; Nightingale, T. J.; Noyes, E. J.; O'Carroll, A. G.; Remedios, J. J.; Robinson, I. S.; Saunders, R. W.; Watts, J. G.

    The Advanced Along-Track Scanning Radiometer (AATSR) was launched on Envisat in March 2002. The AATSR instrument is designed to retrieve precise and accurate global sea surface temperature (SST) that, combined with the large data set collected from its predecessors, ATSR and ATSR-2, will provide a long term record of SST data that is greater than 15 years. This record can be used for independent monitoring and detection of climate change. The AATSR validation programme has successfully completed its initial phase. The programme involves validation of the AATSR derived SST values using in situ radiometers, in situ buoys and global SST fields from other data sets. The results of the initial programme presented here will demonstrate that the AATSR instrument is currently close to meeting its scientific objectives of determining global SST to an accuracy of 0.3 K (one sigma). For night time data, the analysis gives a warm bias of between +0.04 K (0.28 K) for buoys to +0.06 K (0.20 K) for radiometers, with slightly higher errors observed for day time data, showing warm biases of between +0.02 (0.39 K) for buoys to +0.11 K (0.33 K) for radiometers. They show that the ATSR series of instruments continues to be the world leader in delivering accurate space-based observations of SST, which is a key climate parameter.
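The quoted figures, e.g. "+0.04 K (0.28 K)", are the mean and standard deviation of satellite-minus-in-situ matchup differences. A minimal sketch of that calculation with invented matchup values:

```python
import statistics

satellite_sst = [288.45, 290.12, 285.30, 291.02, 287.75]   # K, hypothetical AATSR retrievals
in_situ_sst = [288.40, 290.05, 285.32, 290.95, 287.70]     # K, hypothetical buoy matchups

diffs = [s - b for s, b in zip(satellite_sst, in_situ_sst)]
bias = statistics.mean(diffs)       # positive value -> warm bias
sd = statistics.stdev(diffs)        # scatter of the matchup differences
print(round(bias, 3), round(sd, 3))
```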

  7. The setting time of a clay-slag geopolymer matrix: the influence of blast-furnace-slag addition and the mixing method

    Czech Academy of Sciences Publication Activity Database

    Perná, Ivana; Hanzlíček, Tomáš

    112, Part 1, JAN 20 (2016), s. 1150-1155 ISSN 0959-6526 Institutional support: RVO:67985891 Keywords : blast-furnace slag * geopolymer * setting time * mixing method * solidification * recycling Subject RIV: DM - Solid Waste and Recycling Impact factor: 5.715, year: 2016

  8. The Virtual Care Climate Questionnaire: Development and Validation of a Questionnaire Measuring Perceived Support for Autonomy in a Virtual Care Setting.

    Science.gov (United States)

    Smit, Eline Suzanne; Dima, Alexandra Lelia; Immerzeel, Stephanie Annette Maria; van den Putte, Bas; Williams, Geoffrey Colin

    2017-05-08

    Web-based health behavior change interventions may be more effective if they offer autonomy-supportive communication facilitating the internalization of motivation for health behavior change. Yet, at this moment no validated tools exist to assess user-perceived autonomy-support of such interventions. The aim of this study was to develop and validate the Virtual Care Climate Questionnaire (VCCQ), a measure of perceived autonomy-support in a virtual care setting. Items were developed based on existing questionnaires and expert consultation and were pretested among experts and target populations. The Virtual Care Climate Questionnaire was administered in relation to Web-based interventions aimed at reducing consumption of alcohol (Study 1; N=230) or cannabis (Study 2; N=228). Item properties, structural validity, and reliability were examined with item-response and classical test theory methods, and convergent and divergent validity via correlations with relevant concepts. In Study 1, 20 of 23 items formed a one-dimensional scale (alpha=.97; omega=.97; H=.66; mean 4.9 [SD 1.0]; range 1-7) that met the assumptions of monotonicity and invariant item ordering. In Study 2, 16 items fitted these criteria (alpha=.92; H=.45; omega=.93; mean 4.2 [SD 1.1]; range 1-7). Only 15 items remained in the questionnaire in both studies, so we proceeded to the analyses of the questionnaire's reliability and construct validity with a 15-item version of the Virtual Care Climate Questionnaire. Convergent validity of the resulting 15-item questionnaire was supported by positive associations with autonomous motivation (Study 1: r=.66) and with perceived competence for reducing alcohol intake (Study 1: r=.52), and divergent validity by a negligible association with perceived competence for learning (Study 2: r=.05, P=.48). The Virtual Care Climate Questionnaire accurately assessed participants' perceived autonomy-support offered by two Web-based health behavior change interventions. Overall, the scale showed the expected properties.

  9. Fuzzy vulnerability matrix

    International Nuclear Information System (INIS)

    Baron, Jorge H.; Rivera, S.S.

    2000-01-01

    The so-called vulnerability matrix is used in the evaluation part of the probabilistic safety assessment for a nuclear power plant, during the containment event tree calculations. This matrix is established from what is known as the Numerical Categories for Engineering Judgement, and is usually built from numerical values obtained with traditional arithmetic based on classical set theory. Representing this matrix with fuzzy numbers is much more adequate, because the Numerical Categories for Engineering Judgement are better expressed as linguistic variables, such as 'highly probable', 'probable', 'impossible', etc. In the present paper a methodology to obtain a Fuzzy Vulnerability Matrix is presented, starting from the recommendations on the Numerical Categories for Engineering Judgement. (author)
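One common way to encode such linguistic variables is as triangular fuzzy numbers (a, b, c) = (lower, modal, upper). The sketch below uses invented category values and a simple weighted combination; these are not the paper's actual Numerical Categories for Engineering Judgement:

```python
def tfn_add(x, y):
    # addition of triangular fuzzy numbers is component-wise
    return tuple(xi + yi for xi, yi in zip(x, y))

def tfn_scale(x, k):
    # scaling by a non-negative crisp number
    return tuple(k * xi for xi in x)

def membership(tfn, value):
    # triangular membership function: 0 outside [a, c], 1 at the mode b
    a, b, c = tfn
    if value <= a or value >= c:
        return 0.0
    if value <= b:
        return (value - a) / (b - a)
    return (c - value) / (c - b)

categories = {                      # linguistic variables -> fuzzy numbers (invented)
    "impossible": (0.0, 0.0, 0.1),
    "probable": (0.3, 0.5, 0.7),
    "highly probable": (0.7, 0.9, 1.0),
}

# average of two judgements, e.g. two assessors scoring the same matrix cell
combined = tfn_scale(tfn_add(categories["probable"], categories["highly probable"]), 0.5)
print(combined, membership(categories["probable"], 0.6))
```

The combined judgement stays a fuzzy number, so the uncertainty of each category propagates through the matrix instead of being collapsed to a single crisp value.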

  10. Self-organising maps and correlation analysis as a tool to explore patterns in excitation-emission matrix data sets and to discriminate dissolved organic matter fluorescence components.

    Science.gov (United States)

    Ejarque-Gonzalez, Elisabet; Butturini, Andrea

    2014-01-01

    Dissolved organic matter (DOM) is a complex mixture of organic compounds, ubiquitous in marine and freshwater systems. Fluorescence spectroscopy, by means of Excitation-Emission Matrices (EEM), has become an indispensable tool to study DOM sources, transport and fate in aquatic ecosystems. However, the statistical treatment of large and heterogeneous EEM data sets still represents an important challenge for biogeochemists. Recently, the Self-Organising Map (SOM) has been proposed as a tool to explore patterns in large EEM data sets. SOM is a pattern recognition method which clusters input EEMs and reduces their dimensionality without relying on any assumption about the data structure. In this paper, we show how SOM, coupled with a correlation analysis of the component planes, can be used both to explore patterns among samples and to identify individual fluorescence components. We analysed a large and heterogeneous EEM data set, including samples from a river catchment collected under a range of hydrological conditions, along a 60-km downstream gradient, and under the influence of different degrees of anthropogenic impact. According to our results, chemical industry effluents appeared to have unique and distinctive spectral characteristics. On the other hand, river samples collected under flash flood conditions showed homogeneous EEM shapes. The correlation analysis of the component planes suggested the presence of four fluorescence components, consistent with DOM components previously described in the literature. A remarkable strength of this methodology was that outlier samples appeared naturally integrated in the analysis. We conclude that SOM coupled with a correlation analysis procedure is a promising tool for studying large and heterogeneous EEM data sets.
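The core SOM training loop — find the best-matching unit (BMU) for each sample, then pull it and its grid neighbours toward the sample with a shrinking neighbourhood and decaying learning rate — can be sketched on toy vectors standing in for flattened EEMs. Grid size, rates and data are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)
# two artificial 4-dimensional clusters standing in for flattened EEMs
data = np.vstack([rng.normal(0.0, 0.1, (20, 4)),
                  rng.normal(1.0, 0.1, (20, 4))])

n_units, dim = 6, data.shape[1]                    # 1-D map with 6 units
weights = rng.uniform(0.0, 1.0, (n_units, dim))
positions = np.arange(n_units)

epochs = 50
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)                # decaying learning rate
    sigma = max(1.0, 3.0 * (1 - epoch / epochs))   # shrinking neighbourhood
    for x in rng.permutation(data):
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)  # pull BMU and neighbours

# after training, the two clusters should map to different units
bmus = np.array([int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in data])
print(sorted(set(bmus[:20])), sorted(set(bmus[20:])))
```

In the full method each unit's weight vector is itself an EEM-shaped "component plane", which is what the correlation analysis described in the abstract operates on.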

  11. Sluggish cognitive tempo and attention-deficit/hyperactivity disorder (ADHD) inattention in the home and school contexts: Parent and teacher invariance and cross-setting validity.

    Science.gov (United States)

    Burns, G Leonard; Becker, Stephen P; Servera, Mateu; Bernad, Maria Del Mar; García-Banda, Gloria

    2017-02-01

    This study examined whether sluggish cognitive tempo (SCT) and attention-deficit/hyperactivity disorder (ADHD) inattention (IN) symptoms demonstrated cross-setting invariance and unique associations with symptom and impairment dimensions across settings (i.e., home SCT and ADHD-IN uniquely predicting school symptom and impairment dimensions, and vice versa). Mothers, fathers, primary teachers, and secondary teachers rated SCT, ADHD-IN, ADHD-hyperactivity/impulsivity (HI), oppositional defiant disorder (ODD), anxiety, depression, academic impairment, social impairment, and peer rejection dimensions for 585 Spanish 3rd-grade children (53% boys). Within-setting (i.e., mothers, fathers; primary, secondary teachers) and cross-settings (i.e., home, school) invariance was found for both SCT and ADHD-IN. From home to school, higher levels of home SCT predicted lower levels of school ADHD-HI and higher levels of school academic impairment after controlling for home ADHD-IN, whereas higher levels of home ADHD-IN predicted higher levels of school ADHD-HI, ODD, anxiety, depression, academic impairment, and peer rejection after controlling for home SCT. From school to home, higher levels of school SCT predicted lower levels of home ADHD-HI and ODD and higher levels of home anxiety, depression, academic impairment, and social impairment after controlling for school ADHD-IN, whereas higher levels of school ADHD-IN predicted higher levels of home ADHD-HI, ODD, and academic impairment after controlling for school SCT. Although SCT at home and school was able to uniquely predict symptom and impairment dimensions in the other setting, SCT at school was a better predictor than ADHD-IN at school of psychopathology and impairment at home. Findings provide additional support for SCT's validity relative to ADHD-IN. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Chemical composition analysis and product consistency tests to support enhanced Hanford waste glass models. Results for the third set of high alumina outer layer matrix glasses

    Energy Technology Data Exchange (ETDEWEB)

    Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States)

    2015-12-01

    In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for 14 simulated high level waste glasses fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions. The measured chemical composition data are reported and compared with the targeted values for each component for each glass. All of the measured sums of oxides for the study glasses fell within the interval of 96.9 to 100.8 wt %, indicating recovery of all components. Comparisons of the targeted and measured chemical compositions showed that the measured values for the glasses met the targeted concentrations within 10% for those components present at more than 5 wt %. The PCT results were normalized to both the targeted and measured compositions of the study glasses. Several of the glasses exhibited increases in normalized concentrations (NCi) after the canister centerline cooled (CCC) heat treatment. Five of the glasses, after the CCC heat treatment, had NCB values that exceeded that of the Environmental Assessment (EA) benchmark glass. These results can be combined with additional characterization, including X-ray diffraction, to determine the cause of the higher release rates.

  13. Validation of the GROMOS force-field parameter set 45A3 against nuclear magnetic resonance data of hen egg lysozyme

    Energy Technology Data Exchange (ETDEWEB)

    Soares, T. A. [ETH Hoenggerberg Zuerich, Laboratory of Physical Chemistry (Switzerland); Daura, X. [Universitat Autonoma de Barcelona, Institucio Catalana de Recerca i Estudis Avancats and Institut de Biotecnologia i Biomedicina (Spain); Oostenbrink, C. [ETH Hoenggerberg Zuerich, Laboratory of Physical Chemistry (Switzerland); Smith, L. J. [University of Oxford, Oxford Centre for Molecular Sciences, Central Chemistry Laboratory (United Kingdom); Gunsteren, W. F. van [ETH Hoenggerberg Zuerich, Laboratory of Physical Chemistry (Switzerland)], E-mail: wfvgn@igc.phys.chem.ethz.ch

    2004-12-15

    The quality of molecular dynamics (MD) simulations of proteins depends critically on the biomolecular force field that is used. Such force fields are defined by force-field parameter sets, which are generally determined and improved through calibration of properties of small molecules against experimental or theoretical data. By application to large molecules such as proteins, a new force-field parameter set can be validated. We report two 3.5 ns molecular dynamics simulations of hen egg white lysozyme in water applying the widely used GROMOS force-field parameter set 43A1 and a new set 45A3. The two MD ensembles are evaluated against NMR spectroscopic data: NOE atom-atom distance bounds, ³J_HNα and ³J_αβ coupling constants, and ¹⁵N relaxation data. It is shown that the two sets reproduce structural properties about equally well. The 45A3 ensemble fulfills the atom-atom distance bounds derived from NMR spectroscopy slightly less well than the 43A1 ensemble, with most of the NOE distance violations in both ensembles involving residues located in loops or flexible regions of the protein. Convergence patterns are very similar in both simulations: atom-positional root-mean-square differences (RMSD) with respect to the X-ray and NMR model structures and NOE inter-proton distances converge within 1.0-1.5 ns, while backbone ³J_HNα coupling constants and ¹H-¹⁵N order parameters take slightly longer, 1.0-2.0 ns. As expected, side-chain ³J_αβ coupling constants and ¹H-¹⁵N order parameters do not reach full convergence for all residues in the time period simulated. This is particularly noticeable for side chains which display rare structural transitions. When comparing each simulation trajectory with an older and a newer set of experimental NOE data on lysozyme, it is found that the newer, larger set of experimental data agrees as well with each of the simulations. In other words, the experimental data converged towards the theoretical result.

  14. Validation of the GROMOS force-field parameter set 45A3 against nuclear magnetic resonance data of hen egg lysozyme

    International Nuclear Information System (INIS)

    Soares, T. A.; Daura, X.; Oostenbrink, C.; Smith, L. J.; Gunsteren, W. F. van

    2004-01-01

    The quality of molecular dynamics (MD) simulations of proteins depends critically on the biomolecular force field that is used. Such force fields are defined by force-field parameter sets, which are generally determined and improved through calibration of properties of small molecules against experimental or theoretical data. By application to large molecules such as proteins, a new force-field parameter set can be validated. We report two 3.5 ns molecular dynamics simulations of hen egg white lysozyme in water applying the widely used GROMOS force-field parameter set 43A1 and a new set 45A3. The two MD ensembles are evaluated against NMR spectroscopic data: NOE atom-atom distance bounds, ³J_HNα and ³J_αβ coupling constants, and ¹⁵N relaxation data. It is shown that the two sets reproduce structural properties about equally well. The 45A3 ensemble fulfills the atom-atom distance bounds derived from NMR spectroscopy slightly less well than the 43A1 ensemble, with most of the NOE distance violations in both ensembles involving residues located in loops or flexible regions of the protein. Convergence patterns are very similar in both simulations: atom-positional root-mean-square differences (RMSD) with respect to the X-ray and NMR model structures and NOE inter-proton distances converge within 1.0-1.5 ns, while backbone ³J_HNα coupling constants and ¹H-¹⁵N order parameters take slightly longer, 1.0-2.0 ns. As expected, side-chain ³J_αβ coupling constants and ¹H-¹⁵N order parameters do not reach full convergence for all residues in the time period simulated. This is particularly noticeable for side chains which display rare structural transitions. When comparing each simulation trajectory with an older and a newer set of experimental NOE data on lysozyme, it is found that the newer, larger set of experimental data agrees as well with each of the simulations. In other words, the experimental data converged towards the theoretical result.
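The convergence analysis in these two records relies on atom-positional RMSD between conformations. For coordinate sets that are already optimally superimposed, the measure reduces to a one-liner; a minimal sketch on synthetic coordinates (superposition is assumed to have been done already):

```python
import numpy as np

def rmsd(a, b):
    """Atom-positional RMSD between two conformations of shape (n_atoms, 3),
    assuming they are already optimally superimposed."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

# Toy example: every atom displaced by 1 unit along one axis.
ref = np.zeros((3, 3))
conf = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(rmsd(ref, conf))  # 1.0
```

In a real trajectory analysis, each frame would first be least-squares fitted onto the X-ray or NMR model structure before computing this quantity.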

  15. Cell type specific DNA methylation in cord blood: A 450K-reference data set and cell count-based validation of estimated cell type composition.

    Science.gov (United States)

    Gervin, Kristina; Page, Christian Magnus; Aass, Hans Christian D; Jansen, Michelle A; Fjeldstad, Heidi Elisabeth; Andreassen, Bettina Kulle; Duijts, Liesbeth; van Meurs, Joyce B; van Zelm, Menno C; Jaddoe, Vincent W; Nordeng, Hedvig; Knudsen, Gunn Peggy; Magnus, Per; Nystad, Wenche; Staff, Anne Cathrine; Felix, Janine F; Lyle, Robert

    2016-09-01

    Epigenome-wide association studies of prenatal exposure to different environmental factors are becoming increasingly common. These studies are usually performed in umbilical cord blood. Since blood comprises multiple cell types with specific DNA methylation patterns, confounding caused by cellular heterogeneity is a major concern. This can be adjusted for using reference data consisting of DNA methylation signatures in cell types isolated from blood. However, the most commonly used reference data set is based on blood samples from adult males and is not representative of the cell type composition in neonatal cord blood. The aim of this study was to generate a reference data set from cord blood to enable correct adjustment of the cell type composition in samples collected at birth. The purity of the isolated cell types was very high for all samples (>97.1%), and clustering analyses showed distinct grouping of the cell types according to hematopoietic lineage. We compared how this cord blood reference data set and the adult peripheral blood reference data set affect the estimation of cell type composition in cord blood samples from an independent birth cohort (MoBa, n = 1092); the two reference sets yielded significantly different estimates for all cell types. Importantly, comparison of the cell type estimates against matched cell counts, both in the cord blood reference samples (n = 11) and in another independent birth cohort (Generation R, n = 195), demonstrated moderate to high correlation of the data. This is the first cord blood reference data set with a comprehensive examination of the downstream application of the data through validation of estimated cell types against matched cell counts.
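Reference-based cell-type adjustment of the kind described here is typically framed as constrained regression: a sample's methylation profile is modeled as a non-negative mixture of the reference cell-type profiles, and the fitted weights are the estimated cell proportions. A minimal sketch with synthetic data (this is not the 450K reference itself, and real pipelines add marker-CpG selection and sum-to-one constraints):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Synthetic reference matrix: methylation (beta values) at 50 CpGs
# for 4 hypothetical cell types.
R = rng.uniform(0.0, 1.0, size=(50, 4))

# True mixing proportions for one simulated cord-blood sample.
w_true = np.array([0.5, 0.2, 0.2, 0.1])
sample = R @ w_true + rng.normal(0.0, 0.01, size=50)  # small measurement noise

# Non-negative least squares, then renormalize to proportions summing to 1.
w_hat, _ = nnls(R, sample)
w_hat = w_hat / w_hat.sum()
print(np.round(w_hat, 2))
```

The comparison against matched cell counts in the abstract is then a correlation of such estimated proportions with directly counted cell fractions.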

  16. Matrix Effect Evaluation and Method Validation of Azoxystrobin and Difenoconazole Residues in Red Flesh Dragon Fruit (Hylocereus polyrhizus) Matrices Using QuEChERS Sample Preparation Methods Followed by LC-MS/MS Determination.

    Science.gov (United States)

    Noegrohati, Sri; Hernadi, Elan; Asviastuti, Syanti

    2018-03-30

    Production of red flesh dragon fruit (Hylocereus polyrhizus) is hampered by Colletotrichum sp. Pre-harvest application of an azoxystrobin and difenoconazole mixture is recommended; a selective and sensitive multi-residue analytical method is therefore required for monitoring and evaluating the commodity's safety. LC-MS/MS is a well-established analytical technique for qualitative and quantitative determination in complex matrices, but it is hampered by interference from co-eluting coextractives. This work evaluated the effect of pH by comparing acetate-buffered and citrate-buffered QuEChERS sample preparation in their effectiveness at reducing matrix effects. The citrate-buffered QuEChERS produced a clean final extract with a relative matrix effect of 0.4%-0.7%. Method validation of the selected sample preparation followed by LC-MS/MS for whole dragon fruit, flesh, and peel matrices fortified at 0.005, 0.01, 0.1 and 1 µg/g showed recoveries of 75%-119% and intermediate repeatability of 2%-14%. The expanded uncertainties were 7%-48%. Based on the international acceptance criteria, this method is valid.
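Matrix effect in LC-MS/MS work of this kind is commonly quantified from the ratio of the matrix-matched calibration slope to the solvent calibration slope. A minimal sketch; the calibration data below are hypothetical, since the abstract does not give raw curves:

```python
import numpy as np

def matrix_effect_pct(conc, resp_solvent, resp_matrix):
    """Matrix effect (%) from the ratio of matrix-matched to solvent
    calibration slopes: ME = 100 * (slope_matrix / slope_solvent - 1).
    Negative values indicate ion suppression, positive values enhancement."""
    slope_solvent = np.polyfit(conc, resp_solvent, 1)[0]
    slope_matrix = np.polyfit(conc, resp_matrix, 1)[0]
    return 100.0 * (slope_matrix / slope_solvent - 1.0)

# Hypothetical calibration levels (µg/g) and peak areas (arbitrary units):
conc = np.array([0.005, 0.01, 0.1, 1.0])
solvent = 1000.0 * conc   # solvent standards
matrix = 950.0 * conc     # matrix-matched standards
print(matrix_effect_pct(conc, solvent, matrix))  # ≈ -5.0 (ion suppression)
```

A "relative matrix effect" of under 1%, as reported for the citrate-buffered extract, corresponds to near-identical slopes in the two calibrations.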

  17. Mechanical tests imaging on metallic matrix composites. Experimental contribution to homogenization methods validation and identification of phase-related mechanical properties

    International Nuclear Information System (INIS)

    Quoc-Thang Vo

    2013-01-01

    This work is focused on a matrix/inclusion metal composite. A simple method is proposed to evaluate the elastic properties of one phase while the properties of the other phase are assumed to be known. The method is based on both an inverse homogenization scheme and mechanical field measurements by 2D digital image correlation. The originality of the approach rests on the scale studied, i.e. the microstructure scale of the material: the characteristic size of the inclusions is a few tens of microns. The evaluation is performed on standard uniaxial tensile tests associated with a long-distance microscope, which allows observation of the surface of a specimen at the microstructure scale during mechanical loading. First, the accuracy of the method is estimated on 'perfect' mechanical fields coming from numerical simulations for four microstructures: elastic or porous single inclusions having either spherical or cylindrical shape. Second, this accuracy is estimated on real mechanical fields for two simple microstructures: an elasto-plastic metallic matrix containing either a single cylindrical micro void or four cylindrical micro voids arranged in a square pattern. Third, the method is used to evaluate the elastic properties of αZr inclusions with arbitrary shape in an oxidized Zircaloy-4 sample of the fuel cladding of a pressurized water reactor after a loss-of-coolant accident (LOCA). Throughout this study, the phases are assumed to have isotropic properties.

  18. Adaptation and validation of the Alzheimer's Disease Assessment Scale - Cognitive (ADAS-Cog) in a low-literacy setting in sub-Saharan Africa.

    Science.gov (United States)

    Paddick, Stella-Maria; Kisoli, Aloyce; Mkenda, Sarah; Mbowe, Godfrey; Gray, William Keith; Dotchin, Catherine; Ogunniyi, Adesola; Kisima, John; Olakehinde, Olaide; Mushi, Declare; Walker, Richard William

    2017-08-01

    This study aimed to assess the feasibility of a low-literacy adaptation of the Alzheimer's Disease Assessment Scale - Cognitive (ADAS-Cog) for use in rural sub-Saharan Africa (SSA) for interventional studies in dementia. No such adaptations currently exist. Tanzanian and Nigerian health professionals adapted the ADAS-Cog by consensus. Validation took place in a cross-sectional sample of 34 rural-dwelling older adults with mild/moderate dementia alongside 32 non-demented controls in Tanzania. Participants were oversampled for lower educational level. Inter-rater reliability was assessed by two trained raters in 22 older adults (13 with dementia) from the same population. Assessors were blind to diagnostic group. Median ADAS-Cog scores were 28.75 (interquartile range (IQR), 22.96-35.54) in mild/moderate dementia and 12.75 (IQR 9.08-16.16) in controls. The area under the receiver operating characteristic curve (AUC) was 0.973 (95% confidence interval (CI) 0.936-1.00) for dementia. Internal consistency was high (Cronbach's α 0.884) and inter-rater reliability was excellent (intra-class correlation coefficient 0.905, 95% CI 0.804-0.964). The low-literacy adaptation of the ADAS-Cog had good psychometric properties in this setting. Further evaluation in similar settings is required.
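The internal consistency statistic used above, Cronbach's α, is k/(k-1) · (1 - Σ item variances / variance of the total score). A minimal sketch on hypothetical item scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical item scores for 5 subjects on a 4-item sub-scale:
x = [[2, 3, 3, 2],
     [4, 4, 5, 4],
     [1, 1, 2, 1],
     [3, 3, 4, 3],
     [5, 4, 5, 5]]
print(round(cronbach_alpha(x), 2))  # 0.98
```

Strongly correlated items, as in this toy matrix, drive α toward 1; uncorrelated items drive it toward 0.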

  19. Matrix calculus

    CERN Document Server

    Bodewig, E

    1959-01-01

    Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions, as well

  20. Diagnostic aid to rule out pneumonia in adults with cough and feeling of fever. A validation study in the primary care setting

    Directory of Open Access Journals (Sweden)

    Held Ulrike

    2012-12-01

    Background: We recently reported the derivation of a diagnostic aid to rule out pneumonia in adults presenting with new onset of cough, or worsening of chronic cough, and increased body temperature. The aim of the present investigation was to validate the diagnostic aid in a new sample of primary care patients. Methods: From two group practices in Zurich, we included 110 patients with the main symptoms of cough and a subjective feeling of increased body temperature, C-reactive protein levels below 50 μg/ml, no dyspnea, and no daily feeling of increased body temperature since the onset of cough. We excluded patients who were prescribed antibiotics at their first consultation. Approximately two weeks after inclusion, practice assistants contacted the participants by phone and asked four questions regarding the course of their complaints; in particular, whether a prescription of antibiotics or hospitalization had been necessary within the last two weeks. Results: In 107 of 110 patients, pneumonia could be ruled out with a high degree of certainty, and no prescription of antibiotics was necessary. Three patients were prescribed antibiotics between inclusion in the study and the phone interview two weeks later. Acute rhinosinusitis was diagnosed in one patient; antibiotics were prescribed to the other two patients because their symptoms had worsened and their CRP levels had increased. Use of the diagnostic aid could have missed these two possible cases of pneumonia. These observations correspond to a false negative rate of 1.8% (95% confidence interval: 0.50%-6.4%). Conclusions: This diagnostic aid is helpful to rule out pneumonia in patients from a primary care setting. After further validation, application of this aid in daily practice may help to reduce the rate of unnecessary antibiotic prescriptions in patients with respiratory tract infections.
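The reported false negative rate and its confidence interval are consistent with a Wilson score interval for 2 missed cases among 110 patients; a quick check:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion k/n
    (z = 1.96 gives the approximate 95% interval)."""
    p = k / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    rad = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - rad) / denom, (centre + rad) / denom

# 2 possibly missed pneumonia cases among 110 patients:
lo, hi = wilson_ci(2, 110)
print(f"{100 * 2 / 110:.1f}% (95% CI {100 * lo:.2f}%-{100 * hi:.1f}%)")
# → 1.8% (95% CI 0.50%-6.4%)
```

The interval reproduces the paper's figures, which suggests the authors used a score-type interval rather than the simple Wald approximation (which would be much narrower here).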

  1. Validation of an air–liquid interface toxicological set-up using Cu, Pd, and Ag well-characterized nanostructured aggregates and spheres

    International Nuclear Information System (INIS)

    Svensson, C. R.; Ameer, S. S.; Ludvigsson, L.; Ali, N.; Alhamdow, A.; Messing, M. E.; Pagels, J.; Gudmundsson, A.; Bohgard, M.; Sanfins, E.; Kåredal, M.; Broberg, K.; Rissler, J.

    2016-01-01

    Systems for studying the toxicity of metal aggregates on the airways are normally not suited for evaluating the effects of individual particle characteristics. This study validates a set-up for toxicological studies of metal aggregates using an air–liquid interface approach. The set-up used a spark discharge generator capable of generating aerosol metal aggregate particles and sintered near spheres. The set-up also contained an exposure chamber, The Nano Aerosol Chamber for In Vitro Toxicity (NACIVT). The system facilitates online characterization of mass mobility, mass concentration, and number size distribution to determine the exposure. By dilution, the desired exposure level was controlled. Primary and cancerous airway cells were exposed to copper (Cu), palladium (Pd), and silver (Ag) aggregates, 50–150 nm in median diameter. The aggregates were composed of primary particles <10 nm in diameter. For Cu and Pd, an exposure of sintered aerosol particles was also produced. The doses of the particles were expressed as particle numbers, masses, and surface areas. For the Cu, Pd, and Ag aerosol particles, mass surface concentrations on the air–liquid interface of 0.4–10.7, 0.9–46.6, and 0.1–1.4 µg/cm², respectively, were achieved. Viability was measured by WST-1 assay, cytokines (IL-6, IL-8, TNF-α, MCP) by Luminex technology. Statistically significant effects and dose response on cytokine expression were observed for SAEC cells after exposure to Cu, Pd, or Ag particles. Also, a positive dose response was observed for SAEC viability after Cu exposure. For A549 cells, statistically significant effects on viability were observed after exposure to Cu and Pd particles. The set-up produced a stable flow of aerosol particles with an exposure and dose expressed in terms of number, mass, and surface area. Exposure-related effects on the airway cellular models could be asserted.

  2. Validation of an air–liquid interface toxicological set-up using Cu, Pd, and Ag well-characterized nanostructured aggregates and spheres

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, C. R., E-mail: christian.svensson@design.lth.se [Lund University, Department of Design Sciences, Ergonomics and Aerosol Technology (Sweden); Ameer, S. S. [Lund University, Division of Occupational and Environmental Medicine, Department of Laboratory Medicine (Sweden); Ludvigsson, L. [Lund University, Department of Physics, Solid State Physics (Sweden); Ali, N.; Alhamdow, A. [Lund University, Division of Occupational and Environmental Medicine, Department of Laboratory Medicine (Sweden); Messing, M. E. [Lund University, Department of Physics, Solid State Physics (Sweden); Pagels, J.; Gudmundsson, A.; Bohgard, M. [Lund University, Department of Design Sciences, Ergonomics and Aerosol Technology (Sweden); Sanfins, E. [Atomic Energy Commission (CEA), Institute of Emerging Diseases and Innovative Therapies (iMETI), Division of Prions and Related Diseases - SEPIA (France); Kåredal, M.; Broberg, K. [Lund University, Division of Occupational and Environmental Medicine, Department of Laboratory Medicine (Sweden); Rissler, J. [Lund University, Department of Design Sciences, Ergonomics and Aerosol Technology (Sweden)

    2016-04-15

    Systems for studying the toxicity of metal aggregates on the airways are normally not suited for evaluating the effects of individual particle characteristics. This study validates a set-up for toxicological studies of metal aggregates using an air–liquid interface approach. The set-up used a spark discharge generator capable of generating aerosol metal aggregate particles and sintered near spheres. The set-up also contained an exposure chamber, The Nano Aerosol Chamber for In Vitro Toxicity (NACIVT). The system facilitates online characterization of mass mobility, mass concentration, and number size distribution to determine the exposure. By dilution, the desired exposure level was controlled. Primary and cancerous airway cells were exposed to copper (Cu), palladium (Pd), and silver (Ag) aggregates, 50–150 nm in median diameter. The aggregates were composed of primary particles <10 nm in diameter. For Cu and Pd, an exposure of sintered aerosol particles was also produced. The doses of the particles were expressed as particle numbers, masses, and surface areas. For the Cu, Pd, and Ag aerosol particles, mass surface concentrations on the air–liquid interface of 0.4–10.7, 0.9–46.6, and 0.1–1.4 µg/cm², respectively, were achieved. Viability was measured by WST-1 assay, cytokines (IL-6, IL-8, TNF-α, MCP) by Luminex technology. Statistically significant effects and dose response on cytokine expression were observed for SAEC cells after exposure to Cu, Pd, or Ag particles. Also, a positive dose response was observed for SAEC viability after Cu exposure. For A549 cells, statistically significant effects on viability were observed after exposure to Cu and Pd particles. The set-up produced a stable flow of aerosol particles with an exposure and dose expressed in terms of number, mass, and surface area. Exposure-related effects on the airway cellular models could be asserted.
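Expressing the same exposure as particle number, mass, and surface area, as both records describe, follows directly from the number size distribution once a particle shape and density are assumed: for spheres of diameter d, per-particle surface is πd² and per-particle mass is ρπd³/6. A minimal sketch with a hypothetical three-bin distribution (aggregates deviate from the sphere assumption, which is one reason the set-up measures mass-mobility directly):

```python
import numpy as np

RHO = 8960.0  # assumed bulk density of Cu, kg/m^3 (aggregates are less dense)

# Hypothetical number size distribution: bin mid-diameters (m) and
# number concentrations (particles per m^3 of air).
d = np.array([50e-9, 100e-9, 150e-9])
n = np.array([1e10, 5e9, 1e9])

surface = np.pi * d ** 2            # per-particle surface area, m^2
mass = RHO * np.pi * d ** 3 / 6.0   # per-particle mass, kg (sphere assumption)

print(f"number:  {n.sum():.2e} /m^3")
print(f"surface: {(n * surface).sum():.2e} m^2/m^3")
print(f"mass:    {(n * mass).sum() * 1e9:.2f} µg/m^3")
```

Because mass scales with d³ while number does not, the largest bin dominates the mass dose even though it contributes the fewest particles.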

  3. Perceived parental rearing style in childhood: internal structure and concurrent validity on the Egna Minnen Beträffande Uppfostran--Child Version in clinical settings.

    Science.gov (United States)

    Penelo, Eva; Viladrich, Carme; Domènech, Josep M

    2010-01-01

    We provide the first validation data of the Spanish version of the Egna Minnen Beträffande Uppfostran--Child Version (EMBU-C) in a clinical context. The EMBU-C is a 41-item self-report questionnaire that assesses perceived parental rearing style in children, comprising 4 subscales (rejection, emotional warmth, control attempts/overprotection, and favoring subjects). The test was administered to a clinical sample of 174 Spanish psychiatric outpatients aged 8 to 12. Confirmatory factor analyses were performed, analyzing the children's reports about their parents' rearing style. The results were almost equivalent for father's and mother's ratings. Confirmatory factor analysis yielded an acceptable fit to data of the 3-factor model when removing the items of the favoring subjects scale (root mean squared error of approximation .73), whereas the control attempts scale showed lower values, as in previous studies. The influence of sex (of children and parents) on scale scores was negligible, and children tended to perceive their parents as progressively less warm as they grew older. As predicted, the scores for rejection and emotional warmth were related to bad relationships with parents, absence of family support, harsh discipline, and lack of parental supervision. The Spanish version of the EMBU-C can be used with psychometric guarantees to identify rearing style in psychiatric outpatients, because evidence of quality in this setting matches that obtained in community samples.

  4. Student-Directed Video Validation of Psychomotor Skills Performance: A Strategy to Facilitate Deliberate Practice, Peer Review, and Team Skill Sets.

    Science.gov (United States)

    DeBourgh, Gregory A; Prion, Susan K

    2017-03-22

    Background: Essential nursing skills for safe practice are not limited to technical skills, but include abilities for determining salience among clinical data within dynamic practice environments, demonstrating clinical judgment and reasoning, problem-solving abilities, and teamwork competence. Effective instructional methods are needed to prepare new nurses for entry to practice in contemporary healthcare settings. Method: This mixed-methods descriptive study explored self-reported perceptions of a process of self-recording videos for psychomotor skill performance evaluation in a convenience sample of 102 pre-licensure students. Results: Students reported gains in confidence and skill acquisition using team skills to record individual videos of skill performance, and described the importance of teamwork, peer support, and deliberate practice. Conclusion: Although time consuming, the production of student-directed video validations of psychomotor skill performance is an authentic task with meaningful accountabilities that is well received by students as an effective, satisfying learner experience to increase confidence and competence in performing psychomotor skills.

  5. The Development of a Novel, Validated, Rapid and Simple Method for the Detection of Sarcocystis fayeri in Horse Meat in the Sanitary Control Setting.

    Science.gov (United States)

    Furukawa, Masato; Minegishi, Yasutaka; Izumiyama, Shinji; Yagita, Kenji; Mori, Hideto; Uemura, Taku; Etoh, Yoshiki; Maeda, Eriko; Sasaki, Mari; Ichinose, Kazuya; Harada, Seiya; Kamata, Yoichi; Otagiri, Masaki; Sugita-Konishi, Yoshiko; Ohnishi, Takahiro

    2016-01-01

    Sarcocystis fayeri (S. fayeri) is a newly identified causative agent of foodborne disease that is associated with the consumption of raw horse meat. The testing methods prescribed by the Ministry of Health, Labour and Welfare of Japan are time consuming and require the use of expensive equipment and a high level of technical expertise. Accordingly, these methods are not suitable for use in the routine sanitary control setting to prevent outbreaks of foodborne disease. In order to solve these problems, we have developed a new, rapid and simple testing method using loop-mediated isothermal amplification (LAMP), which takes only 1 hour to perform and which does not involve the use of any expensive equipment or expert techniques. For the validation of this method, an inter-laboratory study was performed among 5 institutes using 10 samples infected with various concentrations of S. fayeri. The results of the inter-laboratory study demonstrated that our LAMP method could detect S. fayeri at concentrations greater than 10⁴ copies/g. Thus, this new method could be useful in screening for S. fayeri as a routine sanitary control procedure.

  6. Temporal and Geographic variation in the validity and internal consistency of the Nursing Home Resident Assessment Minimum Data Set 2.0.

    Science.gov (United States)

    Mor, Vincent; Intrator, Orna; Unruh, Mark Aaron; Cai, Shubing

    2011-04-15

    The Minimum Data Set (MDS) for nursing home resident assessment has been required in all U.S. nursing homes since 1990 and has been universally computerized since 1998. Initially intended to structure clinical care planning, uses of the MDS expanded to include policy applications such as case-mix reimbursement, quality monitoring, and research. The purpose of this paper is to summarize a series of analyses examining the internal consistency and predictive validity of the MDS data as used in the "real world" in all U.S. nursing homes between 1999 and 2007. We used person-level linked MDS and Medicare denominator and all institutional claim files, including inpatient (hospital and skilled nursing facilities), for all Medicare fee-for-service beneficiaries entering U.S. nursing homes during the period 1999 to 2007. We calculated the sensitivity and positive predictive value (PPV) of diagnoses taken from Medicare hospital claims and from the MDS among all new admissions from hospitals to nursing homes, and the internal consistency (alpha reliability) of pairs of items within the MDS that logically should be related. We also tested the internal consistency of commonly used MDS-based multi-item scales and examined the predictive validity of an MDS-based severity measure, viz. one-year survival. Finally, we examined the correspondence of the MDS discharge record to hospitalizations and deaths seen in Medicare claims, and the completeness of MDS assessments upon skilled nursing facility (SNF) admission. Each year there were some 800,000 new admissions directly from hospital to US nursing homes and some 900,000 uninterrupted SNF stays. Comparing Medicare enrollment records and claims with MDS records revealed reasonably good correspondence that improved over time (by 2006 only 3% of deaths had no MDS discharge record, only 5% of SNF stays had no MDS, but over 20% of MDS discharges indicating hospitalization had no associated Medicare claim). The PPV and sensitivity levels of
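The sensitivity and PPV calculations described above reduce to a 2x2 comparison of MDS diagnoses against claims (treated as the reference). A minimal sketch with hypothetical counts, not the study's actual figures:

```python
def sensitivity_ppv(tp, fp, fn):
    """Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP).
    TP: diagnosis present in both MDS and claims; FP: MDS only; FN: claims only."""
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical counts for one diagnosis among new nursing home admissions:
tp, fp, fn = 800, 200, 100
sens, ppv = sensitivity_ppv(tp, fp, fn)
print(f"sensitivity={sens:.2f}, PPV={ppv:.2f}")  # sensitivity=0.89, PPV=0.80
```

Sensitivity answers "of the diagnoses in the claims, how many did the MDS capture?"; PPV answers "of the diagnoses recorded in the MDS, how many are confirmed by claims?".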

  7. Selection and validation of a set of reliable reference genes for quantitative RT-PCR studies in the brain of the Cephalopod Mollusc Octopus vulgaris

    Directory of Open Access Journals (Sweden)

    Biffali Elio

    2009-07-01

    Background: Quantitative real-time polymerase chain reaction (RT-qPCR) is valuable for studying the molecular events underlying physiological and behavioral phenomena. Normalization of real-time PCR data is critical for reliable mRNA quantification. Here we identify reference genes to be utilized in RT-qPCR experiments to normalize and monitor the expression of target genes in the brain of the cephalopod mollusc Octopus vulgaris, an invertebrate. Such an approach is novel for this taxon and of advantage in future experiments given the complexity of the behavioral repertoire of this species when compared with its relatively simple neural organization. Results: We chose 16S and 18S rRNA, actB, EEF1A, tubA and ubi as candidate reference genes (housekeeping genes, HKG). The expression of 16S and 18S was highly variable and did not meet the requirements for candidate HKG. The expression of the other genes was almost stable and uniform among samples. We analyzed the expression of the HKG in two different sets of animals, using tissues taken from the central nervous system (brain parts) and mantle (here considered as control tissue), with BestKeeper, geNorm and NormFinder. We found that HKG expression differed considerably with respect to brain area and octopus sample in an HKG-specific manner. However, when the mantle is treated as control tissue and the entire central nervous system is considered, NormFinder revealed tubA and ubi as the most suitable HKG pair. These two genes were utilized to evaluate the relative expression of the genes FoxP, creb, dat and TH in O. vulgaris. Conclusion: We analyzed the expression profiles of some genes here identified for O. vulgaris by applying RT-qPCR analysis for the first time in cephalopods. We validated candidate reference genes and found the expression of ubi and tubA to be the most appropriate to evaluate the expression of target genes in the brain of different octopuses. Our results also underline the
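Of the three tools named above, geNorm ranks candidate reference genes by a stability measure M: for each gene, the average across all other candidates of the standard deviation of the pairwise log2 expression ratio over samples (lower M = more stable). A minimal sketch on synthetic expression data, not the octopus measurements:

```python
import numpy as np

def genorm_m(expr):
    """geNorm-style stability measure M for each candidate reference gene.
    expr: array of shape (n_samples, n_genes), linear-scale expression."""
    log_expr = np.log2(expr)
    n_genes = log_expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        # SD across samples of the log ratio of gene j to each other gene k.
        sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        m[j] = np.mean(sds)
    return m

rng = np.random.default_rng(1)
base = rng.uniform(8, 12, size=20)  # shared per-sample factor (log2 scale)
stable1 = 2 ** (base + 0.1 * rng.normal(size=20))  # tracks the shared factor
stable2 = 2 ** (base + 0.1 * rng.normal(size=20))
noisy = 2 ** (base + 1.0 * rng.normal(size=20))    # large gene-specific noise
m = genorm_m(np.column_stack([stable1, stable2, noisy]))
print(np.round(m, 2))  # the noisy gene gets the largest M
```

A co-regulated pair with a constant ratio scores well even if both vary, which is why geNorm results are usually cross-checked with an independent method such as NormFinder, as the authors did.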

  8. Detecting Motor Impairment in Early Parkinson's Disease via Natural Typing Interaction With Keyboards: Validation of the neuroQWERTY Approach in an Uncontrolled At-Home Setting.

    Science.gov (United States)

    Arroyo-Gallego, Teresa; Ledesma-Carbayo, María J; Butterworth, Ian; Matarazzo, Michele; Montero-Escribano, Paloma; Puertas-Martín, Verónica; Gray, Martha L; Giancardo, Luca; Sánchez-Ferro, Álvaro

    2018-03-26

    Parkinson's disease (PD) is the second most prevalent neurodegenerative disease and one of the most common forms of movement disorder. Although there is no known cure for PD, existing therapies can provide effective symptomatic relief. However, optimal titration is crucial to avoid adverse effects. Today, decision making for PD management is challenging because it relies on subjective clinical evaluations that require a visit to the clinic. This challenge has motivated recent research initiatives to develop tools that can be used by nonspecialists to assess psychomotor impairment. Among these emerging solutions, we recently reported the neuroQWERTY index, a new digital marker able to detect motor impairment in an early PD cohort through the analysis of the key press and release timing data collected during a controlled in-clinic typing task. The aim of this study was to extend the in-clinic implementation to an at-home implementation by validating the applicability of the neuroQWERTY approach in an uncontrolled at-home setting, using the typing data from subjects' natural interaction with their laptop to enable remote and unobtrusive assessment of PD signs. We implemented the data-collection platform and software to enable access and storage of the typing data generated by users while using their computer at home. We recruited a total of 60 participants; of these, 52 (25 people with Parkinson's and 27 healthy controls) provided enough data to complete the analysis. Finally, to evaluate whether our in-clinic-built algorithm could be used in an uncontrolled at-home setting, we compared its performance on the data collected during the controlled typing task in the clinic and the results of our method using the data passively collected at home. Despite the randomness and sparsity introduced by the uncontrolled setting, our algorithm performed nearly as well on the at-home data (area under the receiver operating characteristic curve [AUC] of 0.76 and
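The AUC reported for classifiers like this one has a direct rank-based interpretation: the probability that a randomly chosen patient scores higher than a randomly chosen control. A minimal sketch with hypothetical scores (the neuroQWERTY features themselves are not reproduced here):

```python
def auc_from_scores(pos, neg):
    """AUC via the Mann-Whitney statistic: the fraction of (positive, negative)
    pairs where the positive scores higher, counting ties as 1/2."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical typing-impairment scores (higher = more impairment):
patients = [0.9, 0.8, 0.75, 0.6]
controls = [0.7, 0.5, 0.4, 0.3]
print(auc_from_scores(patients, controls))  # 0.9375
```

An AUC of 0.76, as in the at-home data, means a randomly chosen person with PD out-scores a randomly chosen control about three times out of four; 0.5 would be chance.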

  9. Health Services OutPatient Experience questionnaire: factorial validity and reliability of a patient-centered outcome measure for outpatient settings in Italy

    Directory of Open Access Journals (Sweden)

    Coluccia A

    2014-09-01

    Full Text Available Anna Coluccia, Fabio Ferretti, Andrea Pozza, Department of Medical Sciences, Surgery and Neurosciences, Santa Maria alle Scotte University Hospital, University of Siena, Siena, Italy. Purpose: The patient-centered approach to health care does not seem to be sufficiently developed in the Italian context, which is still characterized by the biomedical model. In addition, there is a lack of validated outcome measures to assess outpatient experience as an aspect common to a variety of settings. The current study aimed to evaluate the factorial validity, reliability, and invariance across sex of the Health Services OutPatient Experience (HSOPE) questionnaire, a short ten-item measure of patient-centeredness for Italian adult outpatients. The rationale for unidimensionality of the measure was that it could cover global patient experience as a process common to patients with a variety of diseases and irrespective of the phase of the treatment course. Patients and methods: The HSOPE was compiled by 1,532 adult outpatients (51% females; mean age 59.22 years, standard deviation 16.26) receiving care in ten facilities at the Santa Maria alle Scotte University Hospital of Siena, Italy. The sample represented all the age cohorts: 12% were young adults, 57% were adults, and 32% were older adults. Exploratory and confirmatory factor analyses were conducted to evaluate factor structure. Reliability was evaluated as internal consistency using Cronbach's α. Factor invariance was assessed through multigroup analyses. Results: Both exploratory and confirmatory analyses suggested a clearly defined unidimensional structure of the measure, with all ten items having salient loadings on a single factor. Internal consistency was excellent (α = 0.95). Indices of model fit supported a single-factor structure for both male and female outpatient groups. Young adult outpatients had significantly lower scores on perceived patient-centeredness relative to older adults. No
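The record reports internal consistency as Cronbach's α; the statistic itself is straightforward to compute from item scores. A minimal sketch, using made-up scores rather than the HSOPE data:

```python
# Cronbach's alpha, the internal-consistency statistic reported for
# the HSOPE; the item scores below are made up for illustration.
def cronbach_alpha(rows):
    """rows: one list of k item scores per respondent."""
    k, n = len(rows[0]), len(rows)
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in rows]) for i in range(k)]
    total_var = var([sum(row) for row in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

scores = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]]  # 4 respondents, 3 items
alpha = cronbach_alpha(scores)
```

Values near 1 indicate that the items move together, consistent with the single-factor structure the study reports.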

  10. The alcohol use disorders identification test (AUDIT: validation of a Nepali version for the detection of alcohol use disorders and hazardous drinking in medical settings

    Directory of Open Access Journals (Sweden)

    Pradhan Bickram

    2012-10-01

    Full Text Available Abstract Background Alcohol problems are a major health issue in Nepal and remain underdiagnosed. Increases in consumption are due to many factors, including advertising, pricing, and availability, but accurate information is lacking on the prevalence of current alcohol use disorders. The AUDIT (Alcohol Use Disorders Identification Test) questionnaire developed by the WHO identifies individuals along the full spectrum of alcohol misuse and hence provides an opportunity for early intervention in non-specialty settings. This study aims to validate a Nepali version of the AUDIT among patients attending a university hospital and to assess the prevalence of alcohol use disorders along the full spectrum of alcohol misuse. Methods This cross-sectional study was conducted in patients attending the medicine out-patient department of a university hospital. DSM-IV diagnostic categories (alcohol abuse and alcohol dependence) were used as the gold standard to calculate the diagnostic parameters of the AUDIT. Hazardous drinking was defined as self-reported consumption of ≥21 standard drink units per week for males and ≥14 standard drink units per week for females. Results A total of 1068 individuals successfully completed the study. According to DSM-IV, drinkers were classified as follows: no alcohol problem (n = 562; 59.5%), alcohol abuse (n = 78; 8.3%), and alcohol dependence (n = 304; 32.2%). The prevalence of hazardous drinking was 67.1%. The Nepali version of the AUDIT is a reliable and valid screening tool to identify individuals with alcohol use disorders in the Nepalese population. The AUDIT showed a good capacity to discriminate dependent patients (AUDIT ≥11 for both genders) and hazardous drinkers (AUDIT ≥5 for males and ≥4 for females). For alcohol dependence/abuse the cut-off value was ≥9 for both males and females. Conclusion The AUDIT questionnaire is a good screening instrument for detecting alcohol use disorders in patients attending a university
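Validating a screening cutoff against a diagnostic gold standard comes down to computing sensitivity and specificity at each candidate threshold. A minimal sketch with synthetic scores and labels, not the study's data:

```python
# Synthetic sketch of cutoff validation against a gold standard,
# the kind of computation behind the AUDIT thresholds reported above.
def sens_spec(scores, truth, cutoff):
    """scores: AUDIT totals; truth: 1 = disorder by the gold standard."""
    tp = sum(1 for s, t in zip(scores, truth) if s >= cutoff and t == 1)
    fn = sum(1 for s, t in zip(scores, truth) if s < cutoff and t == 1)
    tn = sum(1 for s, t in zip(scores, truth) if s < cutoff and t == 0)
    fp = sum(1 for s, t in zip(scores, truth) if s >= cutoff and t == 0)
    return tp / (tp + fn), tn / (tn + fp)

scores = [2, 5, 8, 12, 15, 3, 9, 11]   # invented AUDIT totals
truth  = [0, 0, 1, 1, 1, 0, 0, 1]      # invented gold-standard labels
sensitivity, specificity = sens_spec(scores, truth, cutoff=9)
```

Sweeping the cutoff and plotting the resulting pairs yields the ROC curve from which studies like this one select their operating thresholds.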

  11. Validation of the Care-Related Quality of Life Instrument in different study settings : findings from The Older Persons and Informal Caregivers Survey Minimum DataSet (TOPICS-MDS)

    NARCIS (Netherlands)

    Lutomski, J. E.; van Exel, N. J. A.; Kempen, G. I. J. M.; van Charante, E. P. Moll; den Elzen, W. P. J.; Jansen, A. P. D.; Krabbe, P. F. M.; Steunenberg, B.; Steyerberg, E. W.; Rikkert, M. G. M. Olde; Melis, R. J. F.

    PURPOSE: Validity is a contextual aspect of a scale which may differ across sample populations and study protocols. The objective of our study was to validate the Care-Related Quality of Life Instrument (CarerQol) across two different study design features, sampling framework (general population vs.

  12. Validation of the Care-Related Quality of Life Instrument in different study settings: findings from The Older Persons and Informal Caregivers Survey Minimum DataSet (TOPICS-MDS)

    NARCIS (Netherlands)

    Lutomski, J.E.; Exel, N.J. van; Kempen, G.I.; Charante, E.P. Moll van; Elzen, W.P. den; Jansen, A.P.; Krabbe, P.F.M.; Steunenberg, B.; Steyerberg, E.W.; Olde Rikkert, M.G.M.; Melis, R.J.F.

    2015-01-01

    PURPOSE: Validity is a contextual aspect of a scale which may differ across sample populations and study protocols. The objective of our study was to validate the Care-Related Quality of Life Instrument (CarerQol) across two different study design features, sampling framework (general population vs.

  13. Determination of selected water-soluble vitamins using hydrophilic chromatography: a comparison of photodiode array, fluorescence, and coulometric detection, and validation in a breakfast cereal matrix.

    Science.gov (United States)

    Langer, Swen; Lodge, John K

    2014-06-01

    Water-soluble vitamins are an important class of compounds that require quantification from food sources to monitor nutritional value. In this study we analysed six water-soluble B vitamins [thiamine (B1), riboflavin (B2), nicotinic acid (B3, NAc), nicotinamide (B3, NAm), pyridoxal (B6), and folic acid (B9)] and ascorbic acid (vit C) by hydrophilic interaction liquid chromatography (HILIC), and compared UV, fluorescence (FLD), and coulometric detection to optimise a method to quantify the vitamins from food sources. Employing UV/diode array (DAD) and fluorimetric detection, the six B vitamins were detected in a single run using gradient elution from 100% to 60% solvent B [10 mM ammonium acetate, pH 5.0, in acetonitrile and water 95:5 (v:v)] over 18 min. UV detection was performed at 268 nm for B1, 260 nm for both B3 species, and 284 nm for B9. FLD was employed for B2 at an excitation wavelength of 268 nm and emission of 513 nm, and at 284 nm/317 nm for B6. Coulometric detection can be used to detect B6, B9, and vit C, and was performed isocratically at 75% and 85% of solvent B, respectively. B6 was analysed at a potential of 720 mV, B9 at 600 mV, and vit C at 30 mV. Retention times (0.96 to 11.81 min), intra-day repeatability (CV 1.6 to 3.6), inter-day variability (CV 1.8 to 11.1), and linearity (R 0.9877 to 0.9995) remained good under these conditions, with limits of detection varying from 6.6 to 164.6 ng mL⁻¹ and limits of quantification between 16.8 and 548.7 ng mL⁻¹. The method was successfully applied to the quantification of six B vitamins in a fortified food product and is, to our knowledge, the first to simultaneously determine multiple water-soluble vitamins extracted from a food matrix using HILIC. Copyright © 2014 Elsevier B.V. All rights reserved.
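The record does not state how its detection and quantification limits were derived; a common ICH-style convention estimates them from the calibration slope S and the response standard deviation σ as LOD = 3.3σ/S and LOQ = 10σ/S. A sketch with arbitrary numbers, not the paper's calibration data:

```python
# ICH-style limit estimates from a calibration: LOD = 3.3*sigma/S,
# LOQ = 10*sigma/S. Sigma and slope values here are arbitrary; the
# paper's limits were reported, not re-derived.
def lod_loq(sigma, slope):
    """sigma: SD of blank/residual response; slope: calibration slope."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

lod, loq = lod_loq(sigma=0.5, slope=0.25)  # response units per (ng/mL)
```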

  14. Green's matrix for a second-order self-adjoint matrix differential operator

    International Nuclear Information System (INIS)

    Sisman, Tahsin Cagri; Tekin, Bayram

    2010-01-01

    A systematic construction of the Green's matrix for a second-order self-adjoint matrix differential operator from the linearly independent solutions of the corresponding homogeneous differential equation set is carried out. We follow the general approach of extracting the Green's matrix from the Green's matrix of the corresponding first-order system. This construction is required in the cases where the differential equation set cannot be turned to an algebraic equation set via transform techniques.
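The extraction step the abstract describes rests on a standard identity: for a first-order system with fundamental matrix Φ(x), the Green's matrix of the initial value problem takes the textbook form below (this is the generic identity, not the paper's self-adjoint boundary-value construction):

```latex
% First-order system: Y'(x) = A(x)\,Y(x) + F(x), with fundamental
% matrix \Phi(x) built from linearly independent homogeneous solutions.
% Green's matrix of the initial value problem (textbook form):
G(x,\xi) = \Phi(x)\,\Phi^{-1}(\xi)\,\theta(x-\xi),
\qquad
Y_p(x) = \int G(x,\xi)\,F(\xi)\,\mathrm{d}\xi .
```

The paper's contribution is extracting from such a first-order Green's matrix the Green's matrix of the original second-order self-adjoint operator.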

  15. GoM Diet Matrix

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set was taken from CRD 08-18 at the NEFSC. Specifically, the Gulf of Maine diet matrix was developed for the EMAX exercise described in that center...

  16. Validation of simple quantification methods for 18F FP CIT PET Using Automatic Delineation of volumes of interest based on statistical probabilistic anatomical mapping and isocontour margin setting

    International Nuclear Information System (INIS)

    Kim, Yong Il; Im, Hyung Jun; Paeng, Jin Chul; Lee, Jae Sung; Eo, Jae Seon; Kim, Dong Hyun; Kim, Euishin E.; Kang, Keon Wook; Chung, June Key; Lee Dong Soo

    2012-01-01

    18F-FP-CIT positron emission tomography (PET) is an effective imaging method for dopamine transporters. In usual clinical practice, 18F-FP-CIT PET is analyzed visually or quantified using manual delineation of a volume of interest (VOI) for the striatum. In this study, we suggested and validated two simple quantitative methods based on automatic VOI delineation using statistical probabilistic anatomical mapping (SPAM) and isocontour margin setting. Seventy-five 18F-FP-CIT images acquired in routine clinical practice were used for this study. A study-specific image template was made and the subject images were normalized to the template. Afterwards, uptakes in the striatal regions and cerebellum were quantified using probabilistic VOIs based on SPAM. A quantitative parameter, Q_SPAM, was calculated to simulate binding potential. Additionally, the functional volume of each striatal region and its uptake were measured in automatically delineated VOIs using isocontour margin setting. The uptake-volume product (Q_UVP) was calculated for each striatal region. Q_SPAM and Q_UVP were calculated for each visual grading, and the influence of cerebral atrophy on the measurements was tested. Image analyses were successful in all cases. Both Q_SPAM and Q_UVP were significantly different according to visual grading (p < 0.001). The agreements of Q_UVP and Q_SPAM with visual grading were slight to fair for the caudate nucleus (κ = 0.421 and 0.291, respectively) and good to perfect for the putamen (κ = 0.663 and 0.607, respectively). Also, Q_SPAM and Q_UVP had a significant correlation with each other (p < 0.001). Cerebral atrophy made a significant difference in the Q_SPAM and Q_UVP of caudate nucleus regions with decreased 18F-FP-CIT uptake. Simple quantitative measurements of Q_SPAM and Q_UVP showed acceptable agreement with visual grading. Although Q_SPAM in some groups may be influenced by cerebral atrophy, these simple methods are expected to be effective in the quantitative analysis of 18F-FP
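The binding-potential-like parameter the record describes is, in outline, a background-referenced uptake ratio. A hedged sketch with arbitrary uptake values; in the published pipeline the regional means come from SPAM- or isocontour-delineated VOIs first:

```python
# Hedged sketch of a background-referenced uptake ratio of the kind
# the study's parameters simulate (binding potential). Uptake values
# are arbitrary; the real pipeline derives VOI means from the images.
def specific_binding_ratio(striatal_mean, cerebellar_mean):
    """Striatal uptake referenced to the cerebellar background."""
    return (striatal_mean - cerebellar_mean) / cerebellar_mean

q = specific_binding_ratio(striatal_mean=7.2, cerebellar_mean=2.4)
```

Lower ratios correspond to reduced dopamine transporter binding, which is what the visual grades track.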

  17. Obtaining valid laboratory data in clinical trials conducted in resource diverse settings: lessons learned from a microbicide phase III clinical trial.

    Directory of Open Access Journals (Sweden)

    Tania Crucitti

    2010-10-01

    Full Text Available Over the last decade several phase III microbicide trials have been conducted in developing countries. However, laboratories in resource-constrained settings do not always have the experience, infrastructure, and capacity to deliver laboratory data meeting the high standards of clinical trials. This paper describes the design and outcomes of a laboratory quality assurance program which was implemented during a phase III clinical trial evaluating the efficacy of the candidate microbicide Cellulose Sulfate 6% (CS) [1]. In order to assess the effectiveness of CS for HIV and STI prevention, a phase III clinical trial was conducted in 5 sites: 3 in Africa and 2 in India. The trial sponsor identified an International Central Reference Laboratory (ICRL), responsible for the design and management of a quality assurance program which would guarantee the reliability of laboratory data. The ICRL provided advice on the tests, assessed local laboratories, organized trainings, conducted supervision visits, performed re-tests, and prepared control panels. Local laboratories were provided with control panels for HIV rapid tests and a Chlamydia trachomatis/Neisseria gonorrhoeae (CT/NG) amplification technique. Aliquots from the respective control panels were tested by local laboratories and compared with results obtained at the ICRL. Overall, good results were observed. However, discordances between the ICRL and site laboratories were identified for HIV and CT/NG results. One particular site experienced difficulties with HIV rapid testing shortly after study initiation. At all sites, DNA contamination was identified as a cause of invalid CT/NG results. Both problems were detected and solved in a timely manner. Through immediate feedback, guidance, and repeated training of laboratory staff, additional inaccuracies were prevented. Quality control guidelines, when applied in field laboratories, ensured the reliability and validity of the final study data. It is essential that sponsors

  18. Validation of a new method for testing provider clinical quality in rural settings in low- and middle-income countries: the observed simulated patient.

    Directory of Open Access Journals (Sweden)

    Tin Aung

    Full Text Available BACKGROUND: Assessing the quality of care provided by individual health practitioners is critical to identifying possible risks to the health of the public. However, existing assessment methods can be inaccurate, expensive, or infeasible in many developing country settings, particularly in rural areas and especially for children. Following an assessment of the strengths and weaknesses of the existing methods for provider assessment, we developed a synthesis method combining components of direct observation, clinical vignettes, and medical mannequins, which we have termed the "Observed Simulated Patient" or OSP. An OSP assessment involves a trained actor playing the role of a 'mother', a life-size doll representing a 5-year-old boy, and a trained observer. The provider being assessed was informed in advance of the role-playing and told to conduct the diagnosis and treatment as he normally would while verbally describing the examinations. METHODOLOGY/PRINCIPAL FINDINGS: We tested the validity of OSP by conducting parallel scoring of medical providers in Myanmar, assessing the quality of their diagnosis and treatment of pediatric malaria, first by direct observation of true patients and second by OSP. Data were collected from 20 private independent medical practitioners in Mon and Kayin States, Myanmar, between December 26, 2010 and January 12, 2011. All areas of assessment showed agreement between OSP and direct observation above 90%, except for history taking related to past experience with malaria medicines. In this area, providers did not ask questions of the OSP to the same degree that they questioned real patients (agreement 82.8%). CONCLUSIONS/SIGNIFICANCE: The OSP methodology may provide a valuable option for quality assessment of providers in places, or for health conditions, where other assessment tools are unworkable.

  19. Improved diagnostic accuracy of Alzheimer's disease by combining regional cortical thickness and default mode network functional connectivity: Validated in the Alzheimer's disease neuroimaging initiative set

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ji Eun; Park, Bum Woo; Kim, Sang Joon; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Jung; Oh, Joo Young; Shim, Woo Hyun [Dept. of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Lee, Jae Hong; Roh, Jee Hoon [University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of)

    2017-11-15

    To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity and to validate this model's diagnostic accuracy in a validation set. Data from 98 subjects was retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network was extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions were more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Combining functional information with CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease.
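The study feeds cortical-thickness (CThk) and connectivity features to a support vector machine; as a dependency-free stand-in, the sketch below classifies the same kind of two-feature vectors with a nearest-centroid rule instead of an SVM. All numbers are synthetic.

```python
# Stand-in for the study's SVM step: nearest-centroid classification
# of [cortical thickness, DMN connectivity] feature vectors.
# The feature values below are synthetic, not the study's data.
def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(x, cen_patient, cen_control):
    dist2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return "AD" if dist2(x, cen_patient) < dist2(x, cen_control) else "HC"

patients = [[2.1, 0.30], [2.0, 0.28], [2.2, 0.32]]  # [CThk mm, connectivity]
controls = [[2.6, 0.55], [2.7, 0.60], [2.5, 0.52]]
label = classify([2.15, 0.31], centroid(patients), centroid(controls))
```

The point of the combination, as in the study, is that each feature separates the groups imperfectly on its own but the joint feature vector separates them better.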

  20. Validation for 2D/3D registration II: The comparison of intensity- and gradient-based merit functions using a new gold standard data set

    International Nuclear Information System (INIS)

    Gendrin, Christelle; Markelj, Primoz; Pawiro, Supriyanto Ardjo; Spoerk, Jakob; Bloch, Christoph; Weber, Christoph; Figl, Michael; Bergmann, Helmar; Birkfellner, Wolfgang; Likar, Bostjan; Pernus, Franjo

    2011-01-01

    Purpose: A new gold standard data set for validation of 2D/3D registration based on a porcine cadaver head with attached fiducial markers was presented in the first part of this article. The advantage of this new phantom is the large amount of soft tissue, which simulates realistic conditions for registration. This article tests the performance of intensity- and gradient-based algorithms for 2D/3D registration using the new phantom data set. Methods: Intensity-based methods with four merit functions, namely, cross correlation, rank correlation, correlation ratio, and mutual information (MI), and two gradient-based algorithms, the backprojection gradient-based (BGB) registration method and the reconstruction gradient-based (RGB) registration method, were compared. Four volumes consisting of CBCT with two fields of view, 64 slice multidetector CT, and magnetic resonance-T1 weighted images were registered to a pair of kV x-ray images and a pair of MV images. A standardized evaluation methodology was employed. Targets were evenly spread over the volumes and 250 starting positions of the 3D volumes with initial displacements of up to 25 mm from the gold standard position were calculated. After the registration, the displacement from the gold standard was retrieved and the root mean square (RMS), mean, and standard deviation mean target registration errors (mTREs) over 250 registrations were derived. Additionally, the following merit properties were computed: Accuracy, capture range, number of minima, risk of nonconvergence, and distinctiveness of optimum for better comparison of the robustness of each merit. Results: Among the merit functions used for the intensity-based method, MI reached the best accuracy with an RMS mTRE down to 1.30 mm. Furthermore, it was the only merit function that could accurately register the CT to the kV x rays with the presence of tissue deformation. As for the gradient-based methods, BGB and RGB methods achieved subvoxel accuracy (RMS m
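Mutual information, the merit function that performed best in this comparison, can be sketched with a simple joint-histogram estimate (a toy implementation for 1D intensity sequences, not the authors' code, which operates on 2D/3D images):

```python
# Toy joint-histogram mutual information between two intensity
# sequences; illustrative only, not the authors' implementation.
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    """a, b: equal-length sequences of intensities in [0, 1)."""
    qa = [min(int(v * bins), bins - 1) for v in a]
    qb = [min(int(v * bins), bins - 1) for v in b]
    n = len(a)
    pa, pb, pab = Counter(qa), Counter(qb), Counter(zip(qa, qb))
    return sum((c / n) * math.log((c / n) / ((pa[i] / n) * (pb[j] / n)))
               for (i, j), c in pab.items())

# An image compared with itself should score at least as high as
# the same image compared with a scrambled version.
x = [0.1, 0.2, 0.8, 0.9, 0.15, 0.85]
mi_self = mutual_information(x, x)
mi_scrambled = mutual_information(x, x[::-1])
```

Registration maximizes this quantity over the rigid-body pose, which is why the capture range and number of local minima reported above matter as much as the final accuracy.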

  1. Matrix inequalities

    CERN Document Server

    Zhan, Xingzhi

    2002-01-01

    The main purpose of this monograph is to report on recent developments in the field of matrix inequalities, with emphasis on useful techniques and ingenious ideas. Among other results this book contains the affirmative solutions of eight conjectures. Many theorems unify or sharpen previous inequalities. The author's aim is to streamline the ideas in the literature. The book can be read by research workers, graduate students and advanced undergraduates.

  2. Matrix Sampling of Items in Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Ruth A. Childs

    2003-07-01

    Full Text Available Matrix sampling of items, that is, division of a set of items into different versions of a test form, is used by several large-scale testing programs. Like other test designs, matrixed designs have both advantages and disadvantages. For example, testing time per student is less than if each student received all the items, but the comparability of student scores may decrease. Also, curriculum coverage is maintained, but reporting of scores becomes more complex. In this paper, matrixed designs are compared with more traditional designs in nine categories of costs: development costs, materials costs, administration costs, educational costs, scoring costs, reliability costs, comparability costs, validity costs, and reporting costs. In choosing among test designs, a testing program should examine the costs in light of its mandate(s), the content of the tests, and the financial resources available, among other considerations.

  3. In-depth, high-accuracy proteomics of sea urchin tooth organic matrix

    Directory of Open Access Journals (Sweden)

    Mann Matthias

    2008-12-01

    Full Text Available Abstract Background The organic matrix contained in biominerals plays an important role in regulating mineralization and in determining biomineral properties. However, most components of biomineral matrices remain unknown at present. In sea urchin tooth, which is an important model for developmental biology and biomineralization, only a few matrix components have been identified. The recent publication of the Strongylocentrotus purpuratus genome sequence rendered possible not only the identification of genes potentially coding for matrix proteins, but also the direct identification of proteins contained in matrices of skeletal elements by in-depth, high-accuracy proteomic analysis. Results We identified 138 proteins in the matrix of tooth powder. Only 56 of these proteins were previously identified in the matrices of test (shell) and spine. Among the novel components was an interesting group of five proteins containing alanine- and proline-rich neutral or basic motifs separated by acidic glycine-rich motifs. In addition, four of the five proteins contained either one or two predicted Kazal protease inhibitor domains. The major components of the tooth matrix were however largely identical to the set of spicule matrix proteins and MSP130-related proteins identified in test (shell) and spine matrix. Comparison of the matrices of crushed teeth to intact teeth revealed a marked dilution of known intracrystalline matrix proteins and a concomitant increase in some intracellular proteins. Conclusion This report presents the most comprehensive list of sea urchin tooth matrix proteins available at present. The complex mixture of proteins identified may reflect many different aspects of the mineralization process. A comparison between intact tooth matrix, presumably containing odontoblast remnants, and crushed tooth matrix served to differentiate between matrix components and possible contributions of cellular remnants. Because LC-MS/MS-based methods directly

  4. Matrix analysis

    CERN Document Server

    Bhatia, Rajendra

    1997-01-01

    A good part of matrix theory is functional analytic in spirit. This statement can be turned around. There are many problems in operator theory, where most of the complexities and subtleties are present in the finite-dimensional case. My purpose in writing this book is to present a systematic treatment of methods that are useful in the study of such problems. This book is intended for use as a text for upper division and graduate courses. Courses based on parts of the material have been given by me at the Indian Statistical Institute and at the University of Toronto (in collaboration with Chandler Davis). The book should also be useful as a reference for research workers in linear algebra, operator theory, mathematical physics and numerical analysis. A possible subtitle of this book could be Matrix Inequalities. A reader who works through the book should expect to become proficient in the art of deriving such inequalities. Other authors have compared this art to that of cutting diamonds. One first has to...

  5. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-03-01

    The purpose of this research is to provide a fundamental computational investigation into the possible integration of experimental activities with the Advanced Test Reactor Critical (ATR-C) facility with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of integral data for improving neutron cross sections. Further assessment of oscillation
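The "1$ insertion" limit above refers to reactivity measured in dollars, i.e. absolute reactivity divided by the effective delayed neutron fraction. A sketch of the standard bookkeeping; the β_eff used here is a generic thermal-reactor figure assumed for illustration, not ATR's value:

```python
# Reactivity worth bookkeeping behind a "1$" insertion limit:
# reactivity from two k-eff values, expressed in pcm and in dollars.
# beta_eff is a generic thermal-reactor assumption, not ATR's value.
def reactivity(k_ref, k_pert, beta_eff=0.0065):
    rho = (k_pert - k_ref) / (k_pert * k_ref)  # absolute reactivity
    return rho * 1e5, rho / beta_eff           # (pcm, dollars)

pcm, dollars = reactivity(k_ref=1.00000, k_pert=1.00650)
```

In practice k_ref and k_pert would come from MCNP5 runs of the benchmark model with and without the material bars in the flux trap.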

  6. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-07-01

    The purpose of this document is to identify some suggested types of experiments that can be performed in the Advanced Test Reactor Critical (ATR-C) facility. A fundamental computational investigation is provided to demonstrate possible integration of experimental activities in the ATR-C with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of

  7. UpSet: Visualization of Intersecting Sets

    Science.gov (United States)

    Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter

    2016-01-01

    Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
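The exclusive intersection sizes that UpSet's matrix layout encodes can be computed directly; a toy sketch of that computation (not the UpSet implementation, which is a web-based visualization):

```python
# Exclusive intersection sizes of named sets: for each combination,
# count elements belonging to exactly those sets and no others.
# These are the quantities an UpSet-style matrix row encodes.
from itertools import combinations

def exclusive_intersections(named_sets):
    names = list(named_sets)
    out = {}
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            inside = set.intersection(*(named_sets[n] for n in combo))
            outside = set().union(*(named_sets[n] for n in names if n not in combo))
            out[combo] = len(inside - outside)
    return out

sets = {"A": {1, 2, 3, 4}, "B": {3, 4, 5}, "C": {4, 5, 6}}
counts = exclusive_intersections(sets)
```

The combinatorial explosion the abstract mentions is visible here: the number of candidate combinations is 2^n - 1, which is why UpSet's task-driven aggregation and sorting matter for more than a handful of sets.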

  8. Matrix pentagons

    Science.gov (United States)

    Belitsky, A. V.

    2017-10-01

    The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang-Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.

  9. Matrix pentagons

    Directory of Open Access Journals (Sweden)

    A.V. Belitsky

    2017-10-01

    Full Text Available The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang–Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.

  10. The Virtual Care Climate Questionnaire: Development and Validation of a Questionnaire Measuring Perceived Support for Autonomy in a Virtual Care Setting

    NARCIS (Netherlands)

    Smit, E.S.; Dima, A.L.; Immerzeel, S.A.M.; van den Putte, B.; Williams, G.C.

    Background: Web-based health behavior change interventions may be more effective if they offer autonomy-supportive communication facilitating the internalization of motivation for health behavior change. Yet, at this moment no validated tools exist to assess user-perceived autonomy-support of such

  11. Validity, reliability and utility of the Irish Nursing Minimum Data Set for General Nursing in investigating the effectiveness of nursing interventions in a general nursing setting: A repeated measures design.

    LENUS (Irish Health Repository)

    Morris, Roisin

    2013-08-06

    Internationally, nursing professionals are coming under increasing pressure to highlight the contribution they make to health care and patient outcomes. Despite this, difficulties exist in the provision of quality information aimed at describing nursing work in sufficient detail. The Irish Minimum Data Set for General Nursing is a new nursing data collection system aimed at highlighting the contribution of nursing to patient care.

  12. Single-item measures for depression and anxiety: Validation of the Screening Tool for Psychological Distress in an inpatient cardiology setting.

    Science.gov (United States)

    Young, Quincy-Robyn; Nguyen, Michelle; Roth, Susan; Broadberry, Ann; Mackay, Martha H

    2015-12-01

    Depression and anxiety are common among patients with cardiovascular disease (CVD) and confer significant cardiac risk, contributing to CVD morbidity and mortality. Unfortunately, due to the lack of screening tools that address the specific needs of hospitalized patients, few cardiac inpatient programs offer routine screening for these forms of psychological distress, despite recommendations to do so. The purpose of this study was to validate single-item measures for depression and anxiety among cardiac inpatients. Consecutive inpatients were recruited from the cardiology and cardiac surgery step-down units at a university-affiliated, quaternary-care hospital. Subjects completed a questionnaire that included: (a) demographics, (b) single-item-measures for depression and anxiety (from the Screening Tool for Psychological Distress (STOP-D)), and (c) Hospital Anxiety and Depression Scale (HADS). One hundred and five participants were recruited with a wide variety of cardiac diagnoses, having a mean age of 66 years, and 28% were women. Both STOP-D items were highly correlated with their corresponding validated measures and demonstrated robust receiver-operator characteristic curves. Severity scores on both items correlated well with established severity cut-off scores on the corresponding subscales of the HADS. The STOP-D is a self-administered, self-report measure using two independent items that provide severity scores for depression and anxiety. The tool performs very well compared with other previously validated measures. Requiring no additional scoring and being free, STOP-D offers a simple and valid method for identifying hospitalized cardiac patients who are experiencing psychological distress. This crucial first step triggers initiation of appropriate monitoring and intervention, thus reducing the likelihood of the adverse cardiac outcomes associated with psychological distress. © The European Society of Cardiology 2014.

  13. Predictive Validity of the STarT Back Tool for Risk of Persistent Disabling Back Pain in a U.S. Primary Care Setting.

    Science.gov (United States)

    Suri, Pradeep; Delaney, Kristin; Rundell, Sean D; Cherkin, Daniel C

    2018-04-03

    To examine the predictive validity of the Subgrouping for Targeted Treatment (STarT Back) tool for classifying people with back pain into categories of low, medium, and high risk of persistent disabling back pain in U.S. primary care. Secondary analysis of data from participants receiving usual care in a randomized clinical trial. Primary care clinics. Adults (N = 1109) ≥18 years of age with back pain. Those with specific causes of back pain (pregnancy, disc herniation, vertebral fracture, spinal stenosis) and work-related injuries were not included. Not applicable. The original 9-item version of the STarT Back tool, administered at baseline, stratified patients by their risk (low, medium, high) of persistent disabling back pain (STarT Back risk group). Persistent disabling back pain was defined as Roland-Morris Disability Questionnaire scores of ≥7 at 6-month follow-up. The STarT Back risk group was a significant predictor of persistent disabling back pain. STarT Back risk groups successfully separated people with back pain into distinct categories of risk for persistent disabling back pain at 6-month follow-up in U.S. primary care. These results were very similar to those in the original STarT Back validation study. This validation study is a necessary first step toward identifying whether the entire STarT Back approach, including matched/targeted treatment, can be effectively used for primary care in the United States. Published by Elsevier Inc.

  14. Explicit Covariance Matrix for Particle Measurement Precision

    CERN Document Server

    Karimäki, Veikko

    1997-01-01

    We derive explicit and precise formulae for 3 by 3 error matrix of the particle transverse momentum, direction and impact parameter. The error matrix elements are expressed as functions of up to fourth order statistical moments of the measured coordinates. The formulae are valid for any curvature and track length in case of negligible multiple scattering.

  15. Detection of depression in low resource settings: validation of the Patient Health Questionnaire (PHQ-9) and cultural concepts of distress in Nepal.

    Science.gov (United States)

    Kohrt, Brandon A; Luitel, Nagendra P; Acharya, Prakash; Jordans, Mark J D

    2016-03-08

    Despite recognition of the burden of disease due to mood disorders in low- and middle-income countries, there is a lack of consensus on best practices for detecting depression. Self-report screening tools, such as the Patient Health Questionnaire (PHQ-9), require modification for low literacy populations and to assure cultural and clinical validity. An alternative approach is to employ idioms of distress that are locally salient, but these are not synonymous with psychiatric categories. Therefore, our objectives were to evaluate the validity of the PHQ-9, assess the added value of using idioms of distress, and develop an algorithm for depression detection in primary care. We conducted a transcultural translation of the PHQ-9 in Nepal using qualitative methods to achieve semantic, content, technical, and criterion equivalence. Researchers administered the Nepali PHQ-9 to randomly selected patients in a rural primary health care center. Trained psychosocial counselors administered a validated Nepali depression module of the Composite International Diagnostic Interview (CIDI) to validate the Nepali PHQ-9. Patients were also assessed for local idioms of distress including heart-mind problems (Nepali, manko samasya). Among 125 primary care patients, 17 (14 %) were positive for a major depressive episode in the prior 2 weeks based on CIDI administration. With a Nepali PHQ-9 cutoff ≥ 10: sensitivity = 0.94, specificity = 0.80, positive predictive value (PPV) =0.42, negative predictive value (NPV) =0.99, positive likelihood ratio = 4.62, and negative likelihood ratio = 0.07. For heart-mind problems: sensitivity = 0.94, specificity = 0.27, PPV = 0.17, NPV = 0.97. With an algorithm comprising two screening questions (1. presence of heart-mind problems and 2. function impairment due to heart-mind problems) to determine who should receive the full PHQ-9, the number of patients requiring administration of the PHQ-9 could be reduced by 50 %, PHQ-9 false positives would be
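The screening statistics quoted above all derive from a single 2x2 table of screen result against the CIDI reference standard. A sketch of the arithmetic follows; the counts below are hypothetical, chosen only to be consistent with the reported summary figures (17 of 125 CIDI-positive, sensitivity 0.94, specificity 0.80), not taken from the study's raw cross-tabulation:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from a 2x2 confusion table:
    tp/fp/fn/tn = true/false positives and negatives of the screen."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sensitivity / (1 - specificity),
        "LR-": (1 - sensitivity) / specificity,
    }

# Hypothetical counts consistent with the reported figures:
# 17 depressed (16 screen-positive), 108 non-depressed (86 screen-negative).
m = screening_metrics(tp=16, fp=22, fn=1, tn=86)
print({k: round(v, 2) for k, v in m.items()})
```

These counts reproduce the published PPV of 0.42, NPV of 0.99, and likelihood ratios of 4.62 and 0.07, illustrating how a high-sensitivity screen in a low-prevalence clinic yields a modest PPV.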

  16. Translation and validation of the breast feeding self efficacy scale into the Kiswahili language in resource restricted setting in Thika – Kenya

    Directory of Open Access Journals (Sweden)

    D.M Mituki

    2017-01-01

    Full Text Available Background: Exclusive breastfeeding (EBF) is one of the most cost‐effective, health‐promoting, and disease‐preventing interventions and has been referred to as the cornerstone of child survival. Many mothers, however, discontinue EBF before the end of the six months recommended by the World Health Organization (WHO), some due to psychosocial issues. The breastfeeding self‐efficacy scale‐short form (BSES‐SF) has been used to establish mothers' self‐efficacy towards breastfeeding by computing breastfeeding self‐efficacy (BSE) scores. These scores have been used globally to predict EBF duration. Internationally accepted tools can be used to compare data across countries. Such tools, however, need to be translated into local languages for different countries and set‐ups. Objectives: The aim of the study was to translate and validate the English BSES‐SF into Kiswahili, the national language of Kenya. Methods: The study was a pilot study within the main cluster randomized longitudinal study. Pregnant women at 37 weeks gestation were randomly placed into intervention (n=21) and comparison (n=21) groups. The BSES‐SF questionnaire was used to collect data on BSE at baseline and another questionnaire was used to collect socio‐economic data. Mothers in the intervention group were educated on the importance of EBF and the skills required, while those in the comparison group went through usual care provided at the health facility. Nutrition education was tailored to promoting maternal BSE. Results: The translated BSES‐SF was found to be easy to understand and showed good consistency and semantic validity. Predictive validity was demonstrated through significant mean differences between the groups. The intervention group had higher EBF rates at 6 weeks post‐partum (χ2=6.170, p=0.013). The Cronbach's alpha coefficient for the Kiswahili version of the BSES‐SF was 0.91, with a mean score of 60.95 (SD ±10.36) and an item mean of 4.354. Conclusion

  17. Quality Indicators for In-Hospital Pharmaceutical Care of Dutch Elderly Patients: Development and Validation of an ACOVE-Based Quality Indicator Set

    NARCIS (Netherlands)

    Wierenga, Peter C.; Klopotowska, Joanna E.; Smorenburg, Susanne M.; van Kan, Hendrikus J.; Bijleveld, Yuma A.; Dijkgraaf, Marcel G.; de Rooij, Sophia E.

    2011-01-01

    Background: In 2001, the ACOVE (Assessing Care Of Vulnerable Elders) quality indicators (QIs) were developed in the US to measure the quality of care of vulnerable elderly patients. However, the ACOVE QI set was developed mainly to assess the overall quality of care of community-dwelling vulnerable

  18. Building, testing and validating a set of home-made von Frey filaments: a precise, accurate and cost effective alternative for nociception assessment.

    Science.gov (United States)

    de Sousa, Marcelo Victor Pires; Ferraresi, Cleber; de Magalhães, Ana Carolina; Yoshimura, Elisabeth Mateus; Hamblin, Michael R

    2014-07-30

    A von Frey filament (vFF) is a type of aesthesiometer usually made of nylon perpendicularly held in a base. It can be used in paw withdrawal pain threshold assessment, one of the most popular tests for pain evaluation using animal models. For this test, a set of filaments, each able to exert a different force, is applied to the animal paw, from the weakest to the strongest, until the paw is withdrawn. We made 20 low-cost vFF using nylon filaments of different lengths and constant diameter glued perpendicularly to the ends of popsicle sticks. They were calibrated using a laboratory balance scale. Building and calibrating took around 4 h and confirmed the theoretical prediction that the force exerted is inversely proportional to the length and directly proportional to the width of the filament. The calibration showed that they were precise and accurate. We analyzed the paw withdrawal threshold assessed with the set of home-made vFF and with a high-quality commercial set of 5 monofilament vFF (Stoelting, Wood Dale, USA) in two groups (n=5) of healthy mice. The home-made vFF precisely and accurately measured the hind paw withdrawal threshold (20.3±0.9 g). The commercial vFF have different diameters while our set has the same diameter, avoiding the problem of lower sensitivity to larger diameter filaments. Building a set of vFF is easy, cost-effective, and depending on the kind of tests, can increase precision and accuracy of animal nociception evaluation. Copyright © 2014 Elsevier B.V. All rights reserved.
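The calibration finding reported above (force inversely proportional to filament length, directly proportional to width) amounts to a simple scaling rule relative to one balance-calibrated filament. A hypothetical sketch; the reference values and function name are invented for illustration:

```python
def scaled_force_g(length_mm, width_mm, ref):
    """Scale a balance-calibrated reference filament's force using the
    reported proportionality: force ~ width / length."""
    return ref["force_g"] * (width_mm / ref["width_mm"]) * (ref["length_mm"] / length_mm)

# Hypothetical reference filament calibrated on a laboratory balance:
reference = {"length_mm": 40.0, "width_mm": 0.4, "force_g": 5.0}

# Same nylon at half the length exerts roughly twice the force:
print(scaled_force_g(20.0, 0.4, reference))  # → 10.0
```

Because the home-made set keeps the diameter constant and varies only the length, this one relationship is enough to lay out a graded series of target forces before verifying each filament on the balance.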

  19. Validated Outcomes in the Grafting of Autologous Fat to the Breast: The VOGUE Study. Development of a Core Outcome Set for Research and Audit.

    Science.gov (United States)

    Agha, Riaz A; Pidgeon, Thomas E; Borrelli, Mimi R; Dowlut, Naeem; Orkar, Ter-Er K; Ahmed, Maziyah; Pujji, Ojas; Orgill, Dennis P

    2018-05-01

    Autologous fat grafting is an important part of the reconstructive surgeon's toolbox when treating women affected by breast cancer and subsequent tumor extirpation. The debate over safety and efficacy of autologous fat grafting continues within the literature. However, work performed by the authors' group has shown significant heterogeneity in outcome reporting. Core outcome sets have been shown to reduce heterogeneity in outcome reporting. The authors' goal was to develop a core outcome set for autologous fat grafting in breast reconstruction. The authors published their protocol a priori. A Delphi consensus exercise among key stakeholders was conducted using a list of outcomes generated from their previous work. These outcomes were divided into six domains: oncologic, clinical, aesthetic and functional, patient-reported, process, and radiologic. In the first round, 55 of 78 participants (71 percent) completed the Delphi consensus exercise. Consensus was reached on nine of the 13 outcomes. Given the clarity of the results and the lack of additional suggested outcomes, further rounds were deemed unnecessary. The VOGUE Study has led to the development of a much-needed core outcome set in the active research front and clinical area of autologous fat grafting. The authors hope that clinicians will use this core outcome set to audit their practice, and that researchers will implement these outcomes in their study design and reporting of autologous fat grafting outcomes. The authors encourage journals and surgical societies to endorse and encourage use of this core outcome set to help refine the scientific quality of the debate, the discourse, and the literature. Therapeutic, V.

  20. Validity of the linear no-threshold (LNT) hypothesis in setting radiation protection regulations for the inhabitants in high level natural radiation areas of Ramsar, Iran

    International Nuclear Information System (INIS)

    Mortazavi, S.M.J.; Atefi, M.; Razi, Z.; Mortazavi Gh

    2010-01-01

    Some areas in Ramsar, a city in northern Iran, have long been known as inhabited areas with the highest levels of natural radiation. Despite the fact that the health effects of high doses of ionizing radiation are well documented, biological effects of above the background levels of natural radiation are still controversial and the validity of the LNT hypothesis in this area, has been criticized by many investigators around the world. The study of the health effects of high levels of natural radiation in areas such as Ramsar, help scientists to investigate the biological effects without the need for extrapolating the observations either from high doses of radiation to low dose region or from laboratory animals to humans. Considering the importance of these studies, National Radiation Protection Department (NRPD) of the Iranian Nuclear Regulatory Authority has started an integrative research project on the health effects of long-term exposure to high levels of natural radiation. This paper reviews findings of the studies conducted on the plants and humans living or laboratory animals kept in high level natural radiation areas of Ramsar. In human studies, different end points such as DNA damage, chromosome aberrations, blood cells and immunological alterations are discussed. This review comes to the conclusion that no reproducible detrimental health effect has been reported so far. In this paper the validity of LNT hypothesis in the assessment of the health effects of high levels of natural radiation is discussed. (author)

  1. Use of international data sets to evaluate and validate pathway assessment models applicable to exposure and dose reconstruction at DOE facilities. Monthly progress reports and final report, October--December 1994

    International Nuclear Information System (INIS)

    Hoffman, F.O.

    1995-01-01

    The objective of Task 7.1D was to (1) establish a collaborative US-USSR effort to improve and validate our methods of forecasting doses and dose commitments from the direct contamination of food sources, and (2) perform experiments and validation studies to improve our ability to predict rapidly and accurately the long-term internal dose from the contamination of agricultural soil. At early times following an accident, the direct contamination of pasture and food stuffs, particularly leafy vegetation and grain, can be of great importance. This situation has been modeled extensively. However, models employed then to predict the deposition, retention and transport of radionuclides in terrestrial environments employed concepts and data bases that were more than a decade old. The extent to which these models have been tested with independent data sets was limited. The data gathered in the former-USSR (and elsewhere throughout the Northern Hemisphere) offered a unique opportunity to test model predictions of wet and dry deposition, agricultural foodchain bioaccumulation, and short- and long-term retention, redistribution, and resuspension of radionuclides from a variety of natural and artificial surfaces. The current objective of this project is to evaluate and validate pathway-assessment models applicable to exposure and dose reconstruction at DOE facilities through use of international data sets. This project incorporates the activity of Task 7.1D into a multinational effort to evaluate models and data used for the prediction of radionuclide transfer through agricultural and aquatic systems to humans. It also includes participation in two studies, BIOMOVS (BIOspheric MOdel Validation Study) with the Swedish National Institute for Radiation Protection and VAMP (VAlidation of Model Predictions) with the International Atomic Energy Agency, that address testing the performance of models of radionuclide transport through foodchains

  2. Use of international data sets to evaluate and validate pathway assessment models applicable to exposure and dose reconstruction at DOE facilities. Monthly progress reports and final report, October--December 1994

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, F.O. [Senes Oak Ridge, Inc., TN (United States). Center for Risk Analysis

    1995-04-01

    The objective of Task 7.1D was to (1) establish a collaborative US-USSR effort to improve and validate our methods of forecasting doses and dose commitments from the direct contamination of food sources, and (2) perform experiments and validation studies to improve our ability to predict rapidly and accurately the long-term internal dose from the contamination of agricultural soil. At early times following an accident, the direct contamination of pasture and food stuffs, particularly leafy vegetation and grain, can be of great importance. This situation has been modeled extensively. However, models employed then to predict the deposition, retention and transport of radionuclides in terrestrial environments employed concepts and data bases that were more than a decade old. The extent to which these models have been tested with independent data sets was limited. The data gathered in the former-USSR (and elsewhere throughout the Northern Hemisphere) offered a unique opportunity to test model predictions of wet and dry deposition, agricultural foodchain bioaccumulation, and short- and long-term retention, redistribution, and resuspension of radionuclides from a variety of natural and artificial surfaces. The current objective of this project is to evaluate and validate pathway-assessment models applicable to exposure and dose reconstruction at DOE facilities through use of international data sets. This project incorporates the activity of Task 7.1D into a multinational effort to evaluate models and data used for the prediction of radionuclide transfer through agricultural and aquatic systems to humans. It also includes participation in two studies, BIOMOVS (BIOspheric MOdel Validation Study) with the Swedish National Institute for Radiation Protection and VAMP (VAlidation of Model Predictions) with the International Atomic Energy Agency, that address testing the performance of models of radionuclide transport through foodchains.

  3. Validity and reliability of the Malay version of the Hill-Bone compliance to high blood pressure therapy scale for use in primary healthcare settings in Malaysia: A cross-sectional study.

    Science.gov (United States)

    Cheong, A T; Tong, S F; Sazlina, S G

    2015-01-01

    Hill-Bone compliance to high blood pressure therapy scale (HBTS) is one of the useful scales in primary care settings. It has been tested in America, Africa and Turkey with variable validity and reliability. The aim of this paper was to determine the validity and reliability of the Malay version of HBTS (HBTS-M) for the Malaysian population. HBTS comprises three subscales assessing compliance to medication, appointment and salt intake. The content validity of HBTS to the local population was agreed through consensus of an expert panel. The 14 items used in the HBTS were adapted to reflect the local situations. It was translated into Malay and then back-translated into English. The translated version was piloted in 30 participants. This was followed by structural and predictive validity, and internal consistency testing in 262 patients with hypertension, who were on antihypertensive agent(s) for at least 1 year in two primary healthcare clinics in Kuala Lumpur, Malaysia. Exploratory factor analyses and the correlation between HBTS-M total score and blood pressure were performed. The Cronbach's alpha was calculated accordingly. Factor analysis revealed a three-component structure represented by two components on medication adherence and one on salt intake adherence. The Kaiser-Meyer-Olkin statistic was 0.764. The variance explained by each factor was 23.6%, 10.4% and 9.8%, respectively. However, the internal consistency for each component was suboptimal, with Cronbach's alpha of 0.64, 0.55 and 0.29, respectively. Although there were two components representing medication adherence, the theoretical concepts underlying each component could not be differentiated. In addition, there was no correlation between the HBTS-M total score and blood pressure. HBTS-M did not conform to the structural and predictive validity of the original scale.
    Its reliability in assessing medication and salt intake adherence would most probably be suboptimal in the Malaysian primary care setting.
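The Cronbach's alpha values reported for the HBTS-M components (0.64, 0.55, 0.29) come from the standard internal-consistency formula, alpha = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch with made-up item scores, not the study's data:

```python
def cronbach_alpha(scores):
    """scores: one list of item scores per respondent (rows x k items).
    Returns the classic internal-consistency coefficient."""
    k = len(scores[0])

    def sample_var(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [sample_var([row[i] for row in scores]) for i in range(k)]
    total_var = sample_var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items give alpha = 1:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

A subscale whose items barely covary, as with the salt-intake component here, drives the item-variance sum toward the total-score variance and alpha toward zero.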

  4. Measuring Post-Partum Haemorrhage in Low-Resource Settings: The Diagnostic Validity of Weighed Blood Loss versus Quantitative Changes in Hemoglobin.

    Directory of Open Access Journals (Sweden)

    Esther Cathyln Atukunda

    Full Text Available Accurate estimation of blood loss is central to prompt diagnosis and management of post-partum hemorrhage (PPH), which remains a leading cause of maternal mortality in low-resource countries. In such settings, blood loss is often estimated visually and subjectively by attending health workers, due to inconsistent availability of laboratory infrastructure. We evaluated the diagnostic accuracy of weighed blood loss (WBL) versus changes in peri-partum hemoglobin to detect PPH. Data from this analysis were collected as part of a randomized controlled trial comparing oxytocin with misoprostol for PPH (NCT01866241). Blood samples for complete blood count were drawn on admission and again prior to hospital discharge or before blood transfusion. During delivery, women were placed on drapes and had pre-weighed sanitary towels placed around their perineum. Blood was then drained into a calibrated container and the sanitary towels were added to estimate WBL, where each gram of blood was estimated as a milliliter. Sensitivity, specificity, negative and positive predictive values (PPVs) were calculated at various blood volume loss and time combinations, and we fit receiver-operator curves using blood loss at 1, 2, and 24 hours compared to a reference standard of haemoglobin decrease of >10%. A total of 1,140 women were enrolled in the study, of whom 258 (22.6%) developed PPH, defined as a haemoglobin drop >10%, and 262 (23.0%) had WBL ≥500 mL. WBL generally had poor sensitivity for detection of PPH, but a high positive predictive value (85% in high-prevalence settings when WBL exceeds 750 mL). WBL has poor sensitivity but high specificity compared to laboratory-based methods of PPH diagnosis. These characteristics correspond to a high PPV in areas with high PPH prevalence. Although WBL is not useful for excluding PPH, this low-cost, simple and reproducible method is promising as a reasonable method to identify significant PPH in such settings where quantifiable red cell indices are unavailable.
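The gravimetric WBL procedure described above reduces to simple arithmetic: drained volume plus the sanitary towels' weight gain, counting each gram of blood as one millilitre, compared against the conventional 500 mL PPH threshold. A sketch with illustrative numbers and hypothetical function names:

```python
PPH_THRESHOLD_ML = 500  # conventional definition of post-partum haemorrhage

def weighed_blood_loss_ml(drained_ml, towels_post_g, towels_pre_g):
    """Drained volume plus towel weight gain, counting 1 g of blood as 1 mL."""
    return drained_ml + (towels_post_g - towels_pre_g)

loss = weighed_blood_loss_ml(drained_ml=300, towels_post_g=450, towels_pre_g=200)
print(loss, loss >= PPH_THRESHOLD_ML)  # → 550 True
```

Pre-weighing the towels is what makes the subtraction valid; everything else is the 1 g ≈ 1 mL approximation the study relies on.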

  5. Design and preliminary validation of a mobile application-based expert system to facilitate repair of medical equipment in resource-limited health settings

    Directory of Open Access Journals (Sweden)

    Wong AL

    2018-05-01

    Full Text Available Alison L Wong, Kelly M Lacob, Madeline G Wilson, Stacie M Zwolski, Soumyadipta Acharya (Center for Bioengineering, Innovation and Design, Johns Hopkins University, Baltimore, MD, USA; Division of Plastic Surgery, Dalhousie University, Halifax, NS, Canada). Background: One of the greatest barriers to safe surgery is the availability of functional biomedical equipment. Biomedical technicians play a major role in ensuring that equipment is functional. Following in-field observations and an online survey, a mobile application was developed to aid technicians in troubleshooting biomedical equipment. It was hypothesized that this application could be used to aid technicians in equipment repair, as modeled by repair of a pulse oximeter. Methods: To identify specific barriers to equipment repair and maintenance for biomedical technicians, an online survey was conducted to determine current practices and challenges. These findings were used to guide the development of a mobile application system that guides technicians through maintenance and repair tasks. A convenience sample of technicians in Ethiopia tested the application using a broken pulse oximeter task and following this completed usability and content validity surveys. Results: Fifty-three technicians from 13 countries responded to the initial survey. The results of the survey showed that technicians find equipment manuals most useful, but these are not easily accessible. Many do not know how to or are uncomfortable reaching out to human resources. Thirty-three technicians completed the broken pulse oximeter task using the application. All were able to appropriately identify and repair the equipment, and post-task surveys of usability and content validity demonstrated highly positive scores (Agree to Strongly Agree) on both scales. Discussion: This research demonstrates the need for improved access to resources for technicians and shows that a mobile application can be used to address a gap in

  6. Development of a validated liquid chromatographic method for quantification of sorafenib tosylate in the presence of stress-induced degradation products and in biological matrix employing analytical quality by design approach.

    Science.gov (United States)

    Sharma, Teenu; Khurana, Rajneet Kaur; Jain, Atul; Katare, O P; Singh, Bhupinder

    2018-05-01

    The current research work envisages an analytical quality by design-enabled development of a simple, rapid, sensitive, specific, robust and cost-effective stability-indicating reversed-phase high-performance liquid chromatographic method for determining stress-induced forced-degradation products of sorafenib tosylate (SFN). An Ishikawa fishbone diagram was constructed to embark upon analytical target profile and critical analytical attributes, i.e. peak area, theoretical plates, retention time and peak tailing. Factor screening using Taguchi orthogonal arrays and quality risk assessment studies carried out using failure mode effect analysis aided the selection of critical method parameters, i.e. mobile phase ratio and flow rate potentially affecting the chosen critical analytical attributes. Systematic optimization using response surface methodology of the chosen critical method parameters was carried out employing a two-factor-three-level-13-run, face-centered cubic design. A method operable design region was earmarked providing optimum method performance using numerical and graphical optimization. The optimum method employed a mobile phase composition consisting of acetonitrile and water (containing orthophosphoric acid, pH 4.1) at 65:35 v/v at a flow rate of 0.8 mL/min with UV detection at 265 nm using a C18 column. Response surface methodology validation studies confirmed good efficiency and sensitivity of the developed method for analysis of SFN in mobile phase as well as in human plasma matrix. The forced degradation studies were conducted under different recommended stress conditions as per ICH Q1A (R2). Mass spectroscopy studies showed that SFN degrades in strongly acidic, alkaline and oxidative hydrolytic conditions at elevated temperature, while the drug was per se found to be photostable. Oxidative hydrolysis using 30% H2O2 showed maximum degradation with products at retention times of 3.35, 3.65, 4.20 and 5.67 min. The absence of any

  7. Validation philosophy

    International Nuclear Information System (INIS)

    Vornehm, D.

    1994-01-01

    To determine when a set of calculations falls within the umbrella of existing validation documentation, it is necessary to generate a quantitative definition of range of applicability (our definition is only qualitative) for two reasons: (1) the current trend in our regulatory environment will soon make it impossible to support the legitimacy of a validation without quantitative guidelines; and (2) in my opinion, the lack of support by DOE for further critical experiment work is directly tied to our inability to draw a quantitative "line in the sand" beyond which we will not use computer-generated values

  8. A systematic review and meta-analysis of the criterion validity of nutrition assessment tools for diagnosing protein-energy malnutrition in the older community setting (the MACRo study).

    Science.gov (United States)

    Marshall, Skye; Craven, Dana; Kelly, Jaimon; Isenring, Elizabeth

    2017-10-12

    Malnutrition is a significant barrier to healthy and independent ageing in older adults who live in their own homes, and accurate diagnosis is a key step in managing the condition. However, there has not been sufficient systematic review or pooling of existing data regarding malnutrition diagnosis in the geriatric community setting. The current paper was conducted as part of the MACRo (Malnutrition in the Ageing Community Review) Study and seeks to determine the criterion (concurrent and predictive) validity and reliability of nutrition assessment tools in making a diagnosis of protein-energy malnutrition in the general older adult community. A systematic literature review was undertaken using six electronic databases in September 2016. Studies in any language were included which measured malnutrition via a nutrition assessment tool in adults ≥65 years living in their own homes. Data relating to the predictive validity of tools were analysed via meta-analyses. GRADE was used to evaluate the body of evidence. There were 6412 records identified, of which 104 potentially eligible records were screened via full text. Eight papers were included; two which evaluated the concurrent validity of the Mini Nutritional Assessment (MNA) and Subjective Global Assessment (SGA) and six which evaluated the predictive validity of the MNA. The quality of the body of evidence for the concurrent validity of both the MNA and SGA was very low. The quality of the body of evidence for the predictive validity of the MNA in detecting risk of death was moderate (RR: 1.92 [95% CI: 1.55-2.39]; P < 0.00001; n = 2013 participants; n = 4 studies; I²: 0%). The quality of the body of evidence for the predictive validity of the MNA in detecting risk of poor physical function was very low (SMD: 1.02 [95% CI: 0.24-1.80]; P = 0.01; n = 4046 participants; n = 3 studies; I²: 89%). Due to the small number of studies identified and no evaluation of the predictive validity of tools other than
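The pooled estimates above (e.g. RR 1.92 with I² of 0%) come from standard inverse-variance meta-analysis on the log scale. A minimal sketch of fixed-effect pooling of risk ratios together with Cochran's Q and the I² heterogeneity statistic, assuming each study reports only an RR and its 95% CI:

```python
import math

def pooled_rr(studies):
    """Fixed-effect inverse-variance pooling of log risk ratios.
    studies: list of (rr, ci_low, ci_high); the SE of each log RR is
    recovered from the width of its 95% confidence interval."""
    logs, weights = [], []
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        logs.append(math.log(rr))
        weights.append(1.0 / se ** 2)
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    # Cochran's Q and I^2 = max(0, (Q - df) / Q)
    q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.exp(pooled), i2
```

With identical study estimates, the pooled RR reproduces the common value and I² is 0%, matching the homogeneous case reported above.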

  9. Analysis of Item-Level Bias in the Bayley-III Language Subscales: The Validity and Utility of Standardized Language Assessment in a Multilingual Setting.

    Science.gov (United States)

    Goh, Shaun K Y; Tham, Elaine K H; Magiati, Iliana; Sim, Litwee; Sanmugam, Shamini; Qiu, Anqi; Daniel, Mary L; Broekman, Birit F P; Rifkin-Graboi, Anne

    2017-09-18

    The purpose of this study was to improve standardized language assessments among bilingual toddlers by investigating and removing the effects of bias due to unfamiliarity with cultural norms or a distributed language system. The Expressive and Receptive Bayley-III language scales were adapted for use in a multilingual country (Singapore). Differential item functioning (DIF) was applied to data from 459 two-year-olds without atypical language development. This involved investigating if the probability of success on each item varied according to language exposure while holding latent language ability, gender, and socioeconomic status constant. Associations with language, behavioral, and emotional problems were also examined. Five of 16 items showed DIF, 1 of which may be attributed to cultural bias and another to a distributed language system. The remaining 3 items favored toddlers with higher bilingual exposure. Removal of DIF items reduced associations between language scales and emotional and language problems, but improved the validity of the expressive scale from poor to good. Our findings indicate the importance of considering cultural and distributed language bias in standardized language assessments. We discuss possible mechanisms influencing performance on items favoring bilingual exposure, including the potential role of inhibitory processing.

  10. Development and validation of an observation tool for the assessment of nursing pain management practices in intensive care unit in a standardized clinical simulation setting.

    Science.gov (United States)

    Gosselin, Emilie; Bourgault, Patricia; Lavoie, Stephan; Coleman, Robin-Marie; Méziat-Burdin, Anne

    2014-12-01

    Pain management in the intensive care unit is often inadequate. There is no tool available to assess nursing pain management practices. The aim of this study was to develop and validate a measuring tool to assess nursing pain management in the intensive care unit during standardized clinical simulation. A literature review was performed to identify relevant components demonstrating optimal pain management in adult intensive care units and to integrate them in an observation tool. This tool was submitted to an expert panel and pretested. It was then used to assess pain management practice during 26 discrete standardized clinical simulation sessions with intensive care nurses. The Nursing Observation Tool for Pain Management (NOTPaM) contains 28 statements grouped into 8 categories, which are grouped into 4 dimensions: subjective assessment, objective assessment, interventions, and reassessment. The tool's internal consistency was calculated at a Cronbach's alpha of 0.436 for the whole tool; the alpha varies from 0.328 to 0.518 for each dimension. To evaluate the inter-rater reliability, intra-class correlation coefficient was used, which was calculated at 0.751 (p nurses' pain management in a standardized clinical simulation. The NOTPaM is the first tool created for this purpose. Copyright © 2014 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
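Cronbach's alpha, reported above as 0.436 for the whole tool, measures internal consistency from the item variances and the variance of the total score. A minimal sketch of the computation (illustrative data, not the NOTPaM items):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / var(total score)).
    items: list of k item-score lists, one list per item, same respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))
```

Perfectly covarying items give alpha = 1; an item with no variance contributes nothing and drags alpha toward 0, which is one reason short, heterogeneous subscales (like the 0.328-0.518 dimensions above) score low.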

  11. Hypogeal geological survey in the "Grotta del Re Tiberio" natural cave (Apennines, Italy): a valid tool for reconstructing the structural setting

    Science.gov (United States)

    Ghiselli, Alice; Merazzi, Marzio; Strini, Andrea; Margutti, Roberto; Mercuriali, Michele

    2011-06-01

    As karst systems are natural windows to the underground, speleology, combined with geological surveys, can be useful tools for helping understand the geological evolution of karst areas. In order to enhance the reconstruction of the structural setting in a gypsum karst area (Vena del Gesso, Romagna Apennines), a detailed analysis has been carried out on hypogeal data. Structural features (faults, fractures, tectonic foliations, bedding) have been mapped in the "Grotta del Re Tiberio" cave, in the nearby gypsum quarry tunnels and open pit benches. Five fracture systems and six fault systems have been identified. The fault systems have been further analyzed through stereographic projections and geometric-kinematic evaluations in order to reconstruct the relative chronology of these structures. This analysis led to the detection of two deformation phases. The results permitted linking of the hypogeal data with the surface data both at a local and regional scale. At the local scale, fracture data collected in the underground have been compared with previous authors' surface data coming from the quarry area. The two data sets show a very good correspondence, as every underground fracture system matches with one of the surface fracture system. Moreover, in the cave, a larger number of fractures belonging to each system could be mapped. At the regional scale, the two deformation phases detected can be integrated in the structural setting of the study area, thereby enhancing the tectonic interpretation of the area ( e.g., structures belonging to a new deformation phase, not reported before, have been identified underground). The structural detailed hypogeal survey has, thus, provided very useful data, both by integrating the existing information and revealing new data not detected at the surface. 
In particular, some small structures ( e.g., displacement markers and short fractures) are better preserved in the hypogeal environment than on the surface where the outcropping

  12. Multiple graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-10-01

    Non-negative matrix factorization (NMF) has been widely used as a data representation method based on components. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) has been proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. The factorization matrices and the linear combination coefficients of the graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, thus resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
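A hedged sketch of the single-graph GrNMF baseline that MultiGrNMF builds on (multiplicative updates in the style of Cai et al.; learning a combination of several graphs, the paper's actual contribution, is not reproduced here):

```python
import numpy as np

def gnmf(X, W, k, lam=0.1, iters=200, seed=0):
    """Graph-regularized NMF sketch: X (m x n, nonnegative) ~ U @ V.T,
    with a penalty lam * tr(V.T @ L @ V), L = D - W the Laplacian of the
    sample affinity graph W (n x n). Multiplicative updates keep U, V >= 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(W.sum(axis=1))
    eps = 1e-10  # avoids division by zero in the update ratios
    for _ in range(iters):
        U *= (X @ V) / (U @ V.T @ V + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ U.T @ U + lam * D @ V + eps)
    return U, V
```

The graph term pulls the low-dimensional codes of neighboring samples together; MultiGrNMF would replace the fixed W by a learned convex combination of several candidate graphs.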

  13. Validation of a non-uniform meshing algorithm for the 3D-FDTD method by means of a two-wire crosstalk experimental set-up

    Directory of Open Access Journals (Sweden)

    Raúl Esteban Jiménez-Mejía

    2015-06-01

    Full Text Available This paper presents an algorithm used to automatically mesh a 3D computational domain in order to solve electromagnetic interaction scenarios by means of the Finite-Difference Time-Domain (FDTD) method. The proposed algorithm has been formulated in a general mathematical form, where convenient spacing functions can be defined for the problem space discretization, allowing the inclusion of small-sized objects in the FDTD method and the calculation of detailed variations of the electromagnetic field at specified regions of the computation domain. The results obtained by using the FDTD method with the proposed algorithm have been contrasted not only with a typical uniform mesh algorithm, but also with experimental measurements for a two-wire crosstalk set-up, leading to excellent agreement between theoretical and experimental waveforms. A discussion about the advantages of the non-uniform mesh over the uniform one is also presented.
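One common family of spacing functions for such non-uniform meshes grows the cell size geometrically away from a finely resolved region (e.g. around a thin wire). A minimal 1D sketch, not the paper's algorithm; the growth-ratio cap of 1.2 is a widely used FDTD grading guideline, assumed here:

```python
def graded_mesh(x0, x1, d0, ratio=1.2):
    """Non-uniform 1D node placement: the first cell has size d0 and each
    subsequent cell grows by `ratio` until the interval [x0, x1] is covered.
    Keeping ratio <= ~1.2 limits numerical reflections between cells."""
    nodes, x, d = [x0], x0, d0
    while x + d < x1:
        x += d
        nodes.append(x)
        d *= ratio          # geometric grading of the cell size
    nodes.append(x1)        # snap the final node to the domain boundary
    return nodes
```

A full 3D mesher would apply such a spacing function per axis, refining near small objects and coarsening toward the domain boundaries.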

  14. Exploiting biospectroscopy as a novel screening tool for cervical cancer: towards a framework to validate its accuracy in a routine clinical setting.

    LENUS (Irish Health Repository)

    Purandare, Nikhil C

    2013-11-01

    Biospectroscopy is an emerging field that harnesses the platform of physical sciences with computational analysis in order to shed novel insights on biological questions. An area where this approach seems to have potential is in screening or diagnostic clinical settings, where there is an urgent need for new approaches to objectively interrogate large numbers of samples with acceptable levels of sensitivity and specificity. This review outlines the benefits of biospectroscopy in screening for precancer lesions of the cervix due to its ability to separate different grades of dysplasia. It evaluates the feasibility of introducing this technique into cervical screening programs on the basis of its ability to identify biomarkers of progression within derived spectra ('biochemical-cell fingerprints').

  15. Development and validation of a Hospital Frailty Risk Score focusing on older people in acute care settings using electronic hospital records: an observational study.

    Science.gov (United States)

    Gilbert, Thomas; Neuburger, Jenny; Kraindler, Joshua; Keeble, Eilis; Smith, Paul; Ariti, Cono; Arora, Sandeepa; Street, Andrew; Parker, Stuart; Roberts, Helen C; Bardsley, Martin; Conroy, Simon

    2018-05-05

    Older people are increasing users of health care globally. We aimed to establish whether older people with characteristics of frailty and who are at risk of adverse health-care outcomes could be identified using routinely collected data. A three-step approach was used to develop and validate a Hospital Frailty Risk Score from International Statistical Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10) diagnostic codes. First, we carried out a cluster analysis to identify a group of older people (≥75 years) admitted to hospital who had high resource use and diagnoses associated with frailty. Second, we created a Hospital Frailty Risk Score based on ICD-10 codes that characterised this group. Third, in separate cohorts, we tested how well the score predicted adverse outcomes and whether it identified similar groups as other frailty tools. In the development cohort (n=22 139), older people with frailty diagnoses formed a distinct group and had higher non-elective hospital use (33·6 bed-days over 2 years compared with 23·0 bed-days for the group with the next highest number of bed-days). In the national validation cohort (n=1 013 590), compared with the 429 762 (42·4%) patients with the lowest risk scores, the 202 718 (20·0%) patients with the highest Hospital Frailty Risk Scores had increased odds of 30-day mortality (odds ratio 1·71, 95% CI 1·68-1·75), long hospital stay (6·03, 5·92-6·10), and 30-day readmission (1·48, 1·46-1·50). The c statistics (ie, model discrimination) between individuals for these three outcomes were 0·60, 0·68, and 0·56, respectively. The Hospital Frailty Risk Score showed fair overlap with dichotomised Fried and Rockwood scales (kappa scores 0·22, 95% CI 0·15-0·30 and 0·30, 0·22-0·38, respectively) and moderate agreement with the Rockwood Frailty Index (Pearson's correlation coefficient 0·41, 95% CI 0·38-0·47). The Hospital Frailty Risk Score provides hospitals and health
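The odds ratios quoted above (e.g. 1.71 for 30-day mortality) follow the standard 2x2-table computation with a log-scale Wald confidence interval. A minimal sketch with illustrative counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = outcome yes/no in the high-risk group;
    c, d = outcome yes/no in the low-risk group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

The c statistics reported alongside (0.56-0.68) would additionally require per-patient predicted risks, which the pooled table alone cannot provide.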

  16. Detailed data set of a large administration building as a validation model for DIN V18599-software; Umfangreicher Validierungsdatensatz eines grossen Verwaltungsgebaeudes fuer Software zur DIN V 18599

    Energy Technology Data Exchange (ETDEWEB)

    Hoettges, Kirsten; Woessner, Simon; de Boer, Jan; Erhorn, Hans [Fraunhofer-Institut fuer Bauphysik (IBP), Abt. Energiesysteme, Kassel (Germany); Fraunhofer-Institut fuer Bauphysik (IBP), Stuttgart (Germany)

    2009-04-15

    The calculation method of DIN V 18599 represents a very complex model for the estimation of the energy efficiency of buildings. The method is also used for energy certificates; thus the number of users is quite high. This fact and the complexity of the method place high demands on the related software products. Most end-user software tools work with the calculation engine ibp18599kernel developed by the Fraunhofer Institute for Building Physics. There is continuous quality control for both the kernel and the user interfaces, i.e. the end-user software. This paper gives an overview of the quality-control process as well as documentation of a validation model used within this process, i.e. a complex administration building as a sample. (Abstract Copyright [2009], Wiley Periodicals, Inc.) [German] The calculation method of DIN V 18599 provides a comprehensive calculation model for the energetic assessment of buildings. The large scope of the procedures also places high demands on software implementations for planning practice. For this purpose the Fraunhofer Institute for Building Physics has developed the calculation kernel ibp18599kernel, which is meanwhile used by numerous software houses in end-user programs for energetic assessment and the issuing of energy certificates. This 'calculation engine' is subject to constant quality control to ensure calculation accuracy. The present article presents the quality-assurance process using a new validation example in the form of a complex administration building. The case study can also be used to validate other calculation systems not based on this kernel. (Abstract Copyright [2009], Wiley Periodicals, Inc.)

  17. SU-D-202-04: Validation of Deformable Image Registration Algorithms for Head and Neck Adaptive Radiotherapy in Routine Clinical Setting

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, L; Pi, Y; Chen, Z; Xu, X [University of Science and Technology of China, Hefei, Anhui (China); Wang, Z [University of Science and Technology of China, Hefei, Anhui (China); The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui (China); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Long, T; Luo, W; Wang, F [The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui (China)

    2016-06-15

    Purpose: To evaluate the ROI contours and accumulated dose difference using different deformable image registration (DIR) algorithms for head and neck (H&N) adaptive radiotherapy. Methods: Eight H&N cancer patients were randomly selected from the affiliated hospital. During the treatment, patients were rescanned every week with ROIs delineated by a radiation oncologist on each weekly CT. New weekly treatment plans were also re-designed with consistent dose prescription on the rescanned CT and executed for one week on a Siemens CT-on-rails accelerator. At the end, we had six weekly CT scans (CT1 to CT6) and six weekly treatment plans for each patient. The primary CT1 was set as the reference CT for DIR, proceeding with the remaining five weekly CTs using the ANACONDA and MORFEUS algorithms separately in RayStation, with the external skin ROI set as the controlling ROI for both. The calculated weekly doses were deformed and accumulated on the corresponding reference CT1 according to the deformation vector fields (DVFs) generated by the two DIR algorithms respectively. Thus we obtained both the ANACONDA-based and MORFEUS-based accumulated total dose on CT1 for each patient. At the same time, we mapped the ROIs on CT1 to generate the corresponding ROIs on CT6 using the ANACONDA and MORFEUS DIR algorithms. DICE coefficients between the DIR-deformed and radiation-oncologist-delineated ROIs on CT6 were calculated. Results: For DIR accumulated dose, PTV D95 and Left-Eyeball Dmax show significant differences of 67.13 cGy and 109.29 cGy respectively (Table 1). For DIR mapped ROIs, PTV, Spinal cord and Left-Optic nerve show differences of −0.025, −0.127 and −0.124 (Table 2). Conclusion: Even two excellent DIR algorithms can give divergent results for ROI deformation and dose accumulation. As more and more TPSs integrate DIR modules, there is an urgent need to recognize the potential risks of using DIR clinically.
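The DICE coefficient used above is a straightforward overlap measure between two contours: twice the intersection divided by the sum of the two volumes. A minimal sketch on flat binary masks:

```python
def dice(mask_a, mask_b):
    """DICE coefficient between two binary masks of equal length:
    2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none."""
    inter = sum(1 for x, y in zip(mask_a, mask_b) if x and y)
    return 2 * inter / (sum(mask_a) + sum(mask_b))
```

In practice the masks would be voxelized ROIs rasterized on the CT6 grid, compared between the DIR-propagated and manually delineated contours.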

  18. Video self-modeling in children with autism: a pilot study validating prerequisite skills and extending the utilization of VSM across skill sets.

    Science.gov (United States)

    Williamson, Robert L; Casey, Laura B; Robertson, Janna Siegel; Buggey, Tom

    2013-01-01

    Given the recent interest in the use of video self-modeling (VSM) to provide instruction within iPod apps and other pieces of handheld mobile assistive technologies, investigating appropriate prerequisite skills for effective use of this intervention is particularly timely and relevant. To provide additional information regarding the efficacy of VSM for students with autism and to provide insights into any possible prerequisite skills students may require for such efficacy, the authors investigated the use of VSM in increasing the instances of effective initiations of interpersonal greetings for three students with autism that exhibited different pre-intervention abilities. Results showed that only one of the three participants showed an increase in self-initiated greetings following the viewing of videos edited to show each participant self-modeling a greeting when entering his or her classroom. Due to the differences in initial skill sets between the three children, this finding supports anecdotally observed student prerequisite abilities mentioned in previous studies that may be required to effectively utilize video based teaching methods.

  19. Identifying common impairments in frail and dependent older people: validation of the COPE assessment for non-specialised health workers in low resource primary health care settings.

    Science.gov (United States)

    A T, Jotheeswaran; Dias, Amit; Philp, Ian; Beard, John; Patel, Vikram; Prince, Martin

    2015-10-14

    Frail and dependent older people in resource-poor settings are poorly served by health systems that lack outreach capacity. The COPE (Caring for Older PEople) multidimensional assessment tool is designed to help community health workers (CHWs) identify clinically significant impairments and deliver evidence-based interventions. Older people (n = 150) identified by CHWs as frail or dependent were assessed at home by the CHW using the structured COPE assessment tool, generating information on impairments in nutrition, mobility, vision, hearing, continence, cognition, mood and behaviour. The older people were reassessed by local physicians who reached a clinical judgment regarding the presence or absence of the same impairments based upon clinical examination guided by the EASY-Care assessment tool. The COPE tool was considered easy to administer, and gave CHWs a sense of empowerment to understand and act upon the needs of older people. Agreement between COPE assessment by CHW and clinician assessors was modest (ranging from 45.8 to 91.3%) for most impairments. However, the prevalence of impairments was generally higher according to clinicians, particularly for visual impairment (98.7 vs 45.8%), cognitive impairment (78.4 vs. 38.2%) and depression (82.0 vs. 59.9%). Most cases identified by WHO-COPE were clinician confirmed (positive predictive values: 72.2 to 98.5%), and levels of disability and needs for care among those identified by COPE were higher than those additionally identified by the clinician alone. The COPE is a feasible tool for the identification of specific impairments in frail dependent older people in the community. Those identified are likely to be confirmed as having clinically relevant problems by clinicians working in the same service, and the COPE may be particularly effective at targeting attention upon those with the most substantial unmet needs.
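Positive predictive value, together with sensitivity and specificity, follows directly from the screening counts against the clinician reference standard. A minimal sketch with illustrative counts, not the COPE data:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value from a
    screening 2x2 table against a reference (clinician) diagnosis."""
    sens = tp / (tp + fn)   # proportion of true cases the tool flags
    spec = tn / (tn + fp)   # proportion of non-cases the tool clears
    ppv = tp / (tp + fp)    # proportion of flagged cases that are real
    return sens, spec, ppv
```

PPV, the quantity emphasized above (72.2 to 98.5%), depends on the prevalence of each impairment as well as on the tool's intrinsic accuracy, which is why it varies so widely across impairments.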

  20. Validation of the Pockit Dengue Virus Reagent Set for Rapid Detection of Dengue Virus in Human Serum on a Field-Deployable PCR System.

    Science.gov (United States)

    Tsai, Jih-Jin; Liu, Li-Teh; Lin, Ping-Chang; Tsai, Ching-Yi; Chou, Pin-Hsing; Tsai, Yun-Long; Chang, Hsiao-Fen Grace; Lee, Pei-Yu Alison

    2018-05-01

    Dengue virus (DENV) infection, a mosquito-borne disease, is a major public health problem in tropical countries. Point-of-care DENV detection with good sensitivity and specificity enables timely early diagnosis of DENV infection, facilitating effective disease management and control, particularly in regions of low resources. The Pockit dengue virus reagent set (GeneReach Biotech), a reverse transcription insulated isothermal PCR (RT-iiPCR), is available to detect all four serotypes of DENV on the field-deployable Pockit system, which is ready for on-site applications. In this study, analytical and clinical performances of the assay were evaluated. The index assay did not react with 14 non-DENV human viruses, indicating good specificity. Compared to the U.S. CDC DENV-1-4 real-time quantitative RT-PCR (qRT-PCR) assay, testing with serial dilutions of virus-spiked human sera demonstrated that the index assay had detection endpoints comparable for each of the 4 serotypes. Excellent reproducibility was observed among repeat tests done by six operators at three sites. In clinical performance, 195 clinical sera collected around Kaohsiung city in 2012 and 21 DENV-4-spiked sera were tested with the RT-iiPCR and qRT-PCR assays in parallel. The 121 (11 DENV-1, 78 DENV-2, 11 DENV-3, and 21 DENV-4) qRT-PCR-positive and 95 qRT-PCR-negative samples were all positive and negative, respectively, by the RT-iiPCR assay, demonstrating high (100%) interrater agreement (95% confidence interval [CI95%], ~98.81% to 100%; κ = 1). With analytical and clinical performance equivalent to those of the reference qRT-PCR assay, the index PCR assay on the field-deployable system can serve as a highly sensitive and specific on-site tool for DENV detection. Copyright © 2018 American Society for Microbiology.
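The κ = 1 reported above for 100% interrater agreement is consistent with Cohen's kappa, which corrects raw agreement for the agreement expected by chance from the two raters' marginal frequencies. A minimal two-rater sketch:

```python
def cohens_kappa(ratings):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e), where p_o is
    observed agreement and p_e is chance agreement from the marginals.
    ratings: list of (rater1_label, rater2_label) pairs."""
    n = len(ratings)
    labels = {l for pair in ratings for l in pair}
    po = sum(a == b for a, b in ratings) / n
    pe = sum(
        (sum(a == l for a, _ in ratings) / n) * (sum(b == l for _, b in ratings) / n)
        for l in labels
    )
    return (po - pe) / (1 - pe) if pe != 1 else 1.0
```

Here "raters" are the two assays (RT-iiPCR vs. qRT-PCR); perfect concordance on both positive and negative calls yields κ = 1 exactly.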

  1. Predictive validity of a service-setting-based measure to identify infancy mental health problems: a population-based cohort study.

    Science.gov (United States)

    Ammitzbøll, Janni; Thygesen, Lau Caspar; Holstein, Bjørn E; Andersen, Anette; Skovgaard, Anne Mette

    2018-06-01

    Measures to identify infancy mental health problems are essential to guide interventions and reduce the risk of developmental psychopathology in early years. We investigated a new service-setting-based measure the Copenhagen Infant Mental Health Screening (CIMHS) within the general child health surveillance by community health nurses (CHN). The study population of 2973 infants was assessed by CIMHS at age 9-10 months. A subsample of 416 children was examined at age 1½ years, using parent interviews including the Child Behavior Checklist (CBCL 1½-5), Check List of Autism and Toddlers (CHAT), Infant-Toddler Symptom Checklist (ITSCL), and the Bayley Scales of Infant and Toddler Development (BSID) and observations of behavior, communication, and interaction. Child mental disorders were diagnosed according to ICD-10 and parent-child relationship disorders according to DC:0-3R. Statistical analyses included logistic regression analyses adjusted and weighted to adjust for sampling and bias. CIMHS problems of sleep, feeding and eating, emotions, attention, communication, and language were associated with an up to fivefold increased risk of child mental disorders across the diagnostic spectrum of ICD-10 diagnoses. Homo-type continuity was seen in problems of sleep and feeding and eating being associated with a threefold increased risk of disorders within the same area, OR 3.0 (95% CI 1.6-5.4) and OR 2.7 (95% CI 1.7-4.2), respectively. The sensitivity at high CIMHS problem scores was 32% and specificity 86%. In summary, CIMHS identify a broad range of infants' mental health problems that are amenable to guide intervention within the general child health surveillance.

  2. The cellulose resource matrix.

    Science.gov (United States)

    Keijsers, Edwin R P; Yılmaz, Gülden; van Dam, Jan E G

    2013-03-01

    feedstock and the performance in the end-application. The cellulose resource matrix should become a practical tool for stakeholders to make choices regarding raw materials, process or market. Although there is a vast amount of scientific and economic information available on cellulose and lignocellulosic resources, the accessibility for the interested layman or entrepreneur is very difficult and the relevance of the numerous details in the larger context is limited. Translation of science to practical accessible information with modern data management and data integration tools is a challenge. Therefore, a detailed matrix structure was composed in which the different elements or entries of the matrix were identified and a tentative rough set up was made. The inventory includes current commodities and new cellulose containing and raw materials as well as exotic sources and specialties. Important chemical and physical properties of the different raw materials were identified for the use in processes and products. When available, the market data such as price and availability were recorded. Established and innovative cellulose extraction and refining processes were reviewed. The demands on the raw material for suitable processing were collected. Processing parameters known to affect the cellulose properties were listed. Current and expected emerging markets were surveyed as well as their different demands on cellulose raw materials and processes. The setting up of the cellulose matrix as a practical tool requires two steps. Firstly, the reduction of the needed data by clustering of the characteristics of raw materials, processes and markets and secondly, the building of a database that can provide the answers to the questions from stakeholders with an indicative character. This paper describes the steps taken to achieve the defined clusters of most relevant and characteristic properties. These data can be expanded where required. 
More detailed specification can be obtained

  3. GB Diet matrix as informed by EMAX

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set was taken from CRD 08-18 at the NEFSC. Specifically, the Georges Bank diet matrix was developed for the EMAX exercise described in that center...

  4. Efficiency criterion for teleportation via channel matrix, measurement matrix and collapsed matrix

    Directory of Open Access Journals (Sweden)

    Xin-Wei Zha

    Full Text Available In this paper, three kinds of coefficient matrices (channel matrix, measurement matrix, collapsed matrix) associated with the pure state for teleportation are presented, and the general relation among channel matrix, measurement matrix and collapsed matrix is obtained. In addition, a criterion for judging whether a state can be teleported successfully is given, depending on the relation between the number of parameters of an unknown state and the rank of the collapsed matrix. Keywords: Channel matrix, Measurement matrix, Collapsed matrix, Teleportation

  5. Extended biorthogonal matrix polynomials

    Directory of Open Access Journals (Sweden)

    Ayman Shehata

    2017-01-01

    Full Text Available The pair of biorthogonal matrix polynomials for commutative matrices was first introduced by Varma and Tasdelen in [22]. The main aim of this paper is to extend the properties of the pair of biorthogonal matrix polynomials of Varma and Tasdelen: certain generating matrix functions, finite series, some matrix recurrence relations, several important properties of matrix differential recurrence relations, biorthogonality relations and the matrix differential equation for the pair of biorthogonal matrix polynomials J_n^(A,B)(x, k) and K_n^(A,B)(x, k) are discussed. For the matrix polynomials J_n^(A,B)(x, k), various families of bilinear and bilateral generating matrix functions are constructed in the sequel.

  6. Matrix completion by deep matrix factorization.

    Science.gov (United States)

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion but there still exists considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is on the basis of a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods do and DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
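A hedged sketch of the DMF idea described above: a small network maps learned low-dimensional latent variables to the observed columns, and the latent inputs and weights are fit jointly on the observed entries only. This is an illustrative one-hidden-layer version; the authors' exact architecture, depth and optimizer are not specified here:

```python
import numpy as np

def dmf_complete(X, M, d=2, h=8, lr=0.02, iters=3000, seed=0):
    """Nonlinear matrix completion sketch: model X (n x m) with mask M
    (1 = observed) as tanh(Z @ W1 + b1) @ W2 + b2, optimizing the latent
    rows Z and the weights by gradient descent on observed entries only.
    Missing entries are then read off the network output."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    Z = 0.1 * rng.standard_normal((n, d))          # latent variable per row
    W1 = 0.1 * rng.standard_normal((d, h)); b1 = np.zeros(h)
    W2 = 0.1 * rng.standard_normal((h, m)); b2 = np.zeros(m)
    for _ in range(iters):
        H = np.tanh(Z @ W1 + b1)
        G = M * (H @ W2 + b2 - X)                  # error on observed entries only
        dW2 = H.T @ G; db2 = G.sum(0)
        dA = (G @ W2.T) * (1 - H ** 2)             # backprop through tanh
        dW1 = Z.T @ dA; db1 = dA.sum(0)
        dZ = dA @ W1.T
        for p, g in ((Z, dZ), (W1, dW1), (W2, dW2)):
            p -= lr * g
        b1 -= lr * db1; b2 -= lr * db2
    return np.tanh(Z @ W1 + b1) @ W2 + b2          # completed matrix
```

Because the decoder is nonlinear in Z, the model can recover structure that a linear factorization of the same latent dimension cannot, which is the paper's central argument.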

  7. Texture zeros in neutrino mass matrix

    Energy Technology Data Exchange (ETDEWEB)

    Dziewit, B., E-mail: bartosz.dziewit@us.edu.pl; Holeczek, J., E-mail: jacek.holeczek@us.edu.pl; Richter, M., E-mail: monikarichter18@gmail.com [University of Silesia, Institute of Physics (Poland); Zajac, S., E-mail: s.zajac@uksw.edu.pl [Cardinal Stefan Wyszyński University in Warsaw, Faculty of Mathematics and Natural Studies (Poland); Zralek, M., E-mail: marek.zralek@us.edu.pl [University of Silesia, Institute of Physics (Poland)

    2017-03-15

    The Standard Model does not explain the hierarchy problem. Before the discovery of the nonzero lepton mixing angle θ{sub 13}, high hopes of explaining the shape of the lepton mixing matrix were pinned on non-Abelian symmetries. Nowadays, assuming one Higgs doublet, it is unlikely that this approach is still valid. Texture zeros, which are combined with Abelian symmetries, are intensively studied. The neutrino mass matrix is a natural way to study such symmetries.

  8. Convex nonnegative matrix factorization with manifold regularization.

    Science.gov (United States)

    Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong

    2015-03-01

    Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
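For reference, the plain NMF that GCNMF builds on can be sketched with the standard Lee–Seung multiplicative updates, which keep both factors nonnegative by construction (the convex reformulation and graph-regularized term of GCNMF are not shown; data and dimensions are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((30, 20))          # nonnegative data matrix
k = 5                             # number of basis vectors

W = rng.random((30, k)) + 0.1     # basis matrix
H = rng.random((k, 20)) + 0.1     # encoding matrix
eps = 1e-9                        # guards against division by zero

err_start = np.linalg.norm(X - W @ H)
for _ in range(200):
    # Lee-Seung multiplicative updates: entries stay nonnegative because
    # each factor is multiplied by a ratio of nonnegative quantities.
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)
err_end = np.linalg.norm(X - W @ H)
```

Each update is guaranteed not to increase the Frobenius reconstruction error, which is why this simple scheme is the usual starting point for NMF variants.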

  9. Health system context and implementation of evidence-based practices-development and validation of the Context Assessment for Community Health (COACH) tool for low- and middle-income settings.

    Science.gov (United States)

    Bergström, Anna; Skeen, Sarah; Duc, Duong M; Blandon, Elmer Zelaya; Estabrooks, Carole; Gustavsson, Petter; Hoa, Dinh Thi Phuong; Källestål, Carina; Målqvist, Mats; Nga, Nguyen Thu; Persson, Lars-Åke; Pervin, Jesmin; Peterson, Stefan; Rahman, Anisur; Selling, Katarina; Squires, Janet E; Tomlinson, Mark; Waiswa, Peter; Wallin, Lars

    2015-08-15

    The gap between what is known and what is practiced results in health service users not benefitting from advances in healthcare, and in unnecessary costs. A supportive context is considered a key element for successful implementation of evidence-based practices (EBP). There were no tools available for the systematic mapping of aspects of organizational context influencing the implementation of EBPs in low- and middle-income countries (LMICs). Thus, this project aimed to develop and psychometrically validate a tool for this purpose. The development of the Context Assessment for Community Health (COACH) tool was premised on the context dimension in the Promoting Action on Research Implementation in Health Services framework, and is a derivative product of the Alberta Context Tool. Its development was undertaken in Bangladesh, Vietnam, Uganda, South Africa and Nicaragua in six phases: (1) defining dimensions and draft tool development, (2) content validity amongst in-country expert panels, (3) content validity amongst international experts, (4) response process validity, (5) translation and (6) evaluation of psychometric properties amongst 690 health workers in the five countries. The tool was validated for use amongst physicians, nurse/midwives and community health workers. The six phases of development resulted in a good fit between the theoretical dimensions of the COACH tool and its psychometric properties. The tool has 49 items measuring eight aspects of context: Resources, Community engagement, Commitment to work, Informal payment, Leadership, Work culture, Monitoring services for action and Sources of knowledge. Aspects of organizational context that were identified as influencing the implementation of EBPs in high-income settings were also found to be relevant in LMICs. However, there were additional aspects of context of relevance in LMICs, specifically Resources, Community engagement, Commitment to work and Informal payment. Use of the COACH tool will allow

  10. Novel image analysis methods for quantification of in situ 3-D tendon cell and matrix strain.

    Science.gov (United States)

    Fung, Ashley K; Paredes, J J; Andarawis-Puri, Nelly

    2018-01-23

    Macroscopic tendon loads modulate the cellular microenvironment leading to biological outcomes such as degeneration or repair. Previous studies have shown that damage accumulation and the phases of tendon healing are marked by significant changes in the extracellular matrix, but it remains unknown how mechanical forces of the extracellular matrix are translated to mechanotransduction pathways that ultimately drive the biological response. Our overarching hypothesis is that the unique relationship between extracellular matrix strain and cell deformation will dictate biological outcomes, prompting the need for quantitative methods to characterize the local strain environment. While 2-D methods have successfully calculated matrix strain and cell deformation, 3-D methods are necessary to capture the increased complexity that can arise due to high levels of anisotropy and out-of-plane motion, particularly in the disorganized, highly cellular, injured state. In this study, we validated the use of digital volume correlation methods to quantify 3-D matrix strain using images of naïve tendon cells, the collagen fiber matrix, and injured tendon cells. Additionally, naïve tendon cell images were used to develop novel methods for 3-D cell deformation and 3-D cell-matrix strain, which is defined as a quantitative measure of the relationship between matrix strain and cell deformation. The results support that these methods can be used to detect strains with high accuracy and can be further extended to an in vivo setting for observing temporal changes in cell and matrix mechanics during degeneration and healing. Copyright © 2017. Published by Elsevier Ltd.

  11. NLTE steady-state response matrix method.

    Science.gov (United States)

    Faussurier, G.; More, R. M.

    2000-05-01

    A connection between atomic kinetics and non-equilibrium thermodynamics has recently been established by using a collisional-radiative model modified to include line absorption. The calculated net emission can be expressed as a non-local thermodynamic equilibrium (NLTE) symmetric response matrix. In this paper, the connection is extended to both the average-atom model and Busquet's model (RAdiative-Dependent IOnization Model, RADIOM). The main properties of the response matrix remain valid. The RADIOM source function found in the literature leads to a diagonal response matrix, stressing the absence of any frequency redistribution among the frequency groups at this order of calculation.

  12. The Matrix Cookbook

    DEFF Research Database (Denmark)

    Petersen, Kaare Brandt; Pedersen, Michael Syskind

    Matrix identities, relations and approximations. A desktop reference for quick overview of mathematics of matrices.

  13. Birth Settings and the Validation of Neonatal Seizures Recorded in Birth Certificates Compared to Medicaid Claims and Hospital Discharge Abstracts Among Live Births in South Carolina, 1996-2013.

    Science.gov (United States)

    Li, Qing; Jenkins, Dorothea D; Kinsman, Stephen L

    2017-05-01

    Objective Neonatal seizures in the first 28 days of life often reflect underlying brain injury or abnormalities, and measure the quality of perinatal care in out-of-hospital births. Using the 2003 revision of birth certificates only, three studies reported more neonatal seizures recorded among home births or planned out-of-hospital births compared to hospital births. However, the validity of recording neonatal seizures or serious neurologic dysfunction across birth settings in birth certificates has not been evaluated. We aimed to validate seizure recording in birth certificates across birth settings using multiple datasets. Methods We examined checkbox items "seizures" and "seizure or serious neurologic dysfunction" in the 1989 and 2003 revisions of birth certificates in South Carolina from 1996 to 2013. Gold standards were ICD-9-CM codes 779.0, 345.X, and 780.3 in either hospital discharge abstracts or Medicaid encounters jointly. Results Sensitivity, positive predictive value, false positive rate, and the kappa statistic of neonatal seizure recording were 7%, 66%, 34%, and 0.12 for the 2003 revision of birth certificates in 547,177 hospital births from 2004 to 2013 and 5%, 33%, 67%, and 0.09 for the 1989 revision in 396,776 hospital births from 1996 to 2003, and 0, 0, 100%, -0.002 among 660 intended home births from 2004 to 2013 and 920 home births from 1996 to 2003, respectively. Conclusions for Practice Despite slight improvement across revisions, South Carolina birth certificates under-reported or falsely reported seizures among hospital births and especially home births. Birth certificates alone should not be used to measure neonatal seizures or serious neurologic dysfunction.
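The agreement statistics reported in studies like this one (sensitivity, positive predictive value, and Cohen's kappa) all come directly from a 2×2 table of recorded versus gold-standard cases. A minimal sketch with purely illustrative counts (not the study's data):

```python
# Illustrative 2x2 counts (NOT the study's data):
# rows = birth certificate checkbox (yes/no), cols = gold standard (yes/no)
tp, fp, fn, tn = 35, 18, 465, 99482

sensitivity = tp / (tp + fn)     # share of true cases that were recorded
ppv = tp / (tp + fp)             # share of recorded cases that were true

n = tp + fp + fn + tn
po = (tp + tn) / n                                           # observed agreement
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
kappa = (po - pe) / (1 - pe)                                 # Cohen's kappa
```

With rare outcomes like neonatal seizures, overall agreement `po` is high almost by default, which is why kappa (agreement beyond chance) is the more informative summary.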

  14. Carbonate fuel cell matrix

    Science.gov (United States)

    Farooque, Mohammad; Yuh, Chao-Yi

    1996-01-01

    A carbonate fuel cell matrix comprising support particles and crack attenuator particles which are made platelet in shape to increase the resistance of the matrix to through cracking. Also disclosed is a matrix having porous crack attenuator particles and a matrix whose crack attenuator particles have a thermal coefficient of expansion which is significantly different from that of the support particles, and a method of making platelet-shaped crack attenuator particles.

  15. Transition matrices and orbitals from reduced density matrix theory

    Energy Technology Data Exchange (ETDEWEB)

    Etienne, Thibaud [Université de Lorraine – Nancy, Théorie-Modélisation-Simulation, SRSMC, Boulevard des Aiguillettes 54506, Vandoeuvre-lès-Nancy (France); CNRS, Théorie-Modélisation-Simulation, SRSMC, Boulevard des Aiguillettes 54506, Vandoeuvre-lès-Nancy (France); Unité de Chimie Physique Théorique et Structurale, Université de Namur, Rue de Bruxelles 61, 5000 Namur (Belgium)

    2015-06-28

    In this contribution, we report two different methodologies for characterizing the electronic structure reorganization occurring when a chromophore undergoes an electronic transition. For the first method, we start by setting the theoretical background necessary to the reinterpretation through simple tensor analysis of (i) the transition density matrix and (ii) the natural transition orbitals in the scope of reduced density matrix theory. This novel interpretation is made more clear thanks to a short compendium of the one-particle reduced density matrix theory in a Fock space. The formalism is further applied to two different classes of excited states calculation methods, both requiring a single-determinant reference, that express an excited state as a hole-particle mono-excited configurations expansion, to which particle-hole correlation is coupled (time-dependent Hartree-Fock/time-dependent density functional theory) or not (configuration interaction single/Tamm-Dancoff approximation). For the second methodology presented in this paper, we introduce a novel and complementary concept related to electronic transitions with the canonical transition density matrix and the canonical transition orbitals. Their expression actually reflects the electronic cloud polarisation in the orbital space with a decomposition based on the actual contribution of one-particle excitations from occupied canonical orbitals to virtual ones. This approach validates our novel interpretation of the transition density matrix elements in terms of the Euclidean norm of elementary transition vectors in a linear tensor space. A proper use of these new concepts leads to the conclusion that despite the different principles underlying their construction, they provide two equivalent excited states topological analyses. This connexion is evidenced through simple illustrations of (in)organic dyes electronic transitions analysis.

  16. Matrix with Prescribed Eigenvectors

    Science.gov (United States)

    Ahmad, Faiz

    2011-01-01

    It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…
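The converse construction the record describes is short: given a full set of linearly independent eigenvectors (columns of V) and the desired eigenvalues, A = V Λ V⁻¹ has exactly those eigenpairs. A small NumPy sketch with invented data:

```python
import numpy as np

# Prescribed eigenvalues and linearly independent eigenvectors (columns of V)
eigvals = np.array([2.0, -1.0, 3.0])
V = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

# Spectral construction: A = V diag(eigvals) V^{-1}
A = V @ np.diag(eigvals) @ np.linalg.inv(V)
```

By construction A V = V Λ, so each column of V is an eigenvector of A with the prescribed eigenvalue; the construction requires V to be invertible.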

  17. Triangularization of a Matrix

    Indian Academy of Sciences (India)

    Much of linear algebra is devoted to reducing a matrix (via similarity or unitary similarity) to another that has lots of zeros. The simplest such theorem is the Schur triangularization theorem. This says that every matrix is unitarily similar to an upper triangular matrix. Our aim here is to show that though it is very easy to prove it ...
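A compact way to see the theorem at work numerically is the QR iteration, which applies a sequence of unitary similarities and, for matrices with real eigenvalues of distinct moduli, converges to the upper triangular Schur form (the example matrix here is ours, not from the article):

```python
import numpy as np

# QR iteration: A_{k+1} = R_k Q_k where A_k = Q_k R_k is a QR factorization.
# Each step is a unitary similarity, so eigenvalues are preserved, and the
# iterates approach an upper triangular (Schur) form.
A = np.array([[2.0, 1.0],
              [0.5, 1.0]])
T = A.copy()
for _ in range(60):
    Q, R = np.linalg.qr(T)
    T = R @ Q
# T is now (numerically) upper triangular; eigenvalues sit on its diagonal.
```

In practice one uses a library Schur routine rather than plain QR iteration, but the iteration makes the "unitarily similar to upper triangular" claim concrete.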

  18. Initial validation of the Argentinean Spanish version of the PedsQL™ 4.0 Generic Core Scales in children and adolescents with chronic diseases: acceptability and comprehensibility in low-income settings

    Directory of Open Access Journals (Sweden)

    Bauer Gabriela

    2008-08-01

    Full Text Available Abstract Background To validate the Argentinean Spanish version of the PedsQL™ 4.0 Generic Core Scales in Argentinean children and adolescents with chronic conditions and to assess the impact of socio-demographic characteristics on the instrument's comprehensibility and acceptability. Reliability, known-groups validity, and convergent validity were tested. Methods Consecutive sample of 287 children with chronic conditions and 105 healthy children, ages 2–18, and their parents. Chronically ill children were: (1) attending outpatient clinics and (2) had one of the following diagnoses: stem cell transplant, chronic obstructive pulmonary disease, HIV/AIDS, cancer, end stage renal disease, or complex congenital cardiopathy. Patients and adult proxies completed the PedsQL™ 4.0 and an overall health status assessment. Physicians were asked to rate degree of health status impairment. Results The PedsQL™ 4.0 was feasible (only 9 children, all 5 to 7 years old, could not complete the instrument), easy to administer, completed without, or with minimal, help by most children and parents, and required a brief administration time (5–6 minutes on average). People living below the poverty line and/or with low literacy needed more help to complete the instrument. Cronbach's alpha internal consistency values for the total and subscale scores exceeded 0.70 for self-reports of children over 8 years old and parent-reports of children over 5 years of age. Reliability of proxy-reports of 2–4 year-olds was low but improved when school items were excluded. Internal consistency for 5–7 year-olds was low (α range = 0.28–0.76). Construct validity was good. Child self-report and parent proxy-report PedsQL™ 4.0 scores were moderately but significantly correlated (ρ = 0.39). Conclusion Results suggest that the Argentinean Spanish PedsQL™ 4.0 is suitable for research purposes in the public health setting for children over 8 years old and parents of children over 5 years old.

  19. Validation of Simple Quantification Methods for (18)F-FP-CIT PET Using Automatic Delineation of Volumes of Interest Based on Statistical Probabilistic Anatomical Mapping and Isocontour Margin Setting.

    Science.gov (United States)

    Kim, Yong-Il; Im, Hyung-Jun; Paeng, Jin Chul; Lee, Jae Sung; Eo, Jae Seon; Kim, Dong Hyun; Kim, Euishin E; Kang, Keon Wook; Chung, June-Key; Lee, Dong Soo

    2012-12-01

    (18)F-FP-CIT positron emission tomography (PET) is an effective imaging method for dopamine transporters. In usual clinical practice, (18)F-FP-CIT PET is analyzed visually or quantified using manual delineation of a volume of interest (VOI) for the striatum. In this study, we suggested and validated two simple quantitative methods based on automatic VOI delineation using statistical probabilistic anatomical mapping (SPAM) and isocontour margin setting. Seventy-five (18)F-FP-CIT PET images acquired in routine clinical practice were used for this study. A study-specific image template was made and the subject images were normalized to the template. Afterwards, uptakes in the striatal regions and cerebellum were quantified using probabilistic VOI based on SPAM. A quantitative parameter, QSPAM, was calculated to simulate binding potential. Additionally, the functional volume of each striatal region and its uptake were measured in automatically delineated VOI using isocontour margin setting. Uptake-volume product (QUVP) was calculated for each striatal region. QSPAM and QUVP were compared with visual grading and the influence of cerebral atrophy on the measurements was tested. Image analyses were successful in all the cases. Both the QSPAM and QUVP were significantly different according to visual grading. Simple quantitative measurements of QSPAM and QUVP showed acceptable agreement with visual grading. Although QSPAM in some group may be influenced by cerebral atrophy, these simple methods are expected to be effective in the quantitative analysis of (18)F-FP-CIT PET in usual clinical practice.

  20. The validation of language tests

    African Journals Online (AJOL)

    KATEVG

    Stellenbosch Papers in Linguistics, Vol. ... validation is necessary because of the major impact which test results can have on the many ... Messick (1989: 20) introduces his much-quoted progressive matrix (cf. table 1), which ... argue that current accounts of validity only superficially address theories of measurement.

  1. Matrix method for acoustic levitation simulation.

    Science.gov (United States)

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.

  2. [Penile augmentation using acellular dermal matrix].

    Science.gov (United States)

    Zhang, Jin-ming; Cui, Yong-yan; Pan, Shu-juan; Liang, Wei-qiang; Chen, Xiao-xuan

    2004-11-01

    Penile enhancement was performed using acellular dermal matrix. Multiple layers of acellular dermal matrix were placed underneath the penile skin to enlarge its girth. Since March 2002, penile augmentation has been performed on 12 cases using acellular dermal matrix. Postoperatively all the patients had a 1.3-3.1 cm (2.6 cm in average) increase in penile girth in a flaccid state. The penis had normal appearance and feeling without contour deformities. All patients gained sexual ability 3 months after the operation. One had a delayed wound healing due to tight dressing, which was repaired with a scrotal skin flap. Penile enlargement by implantation of multiple layers of acellular dermal matrix was a safe and effective operation. This method can be performed in an outpatient ambulatory setting. The advantages of the acellular dermal matrix over the autogenous dermal fat grafts are elimination of donor site injury and scar and significant shortening of operation time.

  3. Counting SET-free sets

    OpenAIRE

    Harman, Nate

    2016-01-01

    We consider the following counting problem related to the card game SET: How many $k$-element SET-free sets are there in an $n$-dimensional SET deck? Through a series of algebraic reformulations and reinterpretations, we show the answer to this question satisfies two polynomiality conditions.

  4. Seasonal evolution of soil and plant parameters on the agricultural Gebesee test site: a database for the set-up and validation of EO-LDAS and satellite-aided retrieval models

    Directory of Open Access Journals (Sweden)

    S. C. Truckenbrodt

    2018-03-01

    Full Text Available Ground reference data are a prerequisite for the calibration, update, and validation of retrieval models facilitating the monitoring of land parameters based on Earth Observation data. Here, we describe the acquisition of a comprehensive ground reference database which was created to test and validate the recently developed Earth Observation Land Data Assimilation System (EO-LDAS) and products derived from remote sensing observations in the visible and infrared range. In situ data were collected for seven crop types (winter barley, winter wheat, spring wheat, durum, winter rape, potato, and sugar beet) cultivated on the agricultural Gebesee test site, central Germany, in 2013 and 2014. The database contains information on hyperspectral surface reflectance factors, the evolution of biophysical and biochemical plant parameters, phenology, surface conditions, atmospheric states, and a set of ground control points. Ground reference data were gathered at an approximately weekly resolution and on different spatial scales to investigate variations within and between acreages. In situ data collected less than 1 day apart from satellite acquisitions (RapidEye, SPOT 5, Landsat-7 and -8) with a cloud coverage ≤ 25 % are available for 10 and 15 days in 2013 and 2014, respectively. The measurements show that the investigated growing seasons were characterized by distinct meteorological conditions causing interannual variations in the parameter evolution. Here, the experimental design of the field campaigns, and methods employed in the determination of all parameters, are described in detail. Insights into the database are provided and potential fields of application are discussed. The data will contribute to a further development of crop monitoring methods based on remote sensing techniques. The database is freely available at PANGAEA (https://doi.org/10.1594/PANGAEA.874251).

  5. Seasonal evolution of soil and plant parameters on the agricultural Gebesee test site: a database for the set-up and validation of EO-LDAS and satellite-aided retrieval models

    Science.gov (United States)

    Truckenbrodt, Sina C.; Schmullius, Christiane C.

    2018-03-01

    Ground reference data are a prerequisite for the calibration, update, and validation of retrieval models facilitating the monitoring of land parameters based on Earth Observation data. Here, we describe the acquisition of a comprehensive ground reference database which was created to test and validate the recently developed Earth Observation Land Data Assimilation System (EO-LDAS) and products derived from remote sensing observations in the visible and infrared range. In situ data were collected for seven crop types (winter barley, winter wheat, spring wheat, durum, winter rape, potato, and sugar beet) cultivated on the agricultural Gebesee test site, central Germany, in 2013 and 2014. The database contains information on hyperspectral surface reflectance factors, the evolution of biophysical and biochemical plant parameters, phenology, surface conditions, atmospheric states, and a set of ground control points. Ground reference data were gathered at an approximately weekly resolution and on different spatial scales to investigate variations within and between acreages. In situ data collected less than 1 day apart from satellite acquisitions (RapidEye, SPOT 5, Landsat-7 and -8) with a cloud coverage ≤ 25 % are available for 10 and 15 days in 2013 and 2014, respectively. The measurements show that the investigated growing seasons were characterized by distinct meteorological conditions causing interannual variations in the parameter evolution. Here, the experimental design of the field campaigns, and methods employed in the determination of all parameters, are described in detail. Insights into the database are provided and potential fields of application are discussed. The data will contribute to a further development of crop monitoring methods based on remote sensing techniques. The database is freely available at PANGAEA (https://doi.org/10.1594/PANGAEA.874251).

  6. Principles of Proper Validation

    DEFF Research Database (Denmark)

    Esbensen, Kim; Geladi, Paul

    2010-01-01

    to suffer from the same deficiencies. The PPV are universal and can be applied to all situations in which the assessment of performance is desired: prediction-, classification-, time series forecasting-, or modeling validation. The key element of PPV is the Theory of Sampling (TOS), which allows insight...... is critically necessary for the inclusion of the sampling errors incurred in all 'future' situations in which the validated model must perform. Logically, therefore, all one-data-set re-sampling approaches for validation, especially cross-validation and leverage-corrected validation, should be terminated...

  7. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  8. Neutrino mass matrix

    International Nuclear Information System (INIS)

    Strobel, E.L.

    1985-01-01

    Given the many conflicting experimental results, examination is made of the neutrino mass matrix in order to determine possible masses and mixings. It is assumed that the Dirac mass matrix for the electron, muon, and tau neutrinos is similar in form to those of the quarks and charged leptons, and that the smallness of the observed neutrino masses results from the Gell-Mann-Ramond-Slansky mechanism. Analysis of masses and mixings for the neutrinos is performed using general structures for the Majorana mass matrix. It is shown that if certain tentative experimental results concerning the neutrino masses and mixing angles are confirmed, significant limitations may be placed on the Majorana mass matrix. The most satisfactory simple assumption concerning the Majorana mass matrix is that it is approximately proportional to the Dirac mass matrix. A very recent experimental neutrino mass result and its implications are discussed. Some general properties of matrices with structure similar to the Dirac mass matrices are discussed

  9. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Giles M. Foody

    2017-08-01

    Full Text Available Validation data are often used to evaluate the performance of a trained neural network and used in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure, such as overall classification accuracy. The latter is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes do vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets, respectively; both p < 0.05). The accuracy of the classifications that used a stratified sample in validation were smaller, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
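The effect described in this record can be seen with a back-of-the-envelope calculation. Suppose (hypothetically) a two-class problem where one class is nine times as abundant as the other and the classifier is more accurate on the abundant class; the prevalences and per-class accuracies below are invented for illustration:

```python
import numpy as np

prevalence = np.array([0.9, 0.1])        # class abundance in the target population
per_class_acc = np.array([0.95, 0.70])   # classifier accuracy within each class

# Random sampling: the validation set mirrors class abundance
acc_random = float(prevalence @ per_class_acc)
# Stratified sampling: the validation set has balanced classes
acc_stratified = float(per_class_acc.mean())
```

Here `acc_random` (0.925) estimates the accuracy actually achieved on the population, while the stratified estimate (0.825) is lower because the rare, harder class is over-represented in the balanced validation set — the kind of divergence the study measured empirically.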

  10. The Matrix exponential, Dynamic Systems and Control

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

    The matrix exponential can be found in various connections in the analysis and control of dynamic systems. In this short note we are going to list a few examples. The matrix exponential usually pops up in connection with the sampling process, whether in a deterministic or a stochastic setting...... or as a tool for determining a Gramian matrix. This note is intended to be used in connection with the teaching of the course in Stochastic Adaptive Control (02421) given at Informatics and Mathematical Modelling (IMM), The Technical University of Denmark. This work is a result of a study of the literature....
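As a concrete instance of the sampling connection: discretizing a continuous-time system x' = Ax with sample period T uses the transition matrix e^{AT}. A minimal sketch computing it via a truncated Taylor series (in practice one would use a library routine such as scipy.linalg.expm; the double-integrator example is ours, not from the note):

```python
import numpy as np

def expm_taylor(M, terms=25):
    """Matrix exponential via a truncated Taylor series (sketch only)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k       # accumulates M^k / k!
        out = out + term
    return out

# Double integrator x' = A x, sampled with zero-order hold at period T:
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
T = 0.1
Ad = expm_taylor(A * T)           # discrete-time transition matrix e^{A T}
```

For this nilpotent A the series terminates exactly, giving Ad = [[1, T], [0, 1]]; for general A, robust implementations use scaling-and-squaring rather than a raw Taylor sum.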

  11. Matrix metalloproteinases in lung biology

    Directory of Open Access Journals (Sweden)

    Parks William C

    2000-12-01

    Full Text Available Abstract Despite much information on their catalytic properties and gene regulation, we actually know very little of what matrix metalloproteinases (MMPs do in tissues. The catalytic activity of these enzymes has been implicated to function in normal lung biology by participating in branching morphogenesis, homeostasis, and repair, among other events. Overexpression of MMPs, however, has also been blamed for much of the tissue destruction associated with lung inflammation and disease. Beyond their role in the turnover and degradation of extracellular matrix proteins, MMPs also process, activate, and deactivate a variety of soluble factors, and seldom is it readily apparent by presence alone if a specific proteinase in an inflammatory setting is contributing to a reparative or disease process. An important goal of MMP research will be to identify the actual substrates upon which specific enzymes act. This information, in turn, will lead to a clearer understanding of how these extracellular proteinases function in lung development, repair, and disease.

  12. POLYMAT-C: a comprehensive SPSS program for computing the polychoric correlation matrix.

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2015-09-01

    We provide a free noncommercial SPSS program that implements procedures for (a) obtaining the polychoric correlation matrix between a set of ordered categorical measures, so that it can be used as input for the SPSS factor analysis (FA) program; (b) testing the null hypothesis of zero population correlation for each element of the matrix by using appropriate simulation procedures; (c) obtaining valid and accurate confidence intervals via bootstrap resampling for those correlations found to be significant; and (d) performing, if necessary, a smoothing procedure that makes the matrix amenable to any FA estimation procedure. For the main purpose (a), the program uses a robust unified procedure that allows four different types of estimates to be obtained at the user's choice. Overall, we hope the program will be a very useful tool for the applied researcher, not only because it provides an appropriate input matrix for FA, but also because it allows the researcher to carefully check the appropriateness of the matrix for this purpose. The SPSS syntax, a short manual, and data files related to this article are available as Supplemental materials that are available for download with this article.
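
For the simplest 2 × 2 case (the tetrachoric correlation), the two-step estimation idea behind purpose (a) can be sketched as follows. This is an illustrative Python re-implementation of the standard textbook procedure, not the program's SPSS code.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import multivariate_normal, norm

def tetrachoric(table):
    """Two-step ML estimate of the tetrachoric correlation, i.e. the 2x2
    special case of the polychoric correlation (illustrative sketch)."""
    n = table.sum()
    # Step 1: thresholds from the marginal proportions.
    a = norm.ppf(table[0].sum() / n)     # threshold for the row variable
    b = norm.ppf(table[:, 0].sum() / n)  # threshold for the column variable

    def neg_loglik(rho):
        # Cell probabilities under a latent bivariate normal with corr rho.
        p00 = multivariate_normal.cdf([a, b], cov=[[1.0, rho], [rho, 1.0]])
        p01 = norm.cdf(a) - p00
        p10 = norm.cdf(b) - p00
        p11 = 1.0 - p00 - p01 - p10
        p = np.clip([p00, p01, p10, p11], 1e-12, 1.0)
        return -(table.ravel() * np.log(p)).sum()

    # Step 2: one-dimensional likelihood maximization over rho.
    return minimize_scalar(neg_loglik, bounds=(-0.999, 0.999),
                           method="bounded").x

# Counts consistent with latent correlation 0.5 and both thresholds at 0.
table = np.array([[400, 200],
                  [200, 400]])
print(round(tetrachoric(table), 2))  # → 0.5
```

The full polychoric case generalizes this to more thresholds per variable; the two-step scheme keeps the optimization one-dimensional.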

  13. Patience of matrix games

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Arnsfelt; Ibsen-Jensen, Rasmus; Podolskii, Vladimir V.

    2013-01-01

    For matrix games we study how small nonzero probability must be used in optimal strategies. We show that for image win–lose–draw games (i.e. image matrix games) nonzero probabilities smaller than image are never needed. We also construct an explicit image win–lose game such that the unique optimal...

  14. Matrix comparison, Part 2

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg; Borlund, Pia

    2007-01-01

    The present two-part article introduces matrix comparison as a formal means for evaluation purposes in informetric studies such as cocitation analysis. In the first part, the motivation behind introducing matrix comparison to informetric studies, as well as two important issues influencing such c...

  15. Unitarity of CKM Matrix

    CERN Document Server

    Saleem, M

    2002-01-01

    The Unitarity of the CKM matrix is examined in the light of the latest available accurate data. The analysis shows that a conclusive result cannot be derived at present. Only more precise data can determine whether the CKM matrix opens new vistas beyond the standard model or not.

  16. Non-negative Matrix Factorization for Binary Data

    DEFF Research Database (Denmark)

    Larsen, Jacob Søgaard; Clemmensen, Line Katrine Harder

    We propose the Logistic Non-negative Matrix Factorization for decomposition of binary data. Binary data are frequently generated in e.g. text analysis, sensory data, market basket data etc. A common method for analysing non-negative data is the Non-negative Matrix Factorization, though...... this is in theory not appropriate for binary data, and thus we propose a novel Non-negative Matrix Factorization based on the logistic link function. Furthermore we generalize the method to handle missing data. The formulation of the method is compared to a previously proposed method (Tome et al., 2015). We compare...... the performance of the Logistic Non-negative Matrix Factorization to Least Squares Non-negative Matrix Factorization and Kullback-Leibler (KL) Non-negative Matrix Factorization on sets of binary data: a synthetic dataset, a set of student comments on their professors collected in a binary term-document matrix...
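
A minimal sketch of the idea, assuming the model X ≈ sigmoid(WH) with nonnegative factors fitted by projected gradient descent on the logistic (cross-entropy) loss; this illustrates the logistic link, not the authors' algorithm.

```python
import numpy as np

def logistic_nmf(X, r, steps=300, lr=0.01, seed=1):
    """Sketch: fit X ~ sigmoid(W @ H) with W, H >= 0 by projected
    gradient descent on the logistic (cross-entropy) loss."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W, H = rng.random((n, r)), rng.random((r, m))
    losses = []
    for _ in range(steps):
        S = 1.0 / (1.0 + np.exp(-(W @ H)))      # predicted probabilities
        losses.append(float(-np.mean(
            X * np.log(S + 1e-9) + (1 - X) * np.log(1 - S + 1e-9))))
        G = S - X                                # gradient w.r.t. W @ H
        W = np.maximum(W - lr * G @ H.T, 0.0)    # projected gradient steps
        H = np.maximum(H - lr * W.T @ G, 0.0)
    return W, H, losses

# Binary toy data; the loss should decrease over the iterations.
rng = np.random.default_rng(0)
X = (rng.random((20, 15)) < 0.5).astype(float)
W, H, losses = logistic_nmf(X, r=2)
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The projection step (clipping at zero) is the simplest way to keep the factors nonnegative; multiplicative or missing-data-aware updates would replace it in a fuller treatment.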

  17. A Normalized Transfer Matrix Method for the Free Vibration of Stepped Beams: Comparison with Experimental and FE(3D) Methods

    Directory of Open Access Journals (Sweden)

    Tamer Ahmed El-Sayed

    2017-01-01

    Full Text Available The exact solution for a multistepped Timoshenko beam is derived using a set of fundamental solutions. This set of solutions is derived to normalize the solution at the origin of the coordinates. The start, end, and intermediate boundary conditions involve concentrated masses and linear and rotational elastic supports. The beam start, end, and intermediate equations are assembled using the present normalized transfer matrix (NTM) method. The advantage of this method is that it is quicker than the standard method because the size of the complete system coefficient matrix is 4 × 4. In addition, during the assembly of this matrix, no inverse matrix steps are required. The validity of this method is tested by comparing the results of the current method with the literature. Then the validity of the exact stepped analysis is checked using experimental and FE(3D) methods. The experimental results for stepped beams with a single step and two steps, for sixteen different test samples, are in excellent agreement with those of the three-dimensional finite element FE(3D) method. The comparison between the NTM method and the finite element method results shows that the modal percentage deviation increases when a beam step location coincides with a peak point in the mode shape. Meanwhile, the deviation decreases when a beam step location coincides with a straight portion in the mode shape.

  18. Detecting Motor Impairment in Early Parkinson’s Disease via Natural Typing Interaction With Keyboards: Validation of the neuroQWERTY Approach in an Uncontrolled At-Home Setting

    Science.gov (United States)

    Ledesma-Carbayo, María J; Butterworth, Ian; Matarazzo, Michele; Montero-Escribano, Paloma; Puertas-Martín, Verónica; Gray, Martha L

    2018-01-01

    Background Parkinson’s disease (PD) is the second most prevalent neurodegenerative disease and one of the most common forms of movement disorder. Although there is no known cure for PD, existing therapies can provide effective symptomatic relief. However, optimal titration is crucial to avoid adverse effects. Today, decision making for PD management is challenging because it relies on subjective clinical evaluations that require a visit to the clinic. This challenge has motivated recent research initiatives to develop tools that can be used by nonspecialists to assess psychomotor impairment. Among these emerging solutions, we recently reported the neuroQWERTY index, a new digital marker able to detect motor impairment in an early PD cohort through the analysis of the key press and release timing data collected during a controlled in-clinic typing task. Objective The aim of this study was to extend the in-clinic implementation to an at-home implementation by validating the applicability of the neuroQWERTY approach in an uncontrolled at-home setting, using the typing data from subjects’ natural interaction with their laptop to enable remote and unobtrusive assessment of PD signs. Methods We implemented the data-collection platform and software to enable access and storage of the typing data generated by users while using their computer at home. We recruited a total of 60 participants; of these participants 52 (25 people with Parkinson’s and 27 healthy controls) provided enough data to complete the analysis. Finally, to evaluate whether our in-clinic-built algorithm could be used in an uncontrolled at-home setting, we compared its performance on the data collected during the controlled typing task in the clinic and the results of our method using the data passively collected at home. Results Despite the randomness and sparsity introduced by the uncontrolled setting, our algorithm performed nearly as well in the at-home data (area under the receiver operating

  19. Fuzzy risk matrix

    International Nuclear Information System (INIS)

    Markowski, Adam S.; Mannan, M. Sam

    2008-01-01

    A risk matrix is a mechanism to characterize and rank process risks that are typically identified through one or more multifunctional reviews (e.g., process hazard analysis, audits, or incident investigation). This paper describes a procedure for developing a fuzzy risk matrix that may be used for emerging fuzzy logic applications in different safety analyses (e.g., LOPA). The fuzzification of the frequency and severity of the consequences of the incident scenario, which are the basic inputs for the fuzzy risk matrix, is described. Subsequently, using different designs of risk matrix, fuzzy rules are established, enabling the development of fuzzy risk matrices. Three types of fuzzy risk matrix have been developed (low-cost, standard, and high-cost), and using a distillation column case study, the effect of the design on the final defuzzified risk index is demonstrated.
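
The pipeline of fuzzification, rule evaluation, and defuzzification can be sketched as follows. The membership points and rule base here are invented for illustration and are not the paper's three matrix designs.

```python
import numpy as np

# Made-up membership points (illustrative). Frequency, severity and the
# risk index all live on a 0..10 scale.
PTS = {"low": (-5.0, 0.0, 5.0), "med": (0.0, 5.0, 10.0), "high": (5.0, 10.0, 15.0)}

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

risk_axis = np.linspace(0.0, 10.0, 1001)
RISK_SETS = {name: tri(risk_axis, *p) for name, p in PTS.items()}

# Hypothetical rule base: (frequency set, severity set) -> risk set.
RULES = {("low", "low"): "low", ("low", "med"): "low", ("med", "low"): "low",
         ("med", "med"): "med", ("low", "high"): "med", ("high", "low"): "med",
         ("med", "high"): "high", ("high", "med"): "high", ("high", "high"): "high"}

def risk_index(freq, sev):
    """Min-max (Mamdani) inference followed by centroid defuzzification."""
    agg = np.zeros_like(risk_axis)
    for (f_set, s_set), out in RULES.items():
        w = min(tri(freq, *PTS[f_set]), tri(sev, *PTS[s_set]))
        agg = np.maximum(agg, np.minimum(w, RISK_SETS[out]))
    return float((risk_axis * agg).sum() / agg.sum())

print([round(risk_index(v, v), 2) for v in (1, 5, 9)])
```

Changing the rule table is exactly what distinguishes one risk-matrix design from another; the defuzzified index then shifts accordingly, which is the effect the case study demonstrates.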

  20. Automatic sets and Delone sets

    International Nuclear Information System (INIS)

    Barbe, A; Haeseler, F von

    2004-01-01

    Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples
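
One common two-dimensional analogue of the Thue-Morse sequence colours cell (i, j) by the parity of the binary digit sums of i and j; whether this matches the paper's exact construction is an assumption, but it illustrates a set generated by a finite automaton reading base-2 digits.

```python
def digit_sum(n, base=2):
    """Sum of the digits of n in the given base."""
    s = 0
    while n:
        s += n % base
        n //= base
    return s

# Cell (i, j) is coloured by the parity of the binary digit sums of i and j;
# each row and column repeats the classic 1D Thue-Morse pattern.
N = 8
grid = [[(digit_sum(i) + digit_sum(j)) % 2 for j in range(N)] for i in range(N)]
for row in grid:
    print("".join("#" if cell else "." for cell in row))  # first row: .##.#..#
```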

  1. Hexagonal response matrix using symmetries

    International Nuclear Information System (INIS)

    Gotoh, Y.

    1991-01-01

    A response matrix for use in core calculations for nuclear reactors with hexagonal fuel assemblies is presented. It is based on the incoming currents averaged over the half-surface of a hexagonal node by applying symmetry theory. The boundary conditions of the incoming currents on the half-surface of the node are expressed by a complete set of orthogonal vectors which are constructed from symmetrized functions. The expansion coefficients of the functions are determined by the boundary conditions of incoming currents. (author)

  2. The nuclear reaction matrix

    International Nuclear Information System (INIS)

    Krenciglowa, E.M.; Kung, C.L.; Kuo, T.T.S.; Osnes, E. (Department of Physics, State University of New York at Stony Brook, Stony Brook, New York 11794)

    1976-01-01

    Different definitions of the reaction matrix G appropriate to the calculation of nuclear structure are reviewed and discussed. Qualitative physical arguments are presented in support of a two-step calculation of the G-matrix for finite nuclei. In the first step the high-energy excitations are included using orthogonalized plane-wave intermediate states, and in the second step the low-energy excitations are added in, using harmonic oscillator intermediate states. Accurate calculations of G-matrix elements for nuclear structure calculations in the A ≈ 18 region are performed following this procedure and treating the Pauli exclusion operator Q_2p by the method of Tsai and Kuo. The treatment of Q_2p, the effect of the intermediate-state spectrum and the energy dependence of the reaction matrix are investigated in detail. The present matrix elements are compared with various matrix elements given in the literature. In particular, close agreement is obtained with the matrix elements calculated by Kuo and Brown using approximate methods

  3. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    Science.gov (United States)

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

    In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix factorization with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek the graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.

  4. The Lehmer Matrix and Its Recursive Analogue

    Science.gov (United States)

    2010-01-01

    LU factorization of matrix A by considering det A = det U = ∏_{i=1}^{n} (2i−1)/i². The nth Catalan number is given in terms of binomial coefficients by Cn...
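
The stated determinant identity can be checked numerically for the Lehmer matrix; the check below assumes the usual definition a_ij = min(i, j)/max(i, j), which the excerpt does not restate.

```python
import numpy as np
from math import prod

def lehmer(n):
    """Lehmer matrix: a_ij = min(i, j) / max(i, j) with 1-based indices
    (assumed definition; the excerpt does not restate it)."""
    i, j = np.indices((n, n)) + 1
    return np.minimum(i, j) / np.maximum(i, j)

# Numerically confirm det A = prod_{i=1..n} (2i - 1) / i^2.
for n in range(1, 8):
    formula = prod((2 * i - 1) / i**2 for i in range(1, n + 1))
    assert np.isclose(np.linalg.det(lehmer(n)), formula)
print("det A = prod (2i-1)/i^2 confirmed for n = 1..7")
```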

  5. Matrix Metalloproteinase Enzyme Family

    Directory of Open Access Journals (Sweden)

    Ozlem Goruroglu Ozturk

    2013-04-01

    Full Text Available Matrix metalloproteinases play an important role in many biological processes such as embryogenesis, tissue remodeling, wound healing, and angiogenesis, and in some pathological conditions such as atherosclerosis, arthritis and cancer. Currently, 24 genes have been identified in humans that encode different groups of matrix metalloproteinase enzymes. This review discusses the members of the matrix metalloproteinase family and their substrate specificity, structure, function and the regulation of their enzyme activity by tissue inhibitors. [Archives Medical Review Journal 2013; 22(2): 209-220]

  6. Matrix groups for undergraduates

    CERN Document Server

    Tapp, Kristopher

    2005-01-01

    Matrix groups touch an enormous spectrum of the mathematical arena. This textbook brings them into the undergraduate curriculum. It makes an excellent one-semester course for students familiar with linear and abstract algebra and prepares them for a graduate course on Lie groups. Matrix Groups for Undergraduates is concrete and example-driven, with geometric motivation and rigorous proofs. The story begins and ends with the rotations of a globe. In between, the author combines rigor and intuition to describe basic objects of Lie theory: Lie algebras, matrix exponentiation, Lie brackets, and maximal tori.

  7. Elementary matrix theory

    CERN Document Server

    Eves, Howard

    1980-01-01

    The usefulness of matrix theory as a tool in disciplines ranging from quantum mechanics to psychometrics is widely recognized, and courses in matrix theory are increasingly a standard part of the undergraduate curriculum.This outstanding text offers an unusual introduction to matrix theory at the undergraduate level. Unlike most texts dealing with the topic, which tend to remain on an abstract level, Dr. Eves' book employs a concrete elementary approach, avoiding abstraction until the final chapter. This practical method renders the text especially accessible to students of physics, engineeri

  8. The fitness for purpose of analytical methods applied to fluorimetric uranium determination in water matrix

    International Nuclear Information System (INIS)

    Grinman, Ana; Giustina, Daniel; Mondini, Julia; Diodat, Jorge

    2008-01-01

    Full text: This paper describes the steps which should be followed by a laboratory in order to validate the fluorimetric method for natural uranium in water matrix. The validation of an analytical method is a necessary requirement prior to accreditation, under the ISO/IEC 17025 standard, of a non-normalized method. Different analytical techniques differ in the set of parameters to be validated. Depending on the chemical process, measurement technique, matrix type, data fitting and measurement efficiency, a laboratory must set up experiments to verify the reliability of data, through the application of several statistical tests and by participating in Quality Programs (QP) organized by reference laboratories such as the National Institute of Standards and Technology (NIST), the National Physical Laboratory (NPL), or the Environmental Measurements Laboratory (EML). However, participation in QP involves not only international reference laboratories, but also national ones which are able to prove proficiency to the Argentinean Accreditation Board. The parameters that the ARN laboratory had to validate in the fluorimetric method, in accordance with the Eurachem guide and IUPAC definitions, are: Detection Limit, Quantification Limit, Precision, Intra-laboratory Precision, Reproducibility Limit, Repeatability Limit, Linear Range and Robustness. Assays to fit the above parameters were designed on the basis of statistical requirements, and a detailed data treatment is presented together with the respective tests in order to show the parameters validated. As a final conclusion, uranium determination by fluorimetry is a reliable method for direct measurement to meet radioprotection requirements in water matrix, within its linear range, which is fixed every time a calibration is carried out at the beginning of the analysis.
The detection limit (depending on blank standard deviation and slope) varies between 3 ug U and 5 ug U, which yields minimum detectable concentrations (MDC) of
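
As an illustration of the stated dependence of the detection limit on the blank's standard deviation and the calibration slope, the sketch below applies the common 3-sigma convention; the blank readings and slope are invented, not the laboratory's data.

```python
import statistics

# Invented fluorimeter blank readings (arbitrary units) and calibration
# slope; the 3-sigma convention below is the common one, assumed here.
blank_readings = [0.021, 0.019, 0.023, 0.020, 0.022]  # blank signals (a.u.)
slope = 0.0015                                         # a.u. per ug U

s_blank = statistics.stdev(blank_readings)             # blank standard deviation
detection_limit_ug = 3 * s_blank / slope               # 3-sigma detection limit
print(f"detection limit ~ {detection_limit_ug:.1f} ug U")  # → ~ 3.2 ug U
```

A noisier blank or a shallower calibration slope raises the limit, which is why it is re-fixed at each calibration.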

  9. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid

  10. Explicating Validity

    Science.gov (United States)

    Kane, Michael T.

    2016-01-01

    How we choose to use a term depends on what we want to do with it. If "validity" is to be used to support a score interpretation, validation would require an analysis of the plausibility of that interpretation. If validity is to be used to support score uses, validation would require an analysis of the appropriateness of the proposed…

  11. Whitby Mudstone, flow from matrix to fractures

    Science.gov (United States)

    Houben, Maartje; Hardebol, Nico; Barnhoorn, Auke; Boersma, Quinten; Peach, Colin; Bertotti, Giovanni; Drury, Martyn

    2016-04-01

    Fluid flow from matrix to well in shales would be faster if we account for the duality of the permeable medium, considering a highly permeable fracture network together with a tight matrix. To investigate how long and how far a gas molecule would have to travel through the matrix until it reaches an open connected fracture, we investigated the permeability of the Whitby Mudstone (UK) matrix in combination with mapping the fracture network present in the current outcrops of the Whitby Mudstone at the Yorkshire coast. Matrix permeability was measured perpendicular to the bedding using a pressure step decay method on core samples, and permeability values are in the microdarcy range. The natural fracture network present in the pavement shows a connected network with dominant NS and EW strikes, where the NS fractures are the main fracture set with an orthogonal fracture set EW. Fracture spacing relations in the pavements show that the average distance to the nearest fracture varies between 7 cm (EW) and 14 cm (NS), where 90% of the matrix is within 30 cm of the nearest fracture. By making some assumptions (the fracture network at depth is similar to what is exposed in the current pavements and open to flow; the fracture network is at hydrostatic pressure at 3 km depth; the overpressure between matrix and fractures is 10%; and the matrix permeability perpendicular to the bedding is 0.1 microdarcy), we have calculated the time it takes for a gas molecule to travel to the nearest fracture. These input values give travel times up to 8 days for a distance of 14 cm. If the permeability is changed to 1 nanodarcy or 10 microdarcy, travel times change to 2.2 years or 2 hours respectively.

  12. Hacking the Matrix.

    Science.gov (United States)

    Czerwinski, Michael; Spence, Jason R

    2017-01-05

    Recently in Nature, Gjorevski et al. (2016) describe a fully defined synthetic hydrogel that mimics the extracellular matrix to support in vitro growth of intestinal stem cells and organoids. The hydrogel allows exquisite control over the chemical and physical in vitro niche and enables identification of regulatory properties of the matrix. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. The Matrix Organization Revisited

    DEFF Research Database (Denmark)

    Gattiker, Urs E.; Ulhøi, John Parm

    1999-01-01

    This paper gives a short overview of matrix structure and technology management. It outlines some of the characteristics and also points out that many organizations may actually be hybrids (i.e., they mix several ways of organizing to allocate resources effectively).

  14. The Exopolysaccharide Matrix

    Science.gov (United States)

    Koo, H.; Falsetta, M.L.; Klein, M.I.

    2013-01-01

    Many infectious diseases in humans are caused or exacerbated by biofilms. Dental caries is a prime example of a biofilm-dependent disease, resulting from interactions of microorganisms, host factors, and diet (sugars), which modulate the dynamic formation of biofilms on tooth surfaces. All biofilms have a microbial-derived extracellular matrix as an essential constituent. The exopolysaccharides formed through interactions between sucrose- (and starch-) and Streptococcus mutans-derived exoenzymes present in the pellicle and on microbial surfaces (including non-mutans) provide binding sites for cariogenic and other organisms. The polymers formed in situ enmesh the microorganisms while forming a matrix facilitating the assembly of three-dimensional (3D) multicellular structures that encompass a series of microenvironments and are firmly attached to teeth. The metabolic activity of microbes embedded in this exopolysaccharide-rich and diffusion-limiting matrix leads to acidification of the milieu and, eventually, acid-dissolution of enamel. Here, we discuss recent advances concerning spatio-temporal development of the exopolysaccharide matrix and its essential role in the pathogenesis of dental caries. We focus on how the matrix serves as a 3D scaffold for biofilm assembly while creating spatial heterogeneities and low-pH microenvironments/niches. Further understanding on how the matrix modulates microbial activity and virulence expression could lead to new approaches to control cariogenic biofilms. PMID:24045647

  15. Fully Decentralized Semi-supervised Learning via Privacy-preserving Matrix Completion.

    Science.gov (United States)

    Fierimonte, Roberto; Scardapane, Simone; Uncini, Aurelio; Panella, Massimo

    2016-08-26

    Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems, by extending the framework of manifold regularization. The main component of the proposed algorithm consists of a fully distributed computation of the adjacency matrix of the training patterns. To this end, we propose a novel algorithm for low-rank distributed matrix completion, based on the framework of diffusion adaptation. Overall, the distributed semi-supervised algorithm is efficient and scalable, and it can preserve privacy by the inclusion of flexible privacy-preserving mechanisms for similarity computation. The experimental results and comparison on a wide range of standard semi-supervised benchmarks validate our proposal.

  16. Betatron coupling: Merging Hamiltonian and matrix approaches

    Directory of Open Access Journals (Sweden)

    R. Calaga

    2005-03-01

    Full Text Available Betatron coupling is usually analyzed using either matrix formalism or Hamiltonian perturbation theory. The latter is less exact but provides a better physical insight. In this paper direct relations are derived between the two formalisms. This makes it possible to interpret the matrix approach in terms of resonances, as well as use results of both formalisms indistinctly. An approach to measure the complete coupling matrix and its determinant from turn-by-turn data is presented. Simulations using methodical accelerator design MAD-X, an accelerator design and tracking program, were performed to validate the relations and understand the scope of their application to real accelerators such as the Relativistic Heavy Ion Collider.

  17. The QCD spin chain S matrix

    International Nuclear Information System (INIS)

    Ahn, Changrim; Nepomechie, Rafael I.; Suzuki, Junji

    2008-01-01

    Beisert et al. have identified an integrable SU(2,2) quantum spin chain which gives the one-loop anomalous dimensions of certain operators in large N c QCD. We derive a set of nonlinear integral equations (NLIEs) for this model, and compute the scattering matrix of the various (in particular, magnon) excitations

  18. Inverter Matrix for the Clementine Mission

    Science.gov (United States)

    Buehler, M. G.; Blaes, B. R.; Tardio, G.; Soli, G. A.

    1994-01-01

    An inverter matrix test circuit was designed for the Clementine space mission and is built into the RRELAX (Radiation and Reliability Assurance Experiment). The objective is to develop a circuit that will allow the evaluation of the CMOS FETs using a lean data set in the noisy spacecraft environment.

  19. ANL Critical Assembly Covariance Matrix Generation - Addendum

    Energy Technology Data Exchange (ETDEWEB)

    McKnight, Richard D. [Argonne National Lab. (ANL), Argonne, IL (United States); Grimm, Karl N. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-13

    In March 2012, a report was issued on covariance matrices for Argonne National Laboratory (ANL) critical experiments. That report detailed the theory behind the calculation of covariance matrices and the methodology used to determine the matrices for a set of 33 ANL experimental set-ups. Since that time, three new experiments have been evaluated and approved. This report essentially updates the previous report by adding in these new experiments to the preceding covariance matrix structure.

  20. Implementation and automated validation of the minimal Z' model in FeynRules

    International Nuclear Information System (INIS)

    Basso, L.; Christensen, N.D.; Duhr, C.; Fuks, B.; Speckner, C.

    2012-01-01

    We describe the implementation of a well-known class of U(1) gauge models, the 'minimal' Z' models, in FeynRules. We also describe a new automated validation tool for FeynRules models which is controlled by a web interface and allows the user to run a complete set of 2 → 2 processes on different matrix element generators, different gauges, and compare between them all. If existing, the comparison with independent implementations is also possible. This tool has been used to validate our implementation of the 'minimal' Z' models. (authors)

  1. Matrix Information Geometry

    CERN Document Server

    Bhatia, Rajendra

    2013-01-01

    This book is an outcome of the Indo-French Workshop on Matrix Information Geometries (MIG): Applications in Sensor and Cognitive Systems Engineering, which was held at Ecole Polytechnique and the Thales Research and Technology Center, Palaiseau, France, on February 23-25, 2011. The workshop was generously funded by the Indo-French Centre for the Promotion of Advanced Research (IFCPAR). During the event, 22 renowned invited French and Indian speakers gave lectures on their areas of expertise within the field of matrix analysis or processing. From these talks, a total of 17 original contributions or state-of-the-art chapters have been assembled in this volume. All articles were thoroughly peer-reviewed and improved, according to the suggestions of the international referees. The 17 contributions presented are organized in three parts: (1) State-of-the-art surveys & original matrix theory work, (2) Advanced matrix theory for radar processing, and (3) Matrix-based signal processing applications.

  2. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.

    Directory of Open Access Journals (Sweden)

    Guangwei Gao

    Full Text Available In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.

  3. Discriminant validity of well-being measures.

    Science.gov (United States)

    Lucas, R E; Diener, E; Suh, E

    1996-09-01

    The convergent and discriminant validities of well-being concepts were examined using multitrait-multimethod matrix analyses (D. T. Campbell & D. W. Fiske, 1959) on 3 sets of data. In Study 1, participants completed measures of life satisfaction, positive affect, negative affect, self-esteem, and optimism on 2 occasions 4 weeks apart and also obtained 3 informant ratings. In Study 2, participants completed each of the 5 measures on 2 occasions 2 years apart and collected informant reports at Time 2. In Study 3, participants completed 2 different scales for each of the 5 constructs. Analyses showed that (a) life satisfaction is discriminable from positive and negative affect, (b) positive affect is discriminable from negative affect, (c) life satisfaction is discriminable from optimism and self-esteem, and (d) optimism is separable from trait measures of negative affect.

  4. 2016 MATRIX annals

    CERN Document Server

    Praeger, Cheryl; Tao, Terence

    2018-01-01

    MATRIX is Australia’s international, residential mathematical research institute. It facilitates new collaborations and mathematical advances through intensive residential research programs, each lasting 1-4 weeks. This book is a scientific record of the five programs held at MATRIX in its first year, 2016: Higher Structures in Geometry and Physics (Chapters 1-5 and 18-21); Winter of Disconnectedness (Chapter 6 and 22-26); Approximation and Optimisation (Chapters 7-8); Refining C*-Algebraic Invariants for Dynamics using KK-theory (Chapters 9-13); Interactions between Topological Recursion, Modularity, Quantum Invariants and Low-dimensional Topology (Chapters 14-17 and 27). The MATRIX Scientific Committee selected these programs based on their scientific excellence and the participation rate of high-profile international participants. Each program included ample unstructured time to encourage collaborative research; some of the longer programs also included an embedded conference or lecture series. The artic...

  5. Matrix interdiction problem

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Feng [Los Alamos National Laboratory; Kasiviswanathan, Shiva [Los Alamos National Laboratory

    2010-01-01

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove k columns such that the sum over all rows of the maximum entry in each row is minimized. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to prioritize border checkpoints in order to minimize the probability that an adversary can successfully cross the border. After introducing the matrix interdiction problem, we prove that it is NP-hard, and even NP-hard to approximate to within an additive n^γ factor for a fixed constant γ. We also present an algorithm for this problem that achieves an (n-k) multiplicative approximation ratio.
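    The objective described in this abstract can be illustrated with a small brute-force sketch. This is illustrative only: the function names and the example matrix are invented here, and exhaustive search is exponential in k, unlike the paper's approximation algorithm.

```python
from itertools import combinations

def interdiction_value(matrix, removed_cols):
    """Sum over rows of the maximum entry among the surviving columns."""
    keep = [j for j in range(len(matrix[0])) if j not in removed_cols]
    return sum(max(row[j] for j in keep) for row in matrix)

def matrix_interdiction_brute_force(matrix, k):
    """Exhaustively try every set of k columns to remove (exponential in k)."""
    n_cols = len(matrix[0])
    best = min(combinations(range(n_cols), k),
               key=lambda cols: interdiction_value(matrix, cols))
    return set(best), interdiction_value(matrix, best)

# Removing column 0 leaves row maxima 2 and 4, for a total of 6,
# which beats removing column 1 or column 2 (both give 8).
cols, value = matrix_interdiction_brute_force([[3, 1, 2], [5, 4, 0]], 1)
```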

  6. MATLAB matrix algebra

    CERN Document Server

    Pérez López, César

    2014-01-01

    MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. MATLAB Matrix Algebra introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. Starting with a look at symbolic and numeric variables, with an emphasis on vector and matrix variables, you will go on to examine functions and operations that support vectors and matrices as arguments, including those based on analytic parent functions. Computational methods for finding eigenvalues and eigenvectors of matrices are detailed, leading to various matrix decompositions. Applications such as change of bases, the classification of quadratic forms and ...

  7. Development of a clinical diagnostic matrix for characterizing inherited epidermolysis bullosa.

    Science.gov (United States)

    Yenamandra, V K; Moss, C; Sreenivas, V; Khan, M; Sivasubbu, S; Sharma, V K; Sethuraman, G

    2017-06-01

    Accurately diagnosing the subtype of epidermolysis bullosa (EB) is critical for management and genetic counselling. Modern laboratory techniques are largely inaccessible in developing countries, where the diagnosis remains clinical and often inaccurate. To develop a simple clinical diagnostic tool to aid in the diagnosis and subtyping of EB. We developed a matrix indicating presence or absence of a set of distinctive clinical features (as rows) for the nine most prevalent EB subtypes (as columns). To test an individual patient, presence or absence of these features was compared with the findings expected in each of the nine subtypes to see which corresponded best. If two or more diagnoses scored equally, the diagnosis with the greatest number of specific features was selected. The matrix was tested using findings from 74 genetically characterized patients with EB aged > 6 months by an investigator blinded to molecular diagnosis. For concordance, matrix diagnoses were compared with molecular diagnoses. Overall, concordance between the matrix and molecular diagnoses for the four major types of EB was 91·9%, with a kappa coefficient of 0·88 [95% confidence interval (CI) 0·81-0·95; P < 0·001]. The matrix achieved a 75·7% agreement in classifying EB into its nine subtypes, with a kappa coefficient of 0·73 (95% CI 0·69-0·77; P < 0·001). The matrix appears to be simple, valid and useful in predicting the type and subtype of EB. An electronic version will facilitate further testing. © 2016 British Association of Dermatologists.
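    The matching procedure the abstract describes can be sketched as a simple agreement count between a patient's observed features and each subtype's expected profile. This is a hypothetical simplification: the feature names and profiles below are invented, and the paper's tie-breaking rule (preferring the diagnosis with more specific features) is omitted.

```python
def best_subtype(patient, matrix):
    """Return the subtype whose expected feature profile agrees most
    with the patient's observed features."""
    def agreement(profile):
        return sum(patient[f] == expected for f, expected in profile.items())
    return max(matrix, key=lambda subtype: agreement(matrix[subtype]))

# Invented profiles for illustration, not the paper's actual matrix.
matrix = {
    "EBS": {"blistering": True, "scarring": False, "nail_dystrophy": False},
    "DEB": {"blistering": True, "scarring": True, "nail_dystrophy": True},
}
patient = {"blistering": True, "scarring": True, "nail_dystrophy": True}
diagnosis = best_subtype(patient, matrix)
```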

  8. Chern-Simons couplings for dielectric F-strings in matrix string theory

    International Nuclear Information System (INIS)

    Brecher, Dominic; Janssen, Bert; Lozano, Yolanda

    2002-01-01

    We compute the non-abelian couplings in the Chern-Simons action for a set of coinciding fundamental strings in both the type IIA and type IIB Matrix string theories. Starting from Matrix theory in a weakly curved background, we construct the linear couplings of closed string fields to type IIA Matrix strings. Further dualities give a type IIB Matrix string theory and a type IIA theory of Matrix strings with winding. (Abstract Copyright[2002], Wiley Periodicals, Inc.)

  9. Elementary matrix algebra

    CERN Document Server

    Hohn, Franz E

    2012-01-01

    This complete and coherent exposition, complemented by numerous illustrative examples, offers readers a text that can teach by itself. Fully rigorous in its treatment, it offers a mathematically sound sequencing of topics. The work starts with the most basic laws of matrix algebra and progresses to the sweep-out process for obtaining the complete solution of any given system of linear equations - homogeneous or nonhomogeneous - and the role of matrix algebra in the presentation of useful geometric ideas, techniques, and terminology.Other subjects include the complete treatment of the structur

  10. Complex matrix model duality

    International Nuclear Information System (INIS)

    Brown, T.W.

    2010-11-01

    The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super- Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich- Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)

  11. Complex matrix model duality

    Energy Technology Data Exchange (ETDEWEB)

    Brown, T.W.

    2010-11-15

    The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super- Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich- Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)

  12. Rigidity percolation in dispersions with a structured viscoelastic matrix

    NARCIS (Netherlands)

    Wilbrink, M.W.L.; Michels, M.A.J.; Vellinga, W.P.; Meijer, H.E.H.

    2005-01-01

    This paper deals with rigidity percolation in composite materials consisting of a dispersion of mineral particles in a microstructured viscoelastic matrix. The viscoelastic matrix in this specific case is a hydrocarbon refinery residue. In a set of model random composites the mean interparticle

  13. Three Interpretations of the Matrix Equation Ax = b

    Science.gov (United States)

    Larson, Christine; Zandieh, Michelle

    2013-01-01

    Many of the central ideas in an introductory undergraduate linear algebra course are closely tied to a set of interpretations of the matrix equation Ax = b (A is a matrix, x and b are vectors): linear combination interpretations, systems interpretations, and transformation interpretations. We consider graphic and symbolic representations for each,…
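    The three interpretations can be made concrete on one small system. A minimal numpy sketch with an arbitrary 2x2 example of my own choosing:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 11.0])

# Systems interpretation: solve the simultaneous equations Ax = b.
x = np.linalg.solve(A, b)            # x == [1.0, 2.0]

# Linear-combination interpretation: b is x[0]*(column 0) + x[1]*(column 1).
combo = x[0] * A[:, 0] + x[1] * A[:, 1]

# Transformation interpretation: the map v -> A @ v carries x onto b.
image = A @ x

assert np.allclose(combo, b) and np.allclose(image, b)
```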

  14. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model

  15. The algebras of large N matrix mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Halpern, M.B.; Schwartz, C.

    1999-09-16

    Extending early work, we formulate the large N matrix mechanics of general bosonic, fermionic and supersymmetric matrix models, including Matrix theory: The Hamiltonian framework of large N matrix mechanics provides a natural setting in which to study the algebras of the large N limit, including (reduced) Lie algebras, (reduced) supersymmetry algebras and free algebras. We find in particular a broad array of new free algebras which we call symmetric Cuntz algebras, interacting symmetric Cuntz algebras, symmetric Bose/Fermi/Cuntz algebras and symmetric Cuntz superalgebras, and we discuss the role of these algebras in solving the large N theory. Most important, the interacting Cuntz algebras are associated to a set of new (hidden!) local quantities which are generically conserved only at large N. A number of other new large N phenomena are also observed, including the intrinsic nonlocality of the (reduced) trace class operators of the theory and a closely related large N field identification phenomenon which is associated to another set (this time nonlocal) of new conserved quantities at large N.

  16. Intermediate coupling collision strengths from LS coupled R-matrix elements

    International Nuclear Information System (INIS)

    Clark, R.E.H.

    1978-01-01

    Fine structure collision strengths for transitions between two groups of states in intermediate coupling, with inclusion of configuration mixing, are obtained from LS coupled reactance matrix elements (R-matrix elements) and a set of mixing coefficients. The LS coupled R-matrix elements are transformed to pair coupling using Wigner 6-j coefficients. From these pair coupled R-matrix elements, together with a set of mixing coefficients, R-matrix elements are obtained which include the intermediate coupling and configuration mixing effects. Finally, from the latter R-matrix elements, collision strengths for fine structure transitions are computed (with inclusion of both intermediate coupling and configuration mixing). (Auth.)

  17. Ethical Matrix Manual

    NARCIS (Netherlands)

    Mepham, B.; Kaiser, M.; Thorstensen, E.; Tomkins, S.; Millar, K.

    2006-01-01

    The ethical matrix is a conceptual tool designed to help decision-makers (as individuals or working in groups) reach sound judgements or decisions about the ethical acceptability and/or optimal regulatory controls for existing or prospective technologies in the field of food and agriculture.

  18. Combinatorial matrix theory

    CERN Document Server

    Mitjana, Margarida

    2018-01-01

    This book contains the notes of the lectures delivered at an Advanced Course on Combinatorial Matrix Theory held at Centre de Recerca Matemàtica (CRM) in Barcelona. These notes correspond to five series of lectures. The first series is dedicated to the study of several matrix classes defined combinatorially, and was delivered by Richard A. Brualdi. The second one, given by Pauline van den Driessche, is concerned with the study of spectral properties of matrices with a given sign pattern. Dragan Stevanović delivered the third one, devoted to describing the spectral radius of a graph as a tool to provide bounds of parameters related with properties of a graph. The fourth lecture was delivered by Stephen Kirkland and is dedicated to the applications of the Group Inverse of the Laplacian matrix. The last one, given by Ángeles Carmona, focuses on boundary value problems on finite networks, with special in-depth treatment of the M-matrix inverse problem.

  19. Visualizing Matrix Multiplication

    Science.gov (United States)

    Daugulis, Peteris; Sondore, Anita

    2018-01-01

    Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…
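    The block-matrix identity behind such visualizations is that, for conformable partitions, each block of AB is a sum of products of blocks: (AB)_{IJ} = Σ_K A_{IK} B_{KJ}. A minimal numpy check, with shapes chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 2))

# Partition A into four (2, 3) blocks and B into two (3, 2) blocks.
A11, A12 = A[:2, :3], A[:2, 3:]
A21, A22 = A[2:, :3], A[2:, 3:]
B1, B2 = B[:3, :], B[3:, :]

# Blockwise product: each block of AB is a sum of block products.
top = A11 @ B1 + A12 @ B2
bottom = A21 @ B1 + A22 @ B2
blockwise = np.vstack([top, bottom])

assert np.allclose(blockwise, A @ B)
```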

  20. Challenging the CSCW matrix

    DEFF Research Database (Denmark)

    Jørnø, Rasmus Leth Vergmann; Gynther, Karsten; Christensen, Ove

    2014-01-01

    useful information, we question whether the axis of time and space comprising the matrix pertains to relevant defining properties of the tools, technology or learning environments to which they are applied. Subsequently we offer an example of an Adobe Connect e-learning session as an illustration...

  1. Trust, but Verify: Standard Setting That Honors and Validates Professional Teacher Judgment. Subtitle: A Tenuous Titanic Tale of Testy Testing and Titillating Touchstones (A Screen Play with an Unknown Number of Acts).

    Science.gov (United States)

    Matter, M. Kevin

    The Cherry Creek School district (Englewood, Colorado) is a growing district of 37,000 students in the Denver area. The 1988 Colorado State School Finance Act required district-set proficiencies (standards), and forced agreement on a set of values for student knowledge and skills. State-adopted standards added additional requirements for the…

  2. Petz recovery versus matrix reconstruction

    Science.gov (United States)

    Holzäpfel, Milan; Cramer, Marcus; Datta, Nilanjana; Plenio, Martin B.

    2018-04-01

    The reconstruction of the state of a multipartite quantum mechanical system represents a fundamental task in quantum information science. At its most basic, it concerns a state of a bipartite quantum system whose subsystems are subjected to local operations. We compare two different methods for obtaining the original state from the state resulting from the action of these operations. The first method involves quantum operations called Petz recovery maps, acting locally on the two subsystems. The second method is called matrix (or state) reconstruction and involves local, linear maps that are not necessarily completely positive. Moreover, we compare the quantities on which the maps employed in the two methods depend. We show that any state that admits Petz recovery also admits state reconstruction. However, the latter is successful for a strictly larger set of states. We also compare these methods in the context of a finite spin chain. Here, the state of a finite spin chain is reconstructed from the reduced states of a few neighbouring spins. In this setting, state reconstruction is the same as the matrix product operator reconstruction proposed by Baumgratz et al. [Phys. Rev. Lett. 111, 020401 (2013)]. Finally, we generalize both these methods so that they employ long-range measurements instead of relying solely on short-range correlations embodied in such local reduced states. Long-range measurements enable the reconstruction of states which cannot be reconstructed from measurements of local few-body observables alone, thereby improving existing methods for quantum state tomography of quantum many-body systems.

  3. R-matrix analysis code (RAC)

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Qi Huiquan

    1990-01-01

    A comprehensive R-matrix analysis code has been developed. It is based on multichannel and multilevel R-matrix theory and runs on a VAX computer with FORTRAN-77. With this code many kinds of experimental data for one nuclear system can be fitted simultaneously. Comparisons between the code RAC and the code EDA of LANL have been made. The data show that both codes produce the same calculation results when one set of R-matrix parameters is used. The differential cross section of ¹⁰B(n,α)⁷Li for Eₙ = 0.4 MeV and the polarization of ¹⁶O(n,n)¹⁶O for Eₙ = 2.56 MeV are presented.

  4. Prediction of MHC class II binding affinity using SMM-align, a novel stabilization matrix alignment method

    DEFF Research Database (Denmark)

    Nielsen, Morten; Lundegaard, Claus; Lund, Ole

    2007-01-01

    the correct alignment of a peptide in the binding groove a crucial part of identifying the core of an MHC class II binding motif. Here, we present a novel stabilization matrix alignment method, SMM-align, that allows for direct prediction of peptide:MHC binding affinities. The predictive performance of the method is validated on a large MHC class II benchmark data set covering 14 HLA-DR (human MHC) and three mouse H2-IA alleles. RESULTS: The predictive performance of the SMM-align method was demonstrated to be superior to that of the Gibbs sampler, TEPITOPE, SVRMHC, and MHCpred methods. Cross validation between peptide data sets obtained from different sources demonstrated that direct incorporation of peptide length potentially results in over-fitting of the binding prediction method. Focusing on amino terminal peptide flanking residues (PFR), we demonstrate a consistent gain in predictive performance...

  5. FACTAR validation

    International Nuclear Information System (INIS)

    Middleton, P.B.; Wadsworth, S.L.; Rock, R.C.; Sills, H.E.; Langman, V.J.

    1995-01-01

    A detailed strategy to validate fuel channel thermal mechanical behaviour codes for use of current power reactor safety analysis is presented. The strategy is derived from a validation process that has been recently adopted industry wide. Focus of the discussion is on the validation plan for a code, FACTAR, for application in assessing fuel channel integrity safety concerns during a large break loss of coolant accident (LOCA). (author)

  6. Experimental validation of lead cross sections for scale and MCNP

    International Nuclear Information System (INIS)

    Henrikson, D.J.

    1995-01-01

    Moving spent nuclear fuel between facilities often requires the use of lead-shielded casks. Criticality safety that is based upon calculations requires experimental validation of the fuel matrix and lead cross section libraries. A series of critical experiments using a high-enriched uranium-aluminum fuel element with a variety of reflectors, including lead, has been identified. Twenty-one configurations were evaluated in this study. The fuel element was modelled for KENO V.a and MCNP 4a using various cross section sets. The experiments addressed in this report can be used to validate lead-reflected calculations. Factors influencing the calculated k_eff which require further study include the diameters of styrofoam inserts and homogenization.

  7. A J matrix engine for density functional theory calculations

    International Nuclear Information System (INIS)

    White, C.A.; Head-Gordon, M.

    1996-01-01

    We introduce a new method for the formation of the J matrix (Coulomb interaction matrix) within a basis of Cartesian Gaussian functions, as needed in density functional theory and Hartree–Fock calculations. By summing the density matrix into the underlying Gaussian integral formulas, we have developed a J matrix "engine" which forms the exact J matrix without explicitly forming the full set of two-electron integral intermediates. Several precomputable quantities have been identified, substantially reducing the number of floating point operations and memory accesses needed in a J matrix calculation. Initial timings indicate a speedup of greater than four times for the (pp∥pp) class of integrals, with speedups increasing to over ten times for (ff∥ff) integrals. copyright 1996 American Institute of Physics

  8. Development of natural matrix reference materials for monitoring environmental radioactivity

    International Nuclear Information System (INIS)

    Holmes, A.S.; Houlgate, P.R.; Pang, S.; Brookman, B.

    1992-01-01

    The Department of the Environment commissioned the Laboratory of the Government Chemist to carry out a contract on natural matrix reference materials. A survey of current availability of such materials in the western world, along with the UK's need, was conducted. Four suitable matrices were identified for production and validation. Due to a number of unforeseen problems with the collection, processing and validation of the materials, the production of the four identified reference materials was not completed in the allocated period of time. In the future production of natural matrix reference materials the time required, the cost and the problems encountered should not be underestimated. Certified natural matrix reference materials are a vital part of traceability in analytical science and without them there is no absolute method of checking the validity of measurement in the field of radiochemical analysis. (author)

  9. Paths correlation matrix.

    Science.gov (United States)

    Qian, Weixian; Zhou, Xiaojun; Lu, Yingcheng; Xu, Jiang

    2015-09-15

    Both the Jones and Mueller matrices encounter difficulties when physically modeling mixed materials or rough surfaces due to the complexity of light-matter interactions. To address these issues, we derived a matrix called the paths correlation matrix (PCM), which is a probabilistic mixture of Jones matrices of every light propagation path. Because PCM is related to actual light propagation paths, it is well suited for physical modeling. Experiments were performed, and the reflection PCM of a mixture of polypropylene and graphite was measured. The PCM of the mixed sample was accurately decomposed into pure polypropylene's single reflection, pure graphite's single reflection, and depolarization caused by multiple reflections, which is consistent with the theoretical derivation. Reflection parameters of rough surface can be calculated from PCM decomposition, and the results fit well with the theoretical calculations provided by the Fresnel equations. These theoretical and experimental analyses verify that PCM is an efficient way to physically model light-matter interactions.

  10. Partially separable t matrix

    International Nuclear Information System (INIS)

    Sasakawa, T.; Okuno, H.; Ishikawa, S.; Sawada, T.

    1982-01-01

    The off-shell t matrix is expressed as a sum of one nonseparable and one separable term so that it is useful for applications to more-than-two-body problems. All poles are involved in this one separable term. Both the nonseparable and the separable terms of the kernel G₀t are regular at the origin. The nonseparable term of this kernel vanishes at large distances, while the separable term behaves asymptotically as the spherical Hankel function. These properties make our expression free from defects inherent in the Jost or the K-matrix expressions, and many applications are anticipated. As an application, a compact expression of the many-level formula is presented. An application to the breakup three-body problem based on the Faddeev equation is also suggested. It is demonstrated that the breakup amplitude is expressed in a simple and physically interesting form and we can calculate it in coordinate space.

  11. Dynamical basis set

    International Nuclear Information System (INIS)

    Blanco, M.; Heller, E.J.

    1985-01-01

    A new Cartesian basis set is defined that is suitable for the representation of molecular vibration-rotation bound states. The Cartesian basis functions are superpositions of semiclassical states generated through the use of classical trajectories that conform to the intrinsic dynamics of the molecule. Although semiclassical input is employed, the method becomes ab initio through the standard matrix diagonalization variational method. Special attention is given to classical-quantum correspondences for angular momentum. In particular, it is shown that the use of semiclassical information preferentially leads to angular momentum eigenstates with magnetic quantum number |M| equal to the total angular momentum J. The present method offers a reliable technique for representing highly excited vibrational-rotational states where perturbation techniques are no longer applicable

  12. Transfer matrix representation for periodic planar media

    Science.gov (United States)

    Parrinello, A.; Ghiringhelli, G. L.

    2016-06-01

    Sound transmission through infinite planar media characterized by in-plane periodicity is addressed by exploiting free wave propagation on the related unit cells. An appropriate through-thickness transfer matrix, relating a proper set of variables describing the acoustic field at the two external surfaces of the medium, is derived by manipulating the dynamic stiffness matrix of a finite element model of the unit cell. The adoption of finite element models avoids analytical modeling and simplifications of geometry or materials. The obtained matrix is then used in a transfer matrix method context, making it possible to combine the periodic medium with layers of a different nature and to treat both hard-wall and semi-infinite fluid termination conditions. A finite sequence of identical sub-layers through the thickness of the medium can be handled within the transfer matrix method, significantly decreasing the computational burden. Transfer matrices obtained by means of the proposed method are compared with analytical or equivalent models in terms of sound transmission through barriers of different nature.
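    The core operation of any transfer matrix method is that a stack of layers multiplies to a single matrix, and a sequence of N identical sub-layers collapses to a matrix power, which is the computational saving the abstract mentions. A generic sketch; the 2x2 matrix values are placeholders, not an acoustic model:

```python
import numpy as np

def stack_matrix(layers):
    """Transfer matrix of a layered medium = ordered product of layer matrices."""
    T = np.eye(2)
    for layer in layers:
        T = T @ layer
    return T

# Hypothetical transfer matrix of a single sub-layer.
T_layer = np.array([[1.0, 0.2],
                    [0.1, 1.0]])

# N identical sub-layers: multiply N times, or take a matrix power directly.
N = 8
direct = stack_matrix([T_layer] * N)
power = np.linalg.matrix_power(T_layer, N)
assert np.allclose(direct, power)
```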

  13. Exactly soluble matrix models

    International Nuclear Information System (INIS)

    Raju Viswanathan, R.

    1991-09-01

    We study examples of one dimensional matrix models whose potentials possess an energy spectrum that can be explicitly determined. This allows for an exact solution in the continuum limit. Specifically, step-like potentials and the Morse potential are considered. The step-like potentials show no scaling behaviour and the Morse potential (which corresponds to a γ = -1 model) has the interesting feature that there are no quantum corrections to the scaling behaviour in the continuum limit. (author). 5 refs

  14. Inside the NIKE matrix

    OpenAIRE

    Brenner, Barbara; Schlegelmilch, Bodo B.; Ambos, Björn

    2013-01-01

    This case describes how Nike, a consumer goods company with an ever expanding portfolio and a tremendous brand value, manages the tradeoff between local responsiveness and global integration. In particular, the case highlights Nike's organizational structure that consists of a global matrix organization that is replicated at a regional level for the European market. While this organizational structure allows Nike to respond to local consumer tastes it also ensures that the company benefits f...

  15. A matrix contraction process

    Science.gov (United States)

    Wilkinson, Michael; Grant, John

    2018-03-01

    We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm, ɛ.
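    The reset process this abstract describes is easy to simulate. This is a hedged sketch: the dimension, the scaling of the random matrices, and the reset to 0.5·I are arbitrary choices for illustration, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def contraction_process(steps, reset_norm=0.5, dim=2, scale=0.6):
    """Multiply iid random matrices; whenever the product's spectral norm
    reaches 1, reset the product to reset_norm * I and continue."""
    norms = []
    M = reset_norm * np.eye(dim)
    for _ in range(steps):
        M = M @ (scale * rng.standard_normal((dim, dim)))
        if np.linalg.norm(M, 2) >= 1.0:
            M = reset_norm * np.eye(dim)
        norms.append(np.linalg.norm(M, 2))
    return norms

# By construction, every recorded norm stays below unity.
norms = contraction_process(10_000)
```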

  16. Matrix String Theory

    CERN Document Server

    Dijkgraaf, Robbert; Verlinde, Erik; Verlinde, Herman

    1997-01-01

    Via compactification on a circle, the matrix model of M-theory proposed by Banks et al. suggests a concrete identification between the large N limit of two-dimensional N=8 supersymmetric Yang-Mills theory and type IIA string theory. In this paper we collect evidence that supports this identification. We explicitly identify the perturbative string states and their interactions, and describe the appearance of D-particle and D-membrane states.

  17. Matrix groups for undergraduates

    CERN Document Server

    Tapp, Kristopher

    2016-01-01

    Matrix groups touch an enormous spectrum of the mathematical arena. This textbook brings them into the undergraduate curriculum. It makes an excellent one-semester course for students familiar with linear and abstract algebra and prepares them for a graduate course on Lie groups. Matrix Groups for Undergraduates is concrete and example-driven, with geometric motivation and rigorous proofs. The story begins and ends with the rotations of a globe. In between, the author combines rigor and intuition to describe the basic objects of Lie theory: Lie algebras, matrix exponentiation, Lie brackets, maximal tori, homogeneous spaces, and roots. This second edition includes two new chapters that allow for an easier transition to the general theory of Lie groups. From reviews of the First Edition: This book could be used as an excellent textbook for a one semester course at university and it will prepare students for a graduate course on Lie groups, Lie algebras, etc. … The book combines an intuitive style of writing w...

  18. Extracellular matrix structure.

    Science.gov (United States)

    Theocharis, Achilleas D; Skandalis, Spyros S; Gialeli, Chrysostomi; Karamanos, Nikos K

    2016-02-01

    Extracellular matrix (ECM) is a non-cellular three-dimensional macromolecular network composed of collagens, proteoglycans/glycosaminoglycans, elastin, fibronectin, laminins, and several other glycoproteins. Matrix components bind each other as well as cell adhesion receptors, forming a complex network within which cells reside in all tissues and organs. Cell surface receptors transduce signals into cells from ECM, which regulate diverse cellular functions, such as survival, growth, migration, and differentiation, and are vital for maintaining normal homeostasis. ECM is a highly dynamic structural network that continuously undergoes remodeling mediated by several matrix-degrading enzymes during normal and pathological conditions. Deregulation of ECM composition and structure is associated with the development and progression of several pathologic conditions. This article emphasizes the complex ECM structure so as to provide a better understanding of its dynamic structural and functional multipotency. Where relevant, the implication of the various families of ECM macromolecules in health and disease is also presented. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Cobalt magnetic nanoparticles embedded in carbon matrix: biofunctional validation

    Energy Technology Data Exchange (ETDEWEB)

    Krolow, Matheus Z., E-mail: matheuskrolow@ifsul.edu.br [Universidade Federal de Pelotas, Engenharia de Materiais, Centro de Desenvolvimento Tecnologico (Brazil); Monte, Leonardo G.; Remiao, Mariana H.; Hartleben, Claudia P.; Moreira, Angela N.; Dellagostin, Odir A. [Universidade Federal de Pelotas, Nucleo de Biotecnologia, Centro de Desenvolvimento Tecnologico (Brazil); Piva, Evandro [Universidade Federal de Pelotas, Faculdade de Odontologia (Brazil); Conceicao, Fabricio R. [Universidade Federal de Pelotas, Nucleo de Biotecnologia, Centro de Desenvolvimento Tecnologico (Brazil); Carreno, Neftali L. V. [Universidade Federal de Pelotas, Engenharia de Materiais, Centro de Desenvolvimento Tecnologico (Brazil)

    2012-09-15

    Carbon nanostructures and nanocomposites display versatile allotropic morphologies and physico-chemical properties and have a wide range of applications in mechanics, electronics, biotechnology, structural materials, chemical processing, and energy management. In this study we report the synthesis, characterization, and biotechnological application of cobalt magnetic nanoparticles, with diameters of approximately 15-40 nm, embedded in a carbon structure (Co/C-MN). A single-step chemical process was used in the synthesis of the Co/C-MN. The Co/C-MN presented superparamagnetic behavior at room temperature, an essential property for the immunoseparation assays carried out here. To stimulate interactions between proteins and Co/C-MN, this nanocomposite was functionalized with acrylic acid (AA). We have shown the bonding of different proteins onto the Co/C-AA surface using an immunofluorescence assay. Co/C-AA coated with a monoclonal antibody against pathogenic Leptospira spp. was able to capture leptospires, suggesting that it could be useful in immunoseparation assays.

  20. Cobalt magnetic nanoparticles embedded in carbon matrix: biofunctional validation

    International Nuclear Information System (INIS)

    Krolow, Matheus Z.; Monte, Leonardo G.; Remião, Mariana H.; Hartleben, Cláudia P.; Moreira, Ângela N.; Dellagostin, Odir A.; Piva, Evandro; Conceição, Fabricio R.; Carreño, Neftalí L. V.

    2012-01-01

    Carbon nanostructures and nanocomposites display versatile allotropic morphologies and physico-chemical properties, and have a wide range of applications in mechanics, electronics, biotechnology, structural materials, chemical processing, and energy management. In this study we report the synthesis, characterization, and biotechnological application of cobalt magnetic nanoparticles, approximately 15–40 nm in diameter, embedded in a carbon structure (Co/C-MN). A single-step chemical process was used in the synthesis of the Co/C-MN. The Co/C-MN presented superparamagnetic behavior at room temperature, an essential property for the immunoseparation assays carried out here. To stimulate interactions between proteins and Co/C-MN, this nanocomposite was functionalized with acrylic acid (AA). We have shown the binding of different proteins onto the Co/C-AA surface using an immunofluorescence assay. Co/C-AA coated with a monoclonal antibody against pathogenic Leptospira spp. was able to capture leptospires, suggesting that it could be useful in immunoseparation assays.

  1. POLLA-NESC, Resonance Parameter R-Matrix to S-Matrix Conversion by Reich-Moore Method

    International Nuclear Information System (INIS)

    Saussure, G. de; Perez, R.B.

    1975-01-01

    1 - Description of problem or function: The program transforms a set of r-matrix nuclear resonance parameters into a set of equivalent s-matrix (or Kapur-Peierls) resonance parameters. 2 - Method of solution: The program utilizes the multilevel formalism of Reich and Moore and avoids diagonalization of the level matrix. The parameters are obtained by a direct partial fraction expansion of the Reich-Moore expression of the collision matrix. This approach appears simpler and faster when the number of fission channels is known and small. The method is particularly useful when a large number of levels must be considered because it does not require diagonalization of a large level matrix. 3 - Restrictions on the complexity of the problem: By DIMENSION statements, the program is limited to maxima of 100 levels and 5 channels
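
    The equivalence between the two parameter sets can be illustrated numerically. The sketch below uses the diagonalization route that POLLA-NESC deliberately avoids (practical only for very small cases): the Kapur-Peierls poles are taken as eigenvalues of the complex level matrix built from level energies and channel width amplitudes. All numbers are invented for the demonstration.

```python
import numpy as np

# Hypothetical two-level, one-channel example: the Kapur-Peierls (s-matrix)
# poles E_k - i*Gamma_k/2 appear as the eigenvalues of the complex symmetric
# level matrix E0 - (i/2) * g g^T, where E0 holds the r-matrix level energies
# and g the channel width amplitudes (arbitrary units throughout).
E0 = np.diag([1.0, 3.0])          # level energies
g = np.array([[0.4], [0.6]])      # reduced-width amplitudes, one channel
A = E0 - 0.5j * (g @ g.T)         # complex level matrix
poles = np.linalg.eigvals(A)
# Each pole has a negative imaginary part, corresponding to a decaying level.
print(poles)
```

    POLLA-NESC instead obtains these poles by a direct partial-fraction expansion of the Reich-Moore collision matrix, which is why it scales to hundreds of levels where explicit diagonalization becomes costly.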

  2. Time delay correlations in chaotic scattering and random matrix approach

    International Nuclear Information System (INIS)

    Lehmann, N.; Savin, D.V.; Sokolov, V.V.; Sommers, H.J.

    1994-01-01

    We study the correlations in the time delay in a model of chaotic resonance scattering based on the random matrix approach. Analytical formulae, valid for an arbitrary number of open channels and arbitrary coupling strength between resonances and channels, are obtained by the supersymmetry method. The time delay correlation function, though not a Lorentzian, is characterized, similarly to that of the scattering matrix, by the gap between the cloud of complex poles of the S-matrix and the real energy axis. 28 refs.; 4 figs

  3. Matrix metalloproteinase activity assays: Importance of zymography.

    Science.gov (United States)

    Kupai, K; Szucs, G; Cseh, S; Hajdu, I; Csonka, C; Csont, T; Ferdinandy, P

    2010-01-01

    Matrix metalloproteinases (MMPs) are zinc-dependent endopeptidases capable of degrading extracellular matrix, including the basement membrane. MMPs are associated with various physiological processes such as morphogenesis, angiogenesis, and tissue repair. Moreover, due to the novel non-matrix related intra- and extracellular targets of MMPs, dysregulation of MMP activity has been implicated in a number of acute and chronic pathological processes, such as arthritis, acute myocardial infarction, chronic heart failure, chronic obstructive pulmonary disease, inflammation, and cancer metastasis. MMPs are considered as viable drug targets in the therapy of the above diseases. For the development of selective MMP inhibitor molecules, reliable methods are necessary for target validation and lead development. Here, we discuss the major methods used for MMP assays, focusing on substrate zymography. We highlight some problems frequently encountered during sample preparations, electrophoresis, and data analysis of zymograms. Zymography is a widely used technique to study extracellular matrix-degrading enzymes, such as MMPs, from tissue extracts, cell cultures, serum or urine. This simple and sensitive technique identifies MMPs by the degradation of their substrate and by their molecular weight and therefore helps to understand the widespread role of MMPs in different pathologies and cellular pathways. Copyright 2010 Elsevier Inc. All rights reserved.

  4. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)
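
    One widely used matrix correlation between two factor-loading matrices is Tucker's congruence coefficient; a minimal sketch with invented loadings (not the paper's data, which derives the standard errors rather than the coefficients themselves):

```python
import numpy as np

# Tucker's congruence coefficient between two factor-loading matrices:
# tr(A'B) / sqrt(tr(A'A) * tr(B'B)).  Equal matrices give exactly 1.0.
def congruence(A, B):
    return np.trace(A.T @ B) / np.sqrt(np.trace(A.T @ A) * np.trace(B.T @ B))

A = np.array([[0.8, 0.1],
              [0.7, 0.2],
              [0.1, 0.9]])
B = A + 0.05 * np.array([[1, -1], [-1, 1], [1, 1]])  # slightly perturbed copy
print(congruence(A, A), congruence(A, B))  # 1.0 and a value just below 1
```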

  5. Extending multivariate distance matrix regression with an effect size measure and the asymptotic null distribution of the test statistic.

    Science.gov (United States)

    McArtor, Daniel B; Lubke, Gitta H; Bergeman, C S

    2017-12-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains.
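
    The permutation test at the heart of MDMR can be sketched as follows. This is a minimal illustration on toy data with a simple trace-ratio statistic, under assumed conventions (Gower-centered distances, row/column permutation of the distance matrix), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mdmr_pvalue(D, X, n_perm=999):
    """Permutation test for multivariate distance matrix regression.

    D: (n, n) distance matrix between response profiles.
    X: (n, p) predictor matrix (an intercept column is added internally).
    Returns a pseudo-F-like statistic and its permutation p-value.
    """
    n = D.shape[0]
    X1 = np.column_stack([np.ones(n), X])
    H = X1 @ np.linalg.pinv(X1)                # hat (projection) matrix
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J                # Gower-centered inner products

    def stat(G):
        return np.trace(H @ G) / np.trace((np.eye(n) - H) @ G)

    observed = stat(G)
    count = 1
    for _ in range(n_perm):
        perm = rng.permutation(n)
        if stat(G[np.ix_(perm, perm)]) >= observed:
            count += 1
    return observed, count / (n_perm + 1)

# Toy data: a predictor that genuinely shifts the response profiles.
x = rng.standard_normal(60)
Y = np.outer(x, [1.0, 0.5, -0.8]) + 0.3 * rng.standard_normal((60, 3))
D = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))  # Euclidean distances
f, p = mdmr_pvalue(D, x[:, None])
print(f, p)
```

    The asymptotic null distribution derived in the paper replaces exactly this permutation loop, which is the computational burden the extension alleviates.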

  6. Modulation and control of matrix converter for aerospace application

    Science.gov (United States)

    Kobravi, Keyhan

    In the context of modern aircraft systems, a major challenge is power conversion to supply the aircraft's electrical instruments. These instruments are energized through a fixed-frequency internal power grid. In an aircraft, the available sources of energy are a set of variable-speed generators which provide variable-frequency ac voltages. Therefore, to energize the internal power grid of an aircraft, the variable-frequency ac voltages must be converted to a fixed-frequency ac voltage. As a result, ac to ac power conversion is required within an aircraft's power system. This thesis develops a Matrix Converter to energize the aircraft's internal power grid. The Matrix Converter provides direct ac to ac power conversion. A major challenge of designing Matrix Converters for aerospace applications is to minimize the volume and weight of the converter. These parameters are minimized by increasing the switching frequency of the converter. To design a Matrix Converter operating at a high switching frequency, this thesis (i) develops a scheme to integrate fast semiconductor switches within the currently available Matrix Converter topologies, i.e., a MOSFET-based Matrix Converter, and (ii) develops a new modulation strategy for the Matrix Converter. This Matrix Converter and the new modulation strategy enable operation of the converter at a switching frequency of 40 kHz. To provide a reliable source of energy, this thesis also develops a new methodology for robust control of the Matrix Converter. To verify the performance of the proposed MOSFET-based Matrix Converter, modulation strategy, and control design methodology, various simulation and experimental results are presented. The experimental results are obtained under operating conditions present in an aircraft. The experimental results verify that the proposed Matrix Converter provides reliable power conversion in an aircraft under extreme operating conditions. The results prove the superiority of the proposed Matrix

  7. Absorption properties of waste matrix materials

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, J.B. [Idaho National Engineering Lab., Idaho Falls, ID (United States)

    1997-06-01

    This paper very briefly discusses the need for studies of the limiting critical concentration of radioactive waste matrix materials. Calculated limiting critical concentration values for some common waste materials are listed. However, for systems containing large quantities of waste materials, differences up to 10% in calculated k_eff values are obtained by changing cross section data sets. Therefore, experimental results are needed to compare with calculation results for resolving these differences and establishing realistic biases.

  8. [A valid quality system for mental health care: from accountability and control in institutionalised settings to co-creation in small areas and a focus on community vital signs].

    Science.gov (United States)

    van Os, J; Delespaul, P H

    BACKGROUND: In a given year, around 25% of the Dutch population may experience significant mental health problems, far more than the mental health service can attend to, given a maximum capacity of 6% of the population per year. Due to the lack of a public mental health system, there is fierce competition over who gets to receive care from mental health services and little control over how the level of needs can be matched with the appropriate intensity of care. As a result, resources are being wasted and both overtreatment and undertreatment are prevalent. AIM: To propose a valid quality system that benefits the mental health of the entire population and does not simply attend to the symptoms of a strategically selected group. METHOD: Literature review from an epidemiological and public mental health perspective. RESULTS: In our view, a valid quality system for mental health care needs to focus on two distinct areas. The first area involves the analysis of about 20 quantitative population parameters or 'Community Vital Signs' (care consumption, pharmaco-epidemiological indicators, mortality, somatic morbidity, social care, housing, work, benefits, involuntary admissions). This analysis will reveal regional variation in the mental health of the entire population rather than in the relatively small, selected group receiving mental health care. The second area to which attention needs to be directed comprises a system of simple qualitative visits to mental health care institutions based on 10 quality parameters that currently remain invisible; these parameters will measure the impact at local community level. The focus of these will be on a transition from accountability and control in large institutions to provision of care in small areas that is co-designed with users and other stakeholders. CONCLUSION: A valid quality system for mental health care is within reach, provided it is combined with a novel system of public mental health and a transition of care to a system of co-creation.

  9. Random matrix theory

    CERN Document Server

    Deift, Percy

    2009-01-01

    This book features a unified derivation of the mathematical theory of the three classical types of invariant random matrix ensembles: orthogonal, unitary, and symplectic. The authors follow the approach of Tracy and Widom, but the exposition here contains a substantial amount of additional material, in particular, facts from functional analysis and the theory of Pfaffians. The main result in the book is a proof of universality for orthogonal and symplectic ensembles corresponding to generalized Gaussian type weights following the authors' prior work. New, quantitative error estimates are derived.

  10. Matrix vector analysis

    CERN Document Server

    Eisenman, Richard L

    2005-01-01

    This outstanding text and reference applies matrix ideas to vector methods, using physical ideas to illustrate and motivate mathematical concepts but employing a mathematical continuity of development rather than a physical approach. The author, who taught at the U.S. Air Force Academy, dispenses with the artificial barrier between vectors and matrices, and more generally, between pure and applied mathematics. Motivated examples introduce each idea, with interpretations in physical, algebraic, and geometric contexts, in addition to generalizations to theorems that reflect the essential structure.

  11. Matrix Encryption Scheme

    Directory of Open Access Journals (Sweden)

    Abdelhakim Chillali

    2017-05-01

    Full Text Available In classical cryptography, the Hill cipher is a polygraphic substitution cipher based on linear algebra. In this work, we propose a new problem applicable to public key cryptography, based on matrices, called the “matrix discrete logarithm problem”; it uses certain elements formed by matrices whose coefficients are elements of a finite field. We construct an abelian group and, for the cryptographic part in this unreliable group, perform the computation corresponding to the algebraic equations, returning the encrypted result to a receiver. Upon receipt of the result, the receiver can retrieve the sender’s clear message by performing the inverse calculation.
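
    As a reminder of the linear-algebra background the scheme builds on, a minimal classical Hill cipher (not the paper's matrix discrete logarithm construction) encrypts digraphs by matrix multiplication modulo 26:

```python
import numpy as np

# Classic 2x2 Hill cipher over Z_26. The key matrix must be invertible mod 26.
KEY = np.array([[3, 3],
                [2, 5]])                  # det = 9, coprime with 26

def modinv_matrix(M, m=26):
    """Inverse of a 2x2 integer matrix modulo m."""
    det = int(round(np.linalg.det(M))) % m
    det_inv = pow(det, -1, m)             # modular inverse of the determinant
    adj = np.array([[M[1, 1], -M[0, 1]],
                    [-M[1, 0], M[0, 0]]])
    return (det_inv * adj) % m

def hill(text, M):
    """Encrypt (or decrypt, given the inverse key) an even-length A-Z string."""
    nums = [ord(c) - ord('A') for c in text]
    out = []
    for i in range(0, len(nums), 2):      # one digraph at a time
        block = np.array(nums[i:i + 2])
        out.extend((M @ block) % 26)
    return ''.join(chr(int(v) + ord('A')) for v in out)

ct = hill('HELP', KEY)
pt = hill(ct, modinv_matrix(KEY))
print(ct, pt)                             # HIAT HELP
```

    Decryption is just encryption with the modular inverse of the key matrix, which is why the key must have a determinant coprime with 26.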

  12. Matrix string partition function

    CERN Document Server

    Kostov, Ivan K; Kostov, Ivan K.; Vanhove, Pierre

    1998-01-01

    We evaluate quasiclassically the Ramond partition function of Euclidean D=10 U(N) super Yang-Mills theory reduced to a two-dimensional torus. The result can be interpreted in terms of free strings wrapping the space-time torus, as expected from the point of view of Matrix string theory. We demonstrate that, when extrapolated to the ultraviolet limit (small area of the torus), the quasiclassical expressions reproduce exactly the recently obtained expression for the partition function of the completely reduced SYM theory, including the overall numerical factor. This is evidence that our quasiclassical calculation might be exact.

  13. Matrix algebra for linear models

    CERN Document Server

    Gruber, Marvin H J

    2013-01-01

    Matrix methods have evolved from a tool for expressing statistical problems to an indispensable part of the development, understanding, and use of various types of complex statistical analyses. This evolution has made matrix methods a vital part of statistical education. Traditionally, matrix methods are taught in courses on everything from regression analysis to stochastic processes, thus creating a fractured view of the topic. Matrix Algebra for Linear Models offers readers a unique, unified view of matrix analysis theory (where and when necessary), methods, and their applications. Written f

  14. Content validity and its estimation

    Directory of Open Access Journals (Sweden)

    Yaghmale F

    2003-04-01

    Full Text Available Background: Measuring the content validity of instruments is important. This type of validity can help to ensure construct validity and give confidence to readers and researchers about instruments. Content validity refers to the degree to which the instrument covers the content that it is supposed to measure. For content validity two judgments are necessary: the measurable extent of each item for defining the traits, and the set of items that represents all aspects of the traits. Purpose: To develop a content-valid scale for assessing experience with computer usage. Methods: First, a review of 2 volumes of the International Journal of Nursing Studies was conducted; only 1 article out of 13 that documented content validity did so by a 4-point content validity index (CVI) and the judgment of 3 experts. Then a scale with 38 items was developed. The experts were asked to rate each item based on relevance, clarity, simplicity and ambiguity on the four-point scale. The Content Validity Index (CVI) for each item was determined. Result: Of the 38 items, those with a CVI over 0.75 remained and the rest were discarded, resulting in a 25-item scale. Conclusion: Although documenting the content validity of an instrument may seem expensive in terms of time and human resources, its importance warrants greater attention when a valid assessment instrument is to be developed. Keywords: Content Validity, Measuring Content Validity
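
    The item-level CVI computation and 0.75 cut-off described above can be sketched as follows; the expert ratings are invented for the illustration:

```python
# Item-level content validity index (I-CVI): the fraction of experts who rate
# an item 3 or 4 on the 4-point relevance scale; items at or below the 0.75
# cut-off used in the study are discarded.
ratings = {            # item -> one rating per expert (hypothetical panel of 4)
    'item_01': [4, 4, 3, 4],
    'item_02': [2, 3, 4, 3],
    'item_03': [1, 2, 2, 3],
}

def i_cvi(item_ratings):
    relevant = sum(1 for r in item_ratings if r >= 3)
    return relevant / len(item_ratings)

retained = [item for item, r in ratings.items() if i_cvi(r) > 0.75]
print({item: i_cvi(r) for item, r in ratings.items()}, retained)
```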

  15. Validity and validation of expert (Q)SAR systems.

    Science.gov (United States)

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal) principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in predictivity of > or = 64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.

  16. Characterization of supercapacitors matrix

    Energy Technology Data Exchange (ETDEWEB)

    Sakka, Monzer Al, E-mail: Monzer.Al.Sakka@vub.ac.b [Vrije Universiteit Brussel, pleinlaan 2, B-1050 Brussels (Belgium); FEMTO-ST Institute, ENISYS Department, FCLAB, UFC-UTBM, bat.F, 90010 Belfort (France); Gualous, Hamid, E-mail: Hamid.Gualous@unicaen.f [Laboratoire LUSAC, Universite de Caen Basse Normandie, Rue Louis Aragon - BP 78, 50130 Cherbourg-Octeville (France); Van Mierlo, Joeri [Vrije Universiteit Brussel, pleinlaan 2, B-1050 Brussels (Belgium)

    2010-10-30

    This paper treats the characterization of a supercapacitor matrix. In order to cut off transient power peaks and to compensate for the intrinsic limitations of embedded sources, the use of supercapacitors as a storage system is quite suitable because of their appropriate electrical characteristics (huge capacitance, small series resistance, high specific energy, high specific power), direct storage (energy ready for use), and easy control by power electronic conversion. This use requires supercapacitor modules in which several cells are connected in series and/or in parallel, so a bypass system to balance the charging or discharging of the supercapacitors is required. In the matrix of supercapacitors, six series-connected elements, each of three parallel BCAP0350 supercapacitors, have been considered. This topology permits reducing the number of bypass circuits, and the matrix can work in degraded mode: it gives the system more reliability by providing power to the load continuously even when one or more cells have failed. Simulation and experimental results are presented and discussed.

  17. Characterization of supercapacitors matrix

    International Nuclear Information System (INIS)

    Sakka, Monzer Al; Gualous, Hamid; Van Mierlo, Joeri

    2010-01-01

    This paper treats the characterization of a supercapacitor matrix. In order to cut off transient power peaks and to compensate for the intrinsic limitations of embedded sources, the use of supercapacitors as a storage system is quite suitable because of their appropriate electrical characteristics (huge capacitance, small series resistance, high specific energy, high specific power), direct storage (energy ready for use), and easy control by power electronic conversion. This use requires supercapacitor modules in which several cells are connected in series and/or in parallel, so a bypass system to balance the charging or discharging of the supercapacitors is required. In the matrix of supercapacitors, six series-connected elements, each of three parallel BCAP0350 supercapacitors, have been considered. This topology permits reducing the number of bypass circuits, and the matrix can work in degraded mode: it gives the system more reliability by providing power to the load continuously even when one or more cells have failed. Simulation and experimental results are presented and discussed.
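
    The equivalent parameters of such a 6-series x 3-parallel matrix follow from elementary series/parallel combination rules. A minimal sketch; the 350 F rating follows from the BCAP0350 part number, while the per-cell resistance is only an assumed placeholder:

```python
# Equivalent capacitance and resistance of a 6-series x 3-parallel matrix.
C_CELL = 350.0       # F, nominal capacitance of one BCAP0350
R_CELL = 0.0032      # ohm, assumed per-cell equivalent series resistance
N_SERIES, N_PARALLEL = 6, 3

C_group = N_PARALLEL * C_CELL            # parallel cells: capacitances add
C_total = C_group / N_SERIES             # series groups: reciprocals add
R_total = (R_CELL / N_PARALLEL) * N_SERIES
print(C_total, R_total)                  # 175.0 F, 0.0064 ohm
```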

  18. Definition of a matrix of the generalized parameters asymmetrical multiphase transmission lines

    Directory of Open Access Journals (Sweden)

    Suslov V.M.

    2005-12-01

    Full Text Available A simple algorithm, without the introduction of wave characteristics, is offered for determining the matrix of generalized parameters of asymmetrical multiphase transmission lines. The determination of the parameter matrix is based on the matrix of primary per-unit-length parameters of the line and a simple iterative procedure. The number of iterations is determined by a set error tolerance on the matrix relation between separate blocks of the determined matrix; this tolerance is closely related to the accuracy of the determined matrix.

  19. Current Concerns in Validity Theory.

    Science.gov (United States)

    Kane, Michael

    Validity is concerned with the clarification and justification of the intended interpretations and uses of observed scores. It has not been easy to formulate a general set of principles for validation, but progress has been made, especially as the field has moved from relatively limited criterion-related models to sophisticated…

  20. KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators

    KAUST Repository

    Abdelfattah, Ahmad

    2016-05-11

    KBLAS is an open-source, high-performance library that provides optimized kernels for a subset of Level 2 BLAS functionalities on CUDA-enabled GPUs. Since the performance of dense matrix-vector multiplication is hindered by the overhead of memory accesses, a double-buffering optimization technique is employed to overlap data motion with computation. After identifying a proper set of tuning parameters, KBLAS runs efficiently on various GPU architectures while avoiding code rewriting and retaining compliance with the standard BLAS API. Another optimization technique ensures coalesced memory access when dealing with submatrices, especially for high-level dense linear algebra algorithms. All KBLAS kernels have been extended to a multi-GPU environment, which requires the introduction of new APIs. For general matrices, KBLAS is very competitive with existing state-of-the-art kernels and provides smoother performance across a wide range of matrix dimensions. For symmetric and Hermitian matrices, KBLAS outperforms existing state-of-the-art implementations on all matrix sizes and achieves asymptotically up to 50% and 60% speedup against the best competitor on single-GPU and multi-GPU systems, respectively. Performance results also validate our performance model. A subset of KBLAS high-performance kernels has been integrated into NVIDIA's standard BLAS implementation (cuBLAS) for larger dissemination, starting from version 6.0. © 2016 ACM.
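
    A rough roofline-style estimate shows why dense matrix-vector multiplication is bandwidth-limited and why overlapping data motion with computation matters; the bandwidth figure below is an illustrative assumption, not a measured KBLAS number:

```python
# Each of the n*n matrix elements is read once and used in exactly one
# multiply-add, so single-precision gemv has an arithmetic intensity of
# roughly 2 flops / 4 bytes = 0.5 flop/byte: memory traffic, not arithmetic,
# sets the performance ceiling.
def gemv_gflops(n, bandwidth_gb_s, bytes_per_elem=4):
    flops = 2.0 * n * n                       # one multiply + one add per element
    bytes_moved = bytes_per_elem * n * n      # matrix traffic dominates
    time_s = bytes_moved / (bandwidth_gb_s * 1e9)
    return flops / time_s / 1e9

print(gemv_gflops(8192, 200.0))   # bandwidth-bound ceiling at 200 GB/s
```

    At an assumed 200 GB/s the model caps single-precision gemv near 100 GFLOP/s regardless of the GPU's peak arithmetic rate, which is why kernels compete on memory-access efficiency (coalescing, double buffering) rather than raw flops.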

  1. The impact of episodic nonequilibrium fracture-matrix flow on geological repository performance

    International Nuclear Information System (INIS)

    Buscheck, T.A.; Nitao, J.J.; Chestnut, D.A.

    1991-01-01

    Adequate representation of fracture-matrix interaction during episodic infiltration events is crucial in making valid hydrological predictions of repository performance at Yucca Mountain. Various approximations have been applied to represent fracture-matrix flow interaction, including the Equivalent Continuum Model (ECM), which assumes capillary equilibrium between fractures and matrix, and the Fracture-Matrix Model (FMM), which accounts for nonequilibrium fracture-matrix flow. We analyze the relative impact of matrix imbibition on episodic nonequilibrium fracture-matrix flow for the eight major hydrostratigraphic units in the unsaturated zone at Yucca Mountain. Comparisons are made between ECM and FMM predictions to determine the applicability of the ECM. The implications of nonequilibrium fracture-matrix flow on radionuclide transport are also discussed

  2. 1024 matrix image reconstruction: usefulness in high resolution chest CT

    International Nuclear Information System (INIS)

    Jeong, Sun Young; Chung, Myung Jin; Chong, Se Min; Sung, Yon Mi; Lee, Kyung Soo

    2006-01-01

    We tried to evaluate whether high-resolution chest CT with a 1024 matrix has a significant advantage in image quality over a 512 matrix. Sets of 512- and 1024-matrix high-resolution chest CT scans with both 0.625 mm and 1.25 mm slice thickness were obtained from 26 patients. Seventy locations were selected that contained twenty-four low-density lesions without sharp boundaries, such as emphysema, and forty-six sharp linear densities, such as linear fibrosis; these were randomly displayed on a five-megapixel LCD monitor. All the images were masked for information concerning the matrix size and slice thickness. Two chest radiologists scored the image quality of each arrowed lesion as follows: (1) undistinguishable, (2) poorly distinguishable, (3) fairly distinguishable, (4) well visible and (5) excellently visible. The scores were compared with respect to matrix size, slice thickness and observer by using ANOVA tests. The mean and standard deviation of image quality were 3.09 (± .92) for the 0.625 mm x 512 matrix, 3.16 (± .84) for the 0.625 mm x 1024 matrix, 2.49 (± 1.02) for the 1.25 mm x 512 matrix, and 2.35 (± 1.02) for the 1.25 mm x 1024 matrix, respectively. The image quality on both matrices of the high-resolution chest CT scans with a 0.625 mm slice thickness was significantly better than that with the 1.25 mm slice thickness (p < 0.001). However, the image quality on the 1024-matrix high-resolution chest CT scans was not significantly different from that on the 512-matrix scans (p = 0.678). The interobserver variation between the two observers was not significant (p = 0.691). We think that 1024-matrix image reconstruction for high-resolution chest CT may not be clinically useful.

  3. Data fusion in metabolomics using coupled matrix and tensor factorizations

    DEFF Research Database (Denmark)

    Evrim, Acar Ataman; Bro, Rasmus; Smilde, Age Klaas

    2015-01-01

    of heterogeneous (i.e., in the form of higher order tensors and matrices) data sets with shared/unshared factors. In order to jointly analyze such heterogeneous data sets, we formulate data fusion as a coupled matrix and tensor factorization (CMTF) problem, which has already proved useful in many data mining...

  4. Not dead yet: the rise, fall and persistence of the BCG Matrix

    OpenAIRE

    Madsen, Dag Øivind

    2017-01-01

    The BCG Matrix was introduced almost 50 years ago, and is today considered one of the most iconic strategic planning techniques. Using management fashion theory as a theoretical lens, this paper examines the historical rise, fall and persistence of the BCG Matrix. The analysis highlights the role played by fashion-setting actors (e.g., consultants, business schools and business media) in the rise of the BCG Matrix. However, over time, portfolio planning models such as the BCG Matrix were atta...

  5. Ceramic matrix and resin matrix composites - A comparison

    Science.gov (United States)

    Hurwitz, Frances I.

    1987-01-01

    The underlying theory of continuous fiber reinforcement of ceramic matrix and resin matrix composites, their fabrication, microstructure, physical and mechanical properties are contrasted. The growing use of organometallic polymers as precursors to ceramic matrices is discussed as a means of providing low temperature processing capability without the fiber degradation encountered with more conventional ceramic processing techniques. Examples of ceramic matrix composites derived from particulate-filled, high char yield polymers and silsesquioxane precursors are provided.

  6. Ceramic matrix and resin matrix composites: A comparison

    Science.gov (United States)

    Hurwitz, Frances I.

    1987-01-01

    The underlying theory of continuous fiber reinforcement of ceramic matrix and resin matrix composites, their fabrication, microstructure, physical and mechanical properties are contrasted. The growing use of organometallic polymers as precursors to ceramic matrices is discussed as a means of providing low temperature processing capability without the fiber degradation encountered with more conventional ceramic processing techniques. Examples of ceramic matrix composites derived from particulate-filled, high char yield polymers and silsesquioxane precursors are provided.

  7. Validation suite for MCNP

    International Nuclear Information System (INIS)

    Mosteller, Russell D.

    2002-01-01

    Two validation suites, one for criticality and another for radiation shielding, have been defined and tested for the MCNP Monte Carlo code. All of the cases in the validation suites are based on experiments so that calculated and measured results can be compared in a meaningful way. The cases in the validation suites are described, and results from those cases are discussed. For several years, the distribution package for the MCNP Monte Carlo code has included an installation test suite to verify that MCNP has been installed correctly. However, the cases in that suite have been constructed primarily to test options within the code and to execute quickly. Consequently, they do not produce well-converged answers, and many of them are physically unrealistic. To remedy these deficiencies, sets of validation suites are being defined and tested for specific types of applications. All of the cases in the validation suites are based on benchmark experiments. Consequently, the results from the measurements are reliable and quantifiable, and calculated results can be compared with them in a meaningful way. Currently, validation suites exist for criticality and radiation-shielding applications.

  8. A matrix big bang

    International Nuclear Information System (INIS)

    Craps, Ben; Sethi, Savdeep; Verlinde, Erik

    2005-01-01

    The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type-IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matrix theory description in terms of a (1+1)-d supersymmetric Yang-Mills theory on a time-dependent world-sheet given by the Milne orbifold of (1+1)-d Minkowski space. Our model provides a framework in which the physics of the singularity appears to be under control

  9. A matrix big bang

    Energy Technology Data Exchange (ETDEWEB)

    Craps, Ben [Instituut voor Theoretische Fysica, Universiteit van Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands); Sethi, Savdeep [Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 (United States); Verlinde, Erik [Instituut voor Theoretische Fysica, Universiteit van Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands)

    2005-10-15

    The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type-IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matrix theory description in terms of a (1+1)-d supersymmetric Yang-Mills theory on a time-dependent world-sheet given by the Milne orbifold of (1+1)-d Minkowski space. Our model provides a framework in which the physics of the singularity appears to be under control.

  10. Matrix metalloproteinases outside vertebrates.

    Science.gov (United States)

    Marino-Puertas, Laura; Goulas, Theodoros; Gomis-Rüth, F Xavier

    2017-11-01

    The matrix metalloproteinase (MMP) family belongs to the metzincin clan of zinc-dependent metallopeptidases. Due to their enormous implications in physiology and disease, MMPs have mainly been studied in vertebrates. They are engaged in extracellular protein processing and degradation, and present extensive paralogy, with 23 forms in humans. One characteristic of MMPs is a ~165-residue catalytic domain (CD), which has been structurally studied for 14 MMPs from human, mouse, rat, pig and the oral-microbiome bacterium Tannerella forsythia. These studies revealed close overall coincidence and characteristic structural features, which distinguish MMPs from other metzincins and give rise to a sequence pattern for their identification. Here, we reviewed the literature available on MMPs outside vertebrates and performed database searches for potential MMP CDs in invertebrates, plants, fungi, viruses, protists, archaea and bacteria. These and previous results revealed that MMPs are widely present in several copies in Eumetazoa and higher plants (Tracheophyta), but have just token presence in eukaryotic algae. A few dozen sequences were found in Ascomycota (within fungi) and in double-stranded DNA viruses infecting invertebrates (within viruses). In contrast, a few hundred sequences were found in archaea and >1000 in bacteria, with several copies for some species. Most of the archaeal and bacterial phyla containing potential MMPs are present in human oral and gut microbiomes. Overall, MMP-like sequences are present across all kingdoms of life, but their asymmetric distribution contradicts the vertical descent model from a eubacterial or archaeal ancestor. This article is part of a Special Issue entitled: Matrix Metalloproteinases edited by Rafael Fridman. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. The SMOS Validation Campaign 2010 in the Upper Danube Catchment: A Data Set for Studies of Soil Moisture, Brightness Temperature, and Their Spatial Variability Over a Heterogeneous Land Surface

    DEFF Research Database (Denmark)

    T. dall' Amico, Johanna; Schlenz, Florian; Loew, Alexander

    2013-01-01

    resolutions from roughly 400 m to 2 km. The contemporaneous distributed ground measurements include surface soil moisture, a detailed land cover map, vegetation height, phenology, and biomass. Furthermore, several ground stations provide continuous measurements of soil moisture and soil temperature as well...... infrared and L-band passive microwave data were collected together with spatially distributed in situ measurements. Two airborne radiometers, EMIRAD and HUT-2D, were used during the campaigns providing two complementary sets of measurements at incidence angles from 0° to 40° and with ground...

  12. Matrix matters: differences of grand skink metapopulation parameters in native tussock grasslands and exotic pasture grasslands.

    Directory of Open Access Journals (Sweden)

    Konstanze Gebauer

    Full Text Available Modelling metapopulation dynamics is a potentially very powerful tool for conservation biologists. In recent years, scientists have broadened the range of variables incorporated into metapopulation modelling from using almost exclusively habitat patch size and isolation, to the inclusion of attributes of the matrix and habitat patch quality. We investigated the influence of habitat patch and matrix characteristics on the metapopulation parameters of a highly endangered lizard species, the New Zealand endemic grand skink (Oligosoma grande), taking into account incomplete detectability. The predictive ability of the developed metapopulation model was assessed through cross-validation of the data and with an independent data-set. Grand skinks occur on scattered rock-outcrops surrounded by indigenous tussock (bunch) and pasture grasslands, therefore implying a metapopulation structure. We found that the type of matrix surrounding the habitat patch was equally as important as the size of habitat patch for estimating occupancy, colonisation and extinction probabilities. Additionally, the type of matrix was more important than the physical distance between habitat patches for colonisation probabilities. Detection probability differed between habitat patches in the two matrix types and between habitat patches with different attributes such as habitat patch composition and abundance of vegetation on the outcrop. The developed metapopulation models can now be used for management decisions on area protection, monitoring, and the selection of translocation sites for the grand skink. Our study showed that it is important to incorporate not only habitat patch size and distance between habitat patches, but also those matrix type and habitat patch attributes which are vital in the ecology of the target species.

  13. Correction of failure in antenna array using matrix pencil technique

    International Nuclear Information System (INIS)

    Khan, SU; Rahim, MKA

    2017-01-01

    In this paper, a non-iterative technique is developed for the correction of a faulty antenna array based on the matrix pencil technique (MPT). The failure of a sensor in an antenna array can damage the radiation power pattern in terms of sidelobe levels and nulls. In the developed technique, the radiation pattern of the array is sampled to form a discrete power pattern information set. This information set can then be arranged in the form of a Hankel matrix (HM), on which the singular value decomposition (SVD) is executed. By removing non-principal values, we obtain an optimum lower-rank estimation of the HM. This lower-rank matrix corresponds to the corrected pattern. The proposed technique is then employed to recover the weight excitations and position allocations from the estimated matrix. Numerical simulations confirm the efficiency of the proposed technique, which is compared with the available techniques in terms of sidelobe levels and nulls. (paper)
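The low-rank Hankel step described in this record (sample the pattern, form a Hankel matrix, truncate the SVD to the principal singular values) can be sketched as follows. This is a generic illustration on made-up sample data, not the authors' full array-correction algorithm:

```python
import numpy as np

# Hypothetical sketch of the low-rank Hankel step: arrange samples in a
# Hankel matrix H[i, j] = samples[i + j], truncate the SVD to the
# principal singular values, then read back a cleaned sequence by
# averaging the anti-diagonals of the low-rank estimate.

def hankel_lowrank(samples, rank):
    n = len(samples)
    rows = n // 2
    H = np.array([[samples[i + j] for j in range(n - rows + 1)]
                  for i in range(rows)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(H_low.shape[0]):          # average anti-diagonals
        for j in range(H_low.shape[1]):
            out[i + j] += H_low[i, j]
            counts[i + j] += 1
    return out / counts

rng = np.random.default_rng(0)
t = np.arange(64)
clean = np.cos(0.3 * t)                      # one mode -> Hankel rank 2
noisy = clean + 0.05 * rng.standard_normal(64)
recovered = hankel_lowrank(noisy, rank=2)
```

Because a single sampled mode yields a Hankel matrix of rank 2, truncating to two singular values suppresses most of the added noise.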

  14. Phenomenology of the CKM matrix

    International Nuclear Information System (INIS)

    Nir, Y.

    1989-01-01

    The way in which an exact determination of the CKM matrix elements tests the Standard Model is demonstrated by a two-generation example. The determination of matrix elements from meson semileptonic decays is explained, with an emphasis on the respective reliability of quark-level and meson-level calculations. The assumptions involved in the use of loop processes are described. Finally, the state of the art of the knowledge of the CKM matrix is presented. 19 refs., 2 figs

  15. On matrix fractional differential equations

    OpenAIRE

    Adem Kılıçman; Wasan Ajeel Ahmood

    2017-01-01

    The aim of this article is to study the matrix fractional differential equations and to find the exact solution for system of matrix fractional differential equations in terms of Riemann–Liouville using Laplace transform method and convolution product to the Riemann–Liouville fractional of matrices. Also, we show the theorem of non-homogeneous matrix fractional partial differential equation with some illustrative examples to demonstrate the effectiveness of the new methodology. The main objec...

  16. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
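For the linear-system special case named in the abstract, the classical Gauss-Seidel iteration that MSM generalizes can be sketched as a matrix splitting. This is a generic illustration, not the paper's composite-function method:

```python
import numpy as np

# Classical Gauss-Seidel as a matrix splitting: write A = M - N with
# M = D + L (lower triangle including the diagonal) and N = M - A, then
# iterate x_{k+1} = M^{-1} (b + N x_k). This is the linear-system
# special case that the Matrix Splitting Method generalizes.

def gauss_seidel(A, b, iters=100):
    M = np.tril(A)                        # D + L
    N = M - A                             # -U
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        # M is triangular, so each step is a forward substitution
        x = np.linalg.solve(M, b + N @ x)
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])           # diagonally dominant => converges
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
```

Diagonal dominance guarantees convergence of the splitting iteration; after 100 sweeps the residual is at machine precision.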

  17. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.

  18. Diffraction measurements of residual stress in titanium matrix composites

    International Nuclear Information System (INIS)

    James, M.R.; Bourke, M.A.; Goldstone, J.A.; Lawson, A.C.

    1993-01-01

    Metal matrix composites develop residual strains after consolidation due to the thermal expansion mismatch between the reinforcement fiber and the matrix. X-ray and neutron diffraction measured values for the longitudinal residual stress in the matrix of four titanium MMCs are reported. For thick composites (> 6 plies) the surface stress measured by x-ray diffraction matches that determined by neutron diffraction and therefore represents the stress in the bulk region consisting of the fibers and matrix. For thin sheet composites, the surface values are lower than in the interior and increase as the outer rows of fibers are approached. While a rationale for the behavior in the thin sheet has yet to be developed, accounting for composite thickness is important when using x-ray measured values to validate analytic and finite element calculations of the residual stress state

  19. Neutrosophic Soft Matrix and its application to Decision Making

    Directory of Open Access Journals (Sweden)

    Tuhin Bera

    2017-12-01

    Full Text Available The motivation of this paper is to extend the concept of neutrosophic soft matrix (NSM) theory. Some basic definitions of classical matrix theory in the parlance of neutrosophic soft set theory have been presented with proper examples. Then, theoretical studies of some traditional operations of NSM have been developed. Finally, a decision-making theory has been proposed by developing an appropriate solution algorithm, namely, a score function algorithm, and it has been illustrated by suitable examples.

  20. Checklists for external validity

    DEFF Research Database (Denmark)

    Dyrvig, Anne-Kirstine; Kidholm, Kristian; Gerke, Oke

    2014-01-01

    to an implementation setting. In this paper, currently available checklists on external validity are identified, assessed and used as a basis for proposing a new improved instrument. METHOD: A systematic literature review was carried out in Pubmed, Embase and Cinahl on English-language papers without time restrictions....... The retrieved checklist items were assessed for (i) the methodology used in primary literature, justifying inclusion of each item; and (ii) the number of times each item appeared in checklists. RESULTS: Fifteen papers were identified, presenting a total of 21 checklists for external validity, yielding a total of 38 checklist items. Empirical support was considered the most valid methodology for item inclusion. Assessment of methodological justification showed that none of the items were supported empirically. Other kinds of literature justified the inclusion of 22 of the items, and 17 items were included...

  1. P-matrix in the quark compound bag model

    International Nuclear Information System (INIS)

    Kalashnikova, Yu.S.; Narodetskij, I.M.; Veselov, A.I.

    1983-01-01

    The meaning of the P-matrix analysis is discussed within the quark compound bag (QCB) model. The most general version of this model is considered, including an arbitrary coupling between quark and hadronic channels and an arbitrary smearing of the surface interaction region. The behaviour of the P-matrix poles as functions of the matching radius r0 is discussed. In conclusion, the parameters of an illustrative set of NN potentials obtained from the P-matrix fit to experimental data are presented.

  2. Nonnegative Matrix Factorizations Performing Object Detection and Localization

    Directory of Open Access Journals (Sweden)

    G. Casalino

    2012-01-01

    Full Text Available We study the problem of detecting and localizing objects in still, gray-scale images making use of the part-based representation provided by nonnegative matrix factorizations. Nonnegative matrix factorization represents an emerging example of subspace methods, which is able to extract interpretable parts from a set of template image objects and then to additively use them for describing individual objects. In this paper, we present a prototype system based on some nonnegative factorization algorithms, which differ in the additional properties added to the nonnegative representation of data, in order to investigate if any additional constraint produces better results in general object detection via nonnegative matrix factorizations.
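The part-based factorization V ≈ WH underlying this approach can be sketched with the standard Lee-Seung multiplicative updates. This is generic NMF on random data, not the specific constrained variants compared in the paper:

```python
import numpy as np

# Minimal sketch of Lee-Seung multiplicative updates for nonnegative
# matrix factorization V ≈ W H: W holds nonnegative basis "parts" and H
# the nonnegative coefficients that additively combine them.

def nmf(V, rank, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update basis parts
    return W, H

rng = np.random.default_rng(1)
V = rng.random((20, 30))                         # stand-in for image data
W, H = nmf(V, rank=5)
```

The multiplicative form preserves nonnegativity at every step, which is what makes the learned factors interpretable as additive parts.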

  3. An iterative method to invert the LTSn matrix

    Energy Technology Data Exchange (ETDEWEB)

    Cardona, A.V.; Vilhena, M.T. de [UFRGS, Porto Alegre (Brazil)

    1996-12-31

    Recently, Vilhena and Barichello proposed the LTSn method to solve, analytically, the discrete ordinates problem (Sn problem) in transport theory. The main feature of this method consists of applying the Laplace transform to the set of Sn equations and solving the resulting algebraic system for the transport flux. Barichello solved the linear system containing the parameter s by applying the definition of matrix inversion, exploiting the structure of the LTSn matrix. In this work, a new scheme to invert the LTSn matrix is proposed, decomposing it into blocks and recursively inverting these blocks.
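The generic building block behind "decompose into blocks and invert recursively" is the standard 2x2 block inverse via the Schur complement, sketched below. This is an illustration on a random matrix, not the LTSn-specific recursion of the paper:

```python
import numpy as np

# Standard 2x2 block inversion via the Schur complement: the inverse of
# [[A, B], [C, D]] is assembled from A^{-1} and S^{-1}, where
# S = D - C A^{-1} B. Applying this recursively to the sub-blocks gives
# a block-recursive inversion scheme.

def block_inverse(A, B, C, D):
    """Inverse of [[A, B], [C, D]], assuming A and the Schur
    complement S = D - C A^{-1} B are invertible."""
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B                      # Schur complement of A
    Sinv = np.linalg.inv(S)
    top_left = Ainv + Ainv @ B @ Sinv @ C @ Ainv
    top_right = -Ainv @ B @ Sinv
    bot_left = -Sinv @ C @ Ainv
    return np.block([[top_left, top_right],
                     [bot_left, Sinv]])

rng = np.random.default_rng(0)
M = rng.random((6, 6)) + 6 * np.eye(6)        # well-conditioned test matrix
A, B = M[:3, :3], M[:3, 3:]
C, D = M[3:, :3], M[3:, 3:]
Minv = block_inverse(A, B, C, D)
```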

  4. Matrix transformations and sequence spaces

    International Nuclear Information System (INIS)

    Nanda, S.

    1983-06-01

    In most cases the most general linear operator from one sequence space into another is actually given by an infinite matrix and therefore the theory of matrix transformations has always been of great interest in the study of sequence spaces. The study of general theory of matrix transformations was motivated by the special results in summability theory. This paper is a review article which gives almost all known results on matrix transformations. This also suggests a number of open problems for further study and will be very useful for research workers. (author)

  5. Multivariate Matrix-Exponential Distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis

    2010-01-01

    be written as linear combinations of the elements in the exponential of a matrix. For this reason we shall refer to multivariate distributions with rational Laplace transform as multivariate matrix-exponential distributions (MVME). The marginal distributions of an MVME are univariate matrix-exponential distributions. We prove a characterization that states that a distribution is an MVME distribution if and only if all non-negative, non-null linear combinations of the coordinates have a univariate matrix-exponential distribution. This theorem is analogous to a well-known characterization theorem
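The "linear combinations of the elements in the exponential of a matrix" characterization can be made concrete in the univariate phase-type case (a subclass of matrix-exponential distributions). The sketch below evaluates f(x) = alpha · exp(Sx) · s for an Erlang(2) sub-generator, where the closed form f(x) = λ²x e^{-λx} provides a check; the homemade `expm` is a plain scaling-and-squaring Taylor series to keep the sketch dependency-free:

```python
import numpy as np

# Illustrative univariate matrix-exponential (phase-type) density:
# f(x) = alpha @ expm(S x) @ s, with exit rate vector s = -S @ 1.

def expm(A, terms=30, squarings=20):
    """Naive scaling-and-squaring Taylor series for the matrix exponential."""
    B = A / 2.0**squarings
    X = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ B / k
        X = X + term
    for _ in range(squarings):
        X = X @ X
    return X

lam = 3.0
S = np.array([[-lam, lam],
              [0.0, -lam]])           # Erlang(2) sub-generator
alpha = np.array([1.0, 0.0])          # initial phase distribution
s = -S @ np.ones(2)                   # exit rate vector

def density(x):
    return alpha @ expm(S * x) @ s

val = density(0.5)                    # closed form: lam**2 * x * exp(-lam*x)
```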

  6. A matrix model for WZW

    International Nuclear Information System (INIS)

    Dorey, Nick; Tong, David; Turner, Carl

    2016-01-01

    We study a U(N) gauged matrix quantum mechanics which, in the large N limit, is closely related to the chiral WZW conformal field theory. This manifests itself in two ways. First, we construct the left-moving Kac-Moody algebra from matrix degrees of freedom. Secondly, we compute the partition function of the matrix model in terms of Schur and Kostka polynomials and show that, in the large N limit, it coincides with the partition function of the WZW model. This same matrix model was recently shown to describe non-Abelian quantum Hall states and the relationship to the WZW model can be understood in this framework.

  7. Time-oriented experimental design method to optimize hydrophilic matrix formulations with gelation kinetics and drug release profiles.

    Science.gov (United States)

    Shin, Sangmun; Choi, Du Hyung; Truong, Nguyen Khoa Viet; Kim, Nam Ah; Chu, Kyung Rok; Jeong, Seong Hoon

    2011-04-04

    A new experimental design methodology was developed by integrating the response surface methodology and the time series modeling. The major purposes were to identify significant factors in determining swelling and release rate from matrix tablets and their relative factor levels for optimizing the experimental responses. Properties of tablet swelling and drug release were assessed with ten factors and two default factors, a hydrophilic model drug (terazosin) and magnesium stearate, and compared with target values. The selected input control factors were arranged in a mixture simplex lattice design with 21 experimental runs. The obtained optimal settings for gelation were PEO, LH-11, Syloid, and Pharmacoat with weight ratios of 215.33 (88.50%), 5.68 (2.33%), 19.27 (7.92%), and 3.04 (1.25%), respectively. The optimal settings for drug release were PEO and citric acid with weight ratios of 191.99 (78.91%) and 51.32 (21.09%), respectively. Based on the results of matrix swelling and drug release, the optimal solutions, target values, and validation experiment results over time were similar and showed consistent patterns with very small biases. The experimental design methodology could be a very promising experimental design method to obtain maximum information with limited time and resources. It could also be very useful in formulation studies by providing a systematic and reliable screening method to characterize significant factors in the sustained release matrix tablet. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Generalized canonical analysis based on optimizing matrix correlations and a relation with IDIOSCAL

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Cléroux, R.; Ten Berge, Jos M.F.

    1994-01-01

    Carroll's method for generalized canonical analysis of two or more sets of variables is shown to optimize the sum of squared inner-product matrix correlations between a consensus matrix and matrices with canonical variates for each set of variables. In addition, the method that analogously optimizes

  9. System Matrix Analysis for Computed Tomography Imaging

    Science.gov (United States)

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images can be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482
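The role the system matrix plays in iterative reconstruction can be sketched with the classical Kaczmarz (ART) iteration. The matrix below is random for illustration only; in CT each row would come from a ray-tracing model such as the Siddon method discussed in the record:

```python
import numpy as np

# Toy sketch of how a system matrix A drives iterative reconstruction:
# Kaczmarz/ART sweeps over the rows of A, one row per projection ray,
# projecting the current image estimate onto each ray's hyperplane.

def kaczmarz(A, p, sweeps=100):
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x = x + (p[i] - a @ x) / (a @ a) * a   # hyperplane projection
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 16))     # 40 "rays", 16 "pixels" (illustrative)
x_true = rng.random(16)
p = A @ x_true                        # consistent, noiseless projections
x_rec = kaczmarz(A, p)
```

With consistent data and more rays than pixels, the sweeps converge to the true image; incomplete projection data, as discussed above, makes the system underdetermined and the choice of system matrix more delicate.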

  10. What is validation

    International Nuclear Information System (INIS)

    Clark, H.K.

    1985-01-01

    Criteria for establishing the validity of a computational method to be used in assessing nuclear criticality safety, as set forth in ''American Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors,'' ANSI/ANS-8.1-1983, are examined and discussed. Application of the criteria is illustrated by describing the procedures followed in deriving subcritical limits that have been incorporated in the Standard

  11. Clay matrix voltammetry

    International Nuclear Information System (INIS)

    Perdicakis, Michel

    2012-01-01

    Document available in extended abstract form only. In many countries, it is planned that the long life highly radioactive nuclear spent fuel will be stored in deep argillaceous rocks. The sites selected for this purpose are anoxic and satisfy several recommendations as mechanical stability, low permeability and low redox potential. Pyrite (FeS 2 ), iron(II) carbonate, iron(II) bearing clays and organic matter that are present in very small amounts (about 1% w:w) in soils play a major role in their reactivity and are considered today as responsible for the low redox potential values of these sites. In this communication, we describe an electrochemical technique derived from 'Salt matrix voltammetry' and allowing the almost in-situ voltammetric characterization of air-sensitive samples of soils after the only addition of the minimum humidity required for electrolytic conduction. Figure 1 shows the principle of the developed technique. It consists in the entrapment of the clay sample between a graphite working electrode and a silver counter/quasi-reference electrode. The sample was previously humidified by passing a water saturated inert gas through the electrochemical cell. The technique leads to well-defined voltammetric responses of the electro-active components of the clays. Figure 2 shows a typical voltammogram relative to a Callovo-Oxfordian argillite sample from Bure, the French place planned for the underground nuclear waste disposal. During the direct scan, one can clearly distinguish the anodic voltammetric signals for the oxidation of the iron (II) species associated with the clay and the oxidation of pyrite. The reverse scan displays a small cathodic signal for the reduction of iron (III) associated with the clay that demonstrates that the majority of the previously oxidized iron (II) species were transformed into iron (III) oxides reducible at lower potentials. When a second voltammetric cycle is performed, one can notice that the signal for iron (II

  12. Ceramic matrix composite article and process of fabricating a ceramic matrix composite article

    Science.gov (United States)

    Cairo, Ronald Robert; DiMascio, Paul Stephen; Parolini, Jason Robert

    2016-01-12

    A ceramic matrix composite article and a process of fabricating a ceramic matrix composite are disclosed. The ceramic matrix composite article includes a matrix distribution pattern formed by a manifold and ceramic matrix composite plies laid up on the matrix distribution pattern, includes the manifold, or a combination thereof. The manifold includes one or more matrix distribution channels operably connected to a delivery interface, the delivery interface configured for providing matrix material to one or more of the ceramic matrix composite plies. The process includes providing the manifold, forming the matrix distribution pattern by transporting the matrix material through the manifold, and contacting the ceramic matrix composite plies with the matrix material.

  13. Validation of Serious Games

    Directory of Open Access Journals (Sweden)

    Katinka van der Kooij

    2015-09-01

    Full Text Available The application of games for behavioral change has seen a surge in popularity but evidence on the efficacy of these games is contradictory. Anecdotal findings seem to confirm their motivational value whereas most quantitative findings from randomized controlled trials (RCT are negative or difficult to interpret. One cause for the contradictory evidence could be that the standard RCT validation methods are not sensitive to serious games’ effects. To be able to adapt validation methods to the properties of serious games we need a framework that can connect properties of serious game design to the factors that influence the quality of quantitative research outcomes. The Persuasive Game Design model [1] is particularly suitable for this aim as it encompasses the full circle from game design to behavioral change effects on the user. We therefore use this model to connect game design features, such as the gamification method and the intended transfer effect, to factors that determine the conclusion validity of an RCT. In this paper we will apply this model to develop guidelines for setting up validation methods for serious games. This way, we offer game designers and researchers handles on how to develop tailor-made validation methods.

  14. Quality data validation: Comprehensive approach to environmental data validation

    International Nuclear Information System (INIS)

    Matejka, L.A. Jr.

    1993-01-01

    Environmental data validation consists of an assessment of three major areas: analytical method validation; field procedures and documentation review; evaluation of the level of achievement of data quality objectives based in part on PARCC parameters analysis and expected applications of data. A program utilizing matrix association of required levels of validation effort and analytical levels versus applications of this environmental data was developed in conjunction with DOE-ID guidance documents to implement actions under the Federal Facilities Agreement and Consent Order in effect at the Idaho National Engineering Laboratory. This was an effort to bring consistent quality to the INEL-wide Environmental Restoration Program and database in an efficient and cost-effective manner. This program, documenting all phases of the review process, is described here

  15. Strategy BMT Al-Ittihad Using Matrix IE, Matrix SWOT 8K, Matrix SPACE and Matrix TWOS

    Directory of Open Access Journals (Sweden)

    Nofrizal Nofrizal

    2018-03-01

    Full Text Available This research aims to formulate and select the strategy of BMT Al-Ittihad Rumbai to face the changing business environment, both internal (organizational resources, finance, members) and external (competitors, the economy, politics and others). The research method used EFAS and IFAS analysis together with the IE Matrix, SWOT-8K Matrix, SPACE Matrix and TWOS Matrix. We hope this research can assist BMT Al-Ittihad in formulating and selecting strategies for its sustainability in the future. The sample in this research was selected by purposive sampling: the manager and leader of BMT Al-Ittihad Rumbai Pekanbaru. The results show that the position of BMT Al-Ittihad according to the IE Matrix, SWOT-8K Matrix and SPACE Matrix is in the growth, stabilization and aggressive positions, respectively. The strategies chosen after using the TWOS Matrix are market penetration, market development, vertical integration, horizontal integration, and stabilization (careful).

  16. How to Study a Matrix

    Science.gov (United States)

    Jairam, Dharmananda; Kiewra, Kenneth A.; Kauffman, Douglas F.; Zhao, Ruomeng

    2012-01-01

    This study investigated how best to study a matrix. Fifty-three participants studied a matrix topically (1 column at a time), categorically (1 row at a time), or in a unified way (all at once). Results revealed that categorical and unified study produced higher: (a) performance on relationship and fact tests, (b) study material satisfaction, and…

  17. Validation of the k0 standardization method in neutron activation analysis

    International Nuclear Information System (INIS)

    Kubesova, Marie

    2009-01-01

    The goal of this work was to validate the k 0 standardization method in neutron activation analysis for use by the Nuclear Physics Institute's NAA Laboratory. The precision and accuracy of the method were examined by using two types of reference materials: the one type comprised a set of synthetic materials and served to check the implementation of k 0 standardization, the other type consisted of matrix NIST SRMs comprising various different matrices. In general, a good agreement was obtained between the results of this work and the certified values, giving evidence of the accuracy of our results. In addition, the limits were evaluated for 61 elements

  18. Bulk metallic glass matrix composites

    International Nuclear Information System (INIS)

    Choi-Yim, H.; Johnson, W.L.

    1997-01-01

    Composites with a bulk metallic glass matrix were synthesized and characterized. This was made possible by the recent development of bulk metallic glasses that exhibit high resistance to crystallization in the undercooled liquid state. In this letter, experimental methods for processing metallic glass composites are introduced. Three different bulk metallic glass forming alloys were used as the matrix materials. Both ceramics and metals were introduced as reinforcement into the metallic glass. The metallic glass matrix remained amorphous after adding up to a 30 vol% fraction of particles or short wires. X-ray diffraction patterns of the composites show only peaks from the second phase particles superimposed on the broad diffuse maxima from the amorphous phase. Optical micrographs reveal uniformly distributed particles in the matrix. The glass transition of the amorphous matrix and the crystallization behavior of the composites were studied by calorimetric methods. copyright 1997 American Institute of Physics

  19. Machining of Metal Matrix Composites

    CERN Document Server

    2012-01-01

    Machining of Metal Matrix Composites provides the fundamentals and recent advances in the study of machining of metal matrix composites (MMCs). Each chapter is written by an international expert in this important field of research. Machining of Metal Matrix Composites gives the reader information on machining of MMCs with a special emphasis on aluminium matrix composites. Chapter 1 provides the mechanics and modelling of chip formation for traditional machining processes. Chapter 2 is dedicated to surface integrity when machining MMCs. Chapter 3 describes the machinability aspects of MMCs. Chapter 4 contains information on traditional machining processes and Chapter 5 is dedicated to the grinding of MMCs. Chapter 6 describes the dry cutting of MMCs with SiC particulate reinforcement. Finally, Chapter 7 is dedicated to computational methods and optimization in the machining of MMCs. Machining of Metal Matrix Composites can serve as a useful reference for academics, manufacturing and materials researchers, manu...

  20. Quantum mechanics in matrix form

    CERN Document Server

    Ludyk, Günter

    2018-01-01

    This book gives an introduction to quantum mechanics with the matrix method. Heisenberg's matrix mechanics is described in detail. The fundamental equations are derived by algebraic methods using matrix calculus. Only a brief description of Schrödinger's wave mechanics is given (treated exclusively in most books), to show its equivalence to Heisenberg's matrix method. In the first part the historical development of quantum theory by Planck, Bohr and Sommerfeld is sketched, followed by the ideas and methods of Heisenberg, Born and Jordan. Then Pauli's spin and exclusion principles are treated. Pauli's exclusion principle leads to the structure of atoms. Finally, Dirac's relativistic quantum mechanics is briefly presented. Matrices and matrix equations are today easy to handle when implementing numerical algorithms using standard software such as MAPLE and Mathematica.
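
    The matrix calculus the book describes can be illustrated with a small sketch (not taken from the book): truncated Heisenberg matrices for the harmonic oscillator, with the truncation size N and units ħ = m = ω = 1 chosen purely for illustration.

```python
import numpy as np

# Truncated Heisenberg matrix mechanics for the harmonic oscillator
# (illustrative sketch; N and the units hbar = m = omega = 1 are
# assumptions of this example, not values from the book).
N = 8
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)          # annihilation operator matrix
x = (a + a.T) / np.sqrt(2)            # position matrix
p = 1j * (a.T - a) / np.sqrt(2)       # momentum matrix

# Canonical commutator [x, p] = i*hbar holds exactly except in the
# last row/column, an artifact of truncating the infinite matrices.
comm = x @ p - p @ x
print(np.allclose(comm[:-1, :-1], 1j * np.eye(N - 1)))

# H = (p^2 + x^2)/2 is diagonal with the spectrum E_n = n + 1/2
H = (p @ p + x @ x) / 2
print(np.diag(H).real[:4])
```

    The last diagonal element of both the commutator and the Hamiltonian deviates from the infinite-dimensional result, which is the standard price of working with finite matrices.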

  1. Sparse matrix test collections

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1996-12-31

    This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will talk of plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes followed by a discussion from the floor.
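
    Collections such as Harwell-Boeing store matrices in compressed sparse form (Harwell-Boeing itself uses a compressed-column layout; the row-oriented cousin below is shown only as a minimal illustration of the idea, with an invented example matrix).

```python
import numpy as np

# Minimal CSR (compressed sparse row) storage with a matrix-vector
# product to exercise it.  Purely illustrative; real collections use
# standardized file formats on top of this kind of layout.
def to_csr(A):
    """Convert a dense matrix to (data, indices, indptr) CSR arrays."""
    data, indices, indptr = [], [], [0]
    for row in A:
        nz = np.nonzero(row)[0]
        data.extend(row[nz]); indices.extend(nz)
        indptr.append(len(indices))
    return np.array(data), np.array(indices), np.array(indptr)

def csr_matvec(data, indices, indptr, x):
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):                 # loop over rows
        lo, hi = indptr[i], indptr[i + 1]   # slice of row i's stored entries
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

A = np.array([[4., 0., 1.],
              [0., 3., 0.],
              [2., 0., 5.]])
x = np.array([1., 2., 3.])
print(csr_matvec(*to_csr(A), x))   # matches the dense product A @ x
```

    Sparse solvers in test collections are benchmarked against exactly this kind of storage, since only the nonzeros are kept.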

  2. Structured decomposition design of partial Mueller matrix polarimeters.

    Science.gov (United States)

    Alenin, Andrey S; Scott Tyo, J

    2015-07-01

    Partial Mueller matrix polarimeters (pMMPs) are active sensing instruments that probe a scattering process with a set of polarization states and analyze the scattered light with a second set of polarization states. Unlike conventional Mueller matrix polarimeters, pMMPs do not attempt to reconstruct the entire Mueller matrix. With proper choice of generator and analyzer states, a subset of the Mueller matrix space can be reconstructed with fewer measurements than that of the full Mueller matrix polarimeter. In this paper we consider the structure of the Mueller matrix and our ability to probe it using a reduced number of measurements. We develop analysis tools that allow us to relate the particular choice of generator and analyzer polarization states to the portion of Mueller matrix space that the instrument measures, as well as develop an optimization method that is based on balancing the signal-to-noise ratio of the resulting instrument with the ability of that instrument to accurately measure a particular set of desired polarization components with as few measurements as possible. In the process, we identify 10 classes of pMMP systems, for which the space coverage is immediately known. We demonstrate the theory with a numerical example that designs partial polarimeters for the task of monitoring the damage state of a material as presented earlier by Hoover and Tyo [Appl. Opt. 46, 8364 (2007), doi:10.1364/AO.46.008364]. We show that we can reduce the polarimeter to making eight measurements while still covering the Mueller matrix subspace spanned by the objects.
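
    The underlying measurement model can be sketched as follows: each detected intensity is i_k = a_kᵀ M g_k for generator and analyzer Stokes vectors g_k, a_k, so flattening M row-major gives i_k = kron(a_k, g_k)·vec(M), and 16 independent (a, g) pairs recover the full matrix. The specific states below are invented for illustration and are not those of the paper; a pMMP would use fewer rows of W and a pseudo-inverse to recover only a subspace.

```python
import numpy as np

# Full-Mueller reconstruction from 16 generator/analyzer pairs
# (illustrative sketch; the Stokes states below are assumptions).
rng = np.random.default_rng(0)
S = np.array([[1.,  1., 0., 0.],
              [1.,  0., 1., 0.],
              [1.,  0., 0., 1.],
              [1., -1., 0., 0.]])      # 4 linearly independent Stokes vectors

M_true = rng.normal(size=(4, 4))      # arbitrary Mueller-like matrix
W, i_meas = [], []
for a in S:
    for g in S:
        W.append(np.kron(a, g))       # one row of the measurement matrix
        i_meas.append(a @ M_true @ g) # detected intensity for this pair
M_rec = np.linalg.solve(np.array(W), np.array(i_meas)).reshape(4, 4)
print(np.allclose(M_rec, M_true))
```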

  3. Pseudo-Hermitian random matrix theory

    International Nuclear Information System (INIS)

    Srivastava, S.C.L.; Jain, S.R.

    2013-01-01

    Complex extension of quantum mechanics and the discovery of pseudo-unitarily invariant random matrix theory has set the stage for a number of applications of these concepts in physics. We briefly review the basic ideas and present applications to problems in statistical mechanics where new results have become possible. We have found it important to mention the precise directions where advances could be made if further results become available. (copyright 2013 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  4. A random matrix approach to credit risk.

    Science.gov (United States)

    Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
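
    The qualitative claim can be checked with a small Monte Carlo sketch. Here a one-factor model with a random factor loading per scenario stands in for the random-correlation ensemble (an illustrative simplification, not the paper's construction): even though the loading averages to zero, the loss distribution spreads out far more than for truly independent obligors.

```python
import numpy as np

# Fluctuating correlations limit diversification: compare the spread of
# the portfolio loss fraction with and without a random common factor.
# All parameters (K, threshold, loading range) are illustrative.
rng = np.random.default_rng(1)
K, n_scen, thr = 50, 4000, -2.0    # obligors, scenarios, default threshold

def loss_fractions(random_corr):
    losses = np.empty(n_scen)
    for s in range(n_scen):
        if random_corr:
            rho = rng.uniform(-0.9, 0.9)    # random loading, zero on average
            Z = rng.normal()                # common market factor
            X = rho * Z + np.sqrt(1 - rho**2) * rng.normal(size=K)
        else:
            X = rng.normal(size=K)          # independent asset returns
        losses[s] = np.mean(X < thr)        # fraction of obligors defaulting
    return losses

std_corr = loss_fractions(True).std()
std_indep = loss_fractions(False).std()
print(std_corr > std_indep)   # diversification fails to shrink the spread
```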

  5. A random matrix approach to credit risk.

    Directory of Open Access Journals (Sweden)

    Michael C Münnix

    Full Text Available We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.

  6. Matrix fluid chemistry experiment. Final report June 1998 - March 2003

    International Nuclear Information System (INIS)

    Smellie, John A.T.; Waber, H. Niklaus; Frape, Shaun K.

    2003-06-01

    The Matrix Fluid Chemistry Experiment set out to determine the composition and evolution of matrix pore fluids/waters in low permeable rock located at repository depths in the Aespoe Hard Rock Laboratory (HRL). Matrix pore fluids/waters can be highly saline in composition and, if accessible, may influence the near-field groundwater chemistry of a repository system. Characterising pore fluids/waters involved in-situ borehole sampling and analysis integrated with laboratory studies and experiments on rock matrix drill core material. Relating the rate of in-situ pore water accumulation during sampling to the measured rock porosity indicated a hydraulic conductivity of 10⁻¹⁴–10⁻¹³ m/s for the rock matrix. This was in accordance with earlier estimated predictions. The sampled matrix pore water, brackish in type, mostly represents older palaeo-groundwater mixtures preserved in the rock matrix and dating back to at least the last glaciation. A component of matrix pore 'fluid' is also present. One borehole section suggests a younger groundwater component which has accessed the rock matrix during the experiment. There is little evidence that the salinity of the matrix pore waters has been influenced significantly by fluid inclusion populations hosted by quartz. Crush/leach, cation exchange, pore water diffusion and pore water displacement laboratory experiments were carried out to compare extracted/calculated matrix pore fluids/waters with in-situ sampling. Of these the pore water diffusion experiments appear to be the most promising approach and a recommended site characterisation protocol has been formulated. The main conclusions from the Matrix Fluid Chemistry Experiment are: Groundwater movement within the bedrock hosting the experimental site has been enhanced by increased hydraulic gradients generated by the presence of the tunnel, and to a much lesser extent by the borehole itself. Over experimental timescales (∼4 years) solute transport through the rock matrix

  7. Matrix fluid chemistry experiment. Final report June 1998 - March 2003

    Energy Technology Data Exchange (ETDEWEB)

    Smellie, John A.T. [Conterra AB, Luleaa (Sweden); Waber, H. Niklaus [Univ. of Bern (Switzerland). Inst. of Geology; Frape, Shaun K. [Univ. of Waterloo (Canada). Dept. of Earth Sciences

    2003-06-01

    The Matrix Fluid Chemistry Experiment set out to determine the composition and evolution of matrix pore fluids/waters in low permeable rock located at repository depths in the Aespoe Hard Rock Laboratory (HRL). Matrix pore fluids/waters can be highly saline in composition and, if accessible, may influence the near-field groundwater chemistry of a repository system. Characterising pore fluids/waters involved in-situ borehole sampling and analysis integrated with laboratory studies and experiments on rock matrix drill core material. Relating the rate of in-situ pore water accumulation during sampling to the measured rock porosity indicated a hydraulic conductivity of 10⁻¹⁴–10⁻¹³ m/s for the rock matrix. This was in accordance with earlier estimated predictions. The sampled matrix pore water, brackish in type, mostly represents older palaeo-groundwater mixtures preserved in the rock matrix and dating back to at least the last glaciation. A component of matrix pore 'fluid' is also present. One borehole section suggests a younger groundwater component which has accessed the rock matrix during the experiment. There is little evidence that the salinity of the matrix pore waters has been influenced significantly by fluid inclusion populations hosted by quartz. Crush/leach, cation exchange, pore water diffusion and pore water displacement laboratory experiments were carried out to compare extracted/calculated matrix pore fluids/waters with in-situ sampling. Of these the pore water diffusion experiments appear to be the most promising approach and a recommended site characterisation protocol has been formulated. The main conclusions from the Matrix Fluid Chemistry Experiment are: Groundwater movement within the bedrock hosting the experimental site has been enhanced by increased hydraulic gradients generated by the presence of the tunnel, and to a much lesser extent by the borehole itself. Over experimental timescales (∼4 years) solute transport

  8. ABCD Matrix Method a Case Study

    CERN Document Server

    Seidov, Zakir F; Yahalom, Asher

    2004-01-01

    In the Israeli Electrostatic Accelerator FEL, the distance between the accelerator's end and the wiggler's entrance is about 2.1 m, and a 1.4 MeV electron beam is transported through this space using four similar quadrupoles (FODO-channel). The transfer matrix method (ABCD matrix method) was used for simulating the beam transport; a set of programs was written in several programming languages (MATHEMATICA, MATLAB, MATCAD, MAPLE) and reasonable agreement is demonstrated between experimental results and simulations. Comparison of the ABCD matrix method with direct "numerical experiments" using the EGUN, ELOP, and GPT programs, with and without taking into account space-charge effects, showed the agreement to be good enough as well. Also the inverse problem of finding the emittance of the electron beam at the S1 screen position (before the FODO-channel), by using the spot image at the S2 screen position (after the FODO-channel) as a function of quad currents, is considered. Spot and beam at both screens are described as tilted eel...
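
    The ABCD method amounts to composing 2×2 matrices that propagate the transverse phase-space vector (x, x'). A minimal sketch of a FODO cell follows; the focal length and drift length are invented values, not the EA-FEL parameters.

```python
import numpy as np

# ABCD (transfer matrix) beam transport through a thin-lens FODO cell.
# f and L below are illustrative, not the accelerator's actual values.
def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):                  # focusing for f > 0, defocusing for f < 0
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Cell: focus - drift - defocus - drift (rightmost matrix acts first)
f, L = 0.5, 0.4
M = drift(L) @ thin_quad(-f) @ drift(L) @ thin_quad(f)

print(np.isclose(np.linalg.det(M), 1.0))  # symplectic transport: det = 1
print(abs(np.trace(M)) < 2.0)             # periodic stability: |Tr M| < 2
```

    The stability criterion |Tr M| < 2 is what makes a FODO channel confine the beam over repeated cells.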

  9. The matrix of inspiration

    Science.gov (United States)

    Oehlmann, Dietmar; Ohlmann, Odile M.; Danzebrink, Hans U.

    2005-04-01

    perform this exchange, as a matrix, understood as a source of new ideas.

  10. Measuring methods of matrix diffusion

    International Nuclear Information System (INIS)

    Muurinen, A.; Valkiainen, M.

    1988-03-01

    In Finland the spent nuclear fuel is planned to be disposed of at large depths in crystalline bedrock. The radionuclides which are dissolved in the groundwater may be able to diffuse into the micropores of the porous rock matrix and thus be withdrawn from the flowing water in the fractures. This phenomenon is called matrix diffusion. A review over matrix diffusion is presented in the study. The main interest is directed to the diffusion of non-sorbing species. The review covers diffusion experiments and measurements of porosity, pore size, specific surface area and water permeability

  11. Maximal quantum Fisher information matrix

    International Nuclear Information System (INIS)

    Chen, Yu; Yuan, Haidong

    2017-01-01

    We study the existence of the maximal quantum Fisher information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit. (paper)
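
    For pure states the quantum Fisher information matrix has the closed form F_ij = 4 Re(⟨∂_i ψ|∂_j ψ⟩ − ⟨∂_i ψ|ψ⟩⟨ψ|∂_j ψ⟩), which can be checked numerically. The qubit parametrization below is a standard textbook example chosen for illustration, not a case from the paper.

```python
import numpy as np

# Pure-state QFI matrix for |psi(theta, phi)> = cos(t/2)|0> + e^{i phi} sin(t/2)|1>,
# whose QFI matrix is known analytically to be diag(1, sin^2 theta).
def psi(theta, phi):
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def qfi_matrix(theta, phi, h=1e-6):
    # central-difference derivatives with respect to (theta, phi)
    d = [(psi(theta + h, phi) - psi(theta - h, phi)) / (2 * h),
         (psi(theta, phi + h) - psi(theta, phi - h)) / (2 * h)]
    s = psi(theta, phi)
    F = np.empty((2, 2))
    for i in range(2):
        for j in range(2):
            F[i, j] = 4 * np.real(np.vdot(d[i], d[j])
                                  - np.vdot(d[i], s) * np.vdot(s, d[j]))
    return F

theta, phi = 0.7, 1.2
F = qfi_matrix(theta, phi)
print(np.allclose(F, np.diag([1.0, np.sin(theta) ** 2]), atol=1e-6))
```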

  12. Recording and Validation of Audiovisual Expressions by Faces and Voices

    Directory of Open Access Journals (Sweden)

    Sachiko Takagi

    2011-10-01

    Full Text Available This study aims to further examine the cross-cultural differences in multisensory emotion perception between Western and East Asian people. In this study, we recorded audiovisual stimulus videos of Japanese and Dutch actors saying a neutral phrase with one of the basic emotions. Then we conducted a validation experiment on the stimuli. In the first part (facial expression), participants watched a silent video of the actors and judged which emotion the actor was expressing by choosing among 6 options (i.e., happiness, anger, disgust, sadness, surprise, and fear). In the second part (vocal expression), they listened to the audio track of the same videos without the video images while the task remained the same. We analyzed their categorization responses based on accuracy and a confusion matrix and created a controlled audiovisual stimulus set.
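
    The accuracy and confusion-matrix analysis described above can be sketched in a few lines. The responses below are made up for illustration; rows are intended emotions, columns the participants' choices.

```python
import numpy as np

# Toy version of the validation analysis: accuracy plus a confusion
# matrix over the six basic-emotion categories (responses are invented).
emotions = ["happiness", "anger", "disgust", "sadness", "surprise", "fear"]
true_labels = np.array([0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5])
responses   = np.array([0, 1, 2, 3, 4, 5, 0, 2, 2, 3, 5, 5])  # two confusions

n = len(emotions)
conf = np.zeros((n, n), dtype=int)
for t, r in zip(true_labels, responses):
    conf[t, r] += 1                        # rows: intended, columns: chosen

accuracy = np.trace(conf) / conf.sum()     # correct judgments on the diagonal
print(accuracy)                            # 10 of 12 correct
print(conf[1])                             # anger once confused with disgust
```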

  13. Neutrino mass matrix: Inverted hierarchy and CP violation

    International Nuclear Information System (INIS)

    Frigerio, Michele; Smirnov, Alexei Yu.

    2003-01-01

    We reconstruct the neutrino mass matrix in the flavor basis, in the case of an inverted mass hierarchy (ordering), using all available experimental data on neutrino masses and oscillations. We analyze the dependence of the matrix elements m_αβ on the CP-violating Dirac phase δ and the Majorana phases ρ and σ, for different values of the absolute mass scale. We find that the present data admit various structures of the mass matrix: (i) hierarchical structures with a set of small (zero) elements; (ii) structures with equalities among various groups of elements: e-row and/or μτ-block elements, diagonal and/or off-diagonal elements; (iii) 'democratic' structure. We find the values of phases for which these structures are realized. The mass matrix elements can anticorrelate with flavor: inverted partial or complete flavor alignment is possible. For various structures of the mass matrix we identify the possible underlying symmetry. We find that the mass matrix can be reconstructed completely only in particular cases, provided that the absolute scale of the mass is measured. Generally, the freedom related to the Majorana phase σ will not be removed, thus admitting various types of mass matrix.
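
    The reconstruction the abstract refers to is m = U* diag(m₁, m₂, m₃) U† in the flavor basis, with U the mixing matrix carrying the Dirac and Majorana phases. The sketch below uses illustrative numerical values and one common phase convention (U multiplied on the right by diag(e^{iρ}, e^{iσ}, 1)); neither is a fit to data.

```python
import numpy as np

# Flavor-basis Majorana mass matrix m = U* diag(m1, m2, m3) U^dagger.
# Angles, phases and masses below are illustrative assumptions.
th12, th23, th13 = 0.59, 0.84, 0.15    # mixing angles (rad)
delta, rho, sigma = 1.2, 0.3, 0.7      # Dirac and Majorana phases (rad)
m1, m2, m3 = 0.05, 0.0507, 0.01        # masses (eV), inverted-ordering-like

s12, c12 = np.sin(th12), np.cos(th12)
s23, c23 = np.sin(th23), np.cos(th23)
s13, c13 = np.sin(th13), np.cos(th13)
ed = np.exp(1j * delta)
U = np.array([
    [c12 * c13,                        s12 * c13,                       s13 / ed],
    [-s12 * c23 - c12 * s23 * s13 * ed, c12 * c23 - s12 * s23 * s13 * ed, s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * ed, -c12 * s23 - s12 * c23 * s13 * ed, c23 * c13],
]) @ np.diag([np.exp(1j * rho), np.exp(1j * sigma), 1.0])

m = U.conj() @ np.diag([m1, m2, m3]) @ U.conj().T   # flavor-basis matrix
print(np.allclose(m, m.T))                          # Majorana matrix is symmetric
print(np.sort(np.linalg.svd(m, compute_uv=False)))  # singular values = masses
```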

  14. Matrix of transmission in structural dynamics

    International Nuclear Information System (INIS)

    Mukherjee, S.

    1975-01-01

    Within the last few years numerous papers have been published on the subject of matrix methods in elasto-mechanics. The 'Matrix of Transmission' is one of the methods in this field which has gained considerable attention in recent years. The basic philosophy adopted in this method is to break a complicated system up into component parts with simple elastic and dynamic properties which can be readily expressed in matrix form. These component matrices are considered as building blocks, which are fitted together according to a set of predetermined rules which then provide the static and dynamic properties of the entire system. A common type of system occurring in engineering practice consists of a number of elements linked together end to end in the form of a chain. The 'Transfer Matrix' is ideally suited for such a system, because only successive multiplication is necessary to connect these elements together. The number of degrees of freedom and intermediate conditions present no difficulty. Although the 'Transfer Matrix' method is suitable for the treatment of branched and coupled systems, its application to systems which do not have a predominant chain topology is not effective. Apart from the requirement that the system be linearly elastic, no other restrictions are made. In this paper, it is intended to give a general outline and theoretical formulation of the 'Transfer Matrix' and then its application to actual problems in structural dynamics related to seismic analysis. The natural frequencies of a freely vibrating elastic system can be found by applying the proper end conditions. The end conditions require the frequency determinant to vanish. By using a suitable numerical method, the natural frequencies and mode shapes are determined by making a frequency sweep within the range of interest. Results of an analysis of a typical nuclear building by this method show very close agreement with the results obtained by using the ASKA and SAP IV programs. Therefore
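
    The successive-multiplication idea and the frequency sweep can be sketched for the simplest chain-topology system, a wall-spring-mass chain with a free end. Values m = k = 1 and three masses are illustrative assumptions; the natural frequencies are the zeros of one element of the overall transfer matrix.

```python
import numpy as np

# Transfer-matrix sketch for a chain: wall - spring - mass - ... - free end.
# State vector: (displacement, internal force).  m = k = 1 is illustrative.
def chain_matrix(omega, n=3, m=1.0, k=1.0):
    spring = np.array([[1.0, 1.0 / k], [0.0, 1.0]])   # field matrix
    T = np.eye(2)
    for _ in range(n):
        mass = np.array([[1.0, 0.0], [-m * omega**2, 1.0]])  # point matrix
        T = mass @ spring @ T
    return T

# End conditions: displacement = 0 at the wall, force = 0 at the free
# end  =>  T[1, 1](omega) = 0.  Sweep, locate sign changes, bisect.
def natural_frequencies(n=3):
    f = lambda w: chain_matrix(w, n)[1, 1]
    grid = np.linspace(1e-3, 2.5, 2000)
    roots = []
    for a, b in zip(grid[:-1], grid[1:]):
        if f(a) * f(b) < 0:
            for _ in range(60):            # bisection refinement
                c = 0.5 * (a + b)
                a, b = (a, c) if f(a) * f(c) < 0 else (c, b)
            roots.append(0.5 * (a + b))
    return np.array(roots)

# Cross-check against the stiffness/mass eigenproblem K x = omega^2 x
K = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
print(np.allclose(natural_frequencies(), np.sqrt(np.linalg.eigvalsh(K)),
                  atol=1e-6))
```

    The agreement with the direct eigenproblem mirrors the paper's comparison against assembled-matrix codes such as ASKA and SAP IV.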

  15. The Performance Analysis Based on SAR Sample Covariance Matrix

    Directory of Open Access Journals (Sweden)

    Esra Erten

    2012-03-01

    Full Text Available Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in different applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of SAR images is presented in simplified form for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
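
    The finite-sample effect the paper analyzes is easy to reproduce: with n independent zero-mean circular complex Gaussian "looks", the sample covariance matrix is complex Wishart, and its largest eigenvalue systematically overshoots that of the true covariance. The parameters below are illustrative.

```python
import numpy as np

# Sample covariance of circular complex Gaussian data (complex Wishart):
# the maximum sample eigenvalue is biased above the true maximum
# eigenvalue for finite n.  p, n, trials are illustrative choices.
rng = np.random.default_rng(0)
p, n, trials = 4, 16, 2000        # channels, looks, Monte Carlo trials

max_eigs = np.empty(trials)
for t in range(trials):
    # unit-variance circular complex Gaussian samples, true covariance = I
    Z = (rng.normal(size=(p, n)) + 1j * rng.normal(size=(p, n))) / np.sqrt(2)
    C_hat = Z @ Z.conj().T / n    # maximum-likelihood sample covariance
    max_eigs[t] = np.linalg.eigvalsh(C_hat)[-1]
print(max_eigs.mean())            # noticeably above the true value 1
```

    Quantifying exactly this kind of bias is what makes analytical expressions for the maximum-eigenvalue distribution useful in detection problems.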

  16. Teaching the Extracellular Matrix and Introducing Online Databases within a Multidisciplinary Course with i-Cell-MATRIX: A Student-Centered Approach

    Science.gov (United States)

    Sousa, Joao Carlos; Costa, Manuel Joao; Palha, Joana Almeida

    2010-01-01

    The biochemistry and molecular biology of the extracellular matrix (ECM) is difficult to convey to students in a classroom setting in ways that capture their interest. The understanding of the matrix's roles in physiological and pathological conditions will presumably be hampered by insufficient knowledge of its molecular structure.…

  17. A test matrix sequencer for research test facility automation

    Science.gov (United States)

    Mccartney, Timothy P.; Emery, Edward F.

    1990-01-01

    The hardware and software configuration of a Test Matrix Sequencer, a general purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor-controlled system which is operated from a personal computer. The software program, which is the main element of the overall system, is interactive and menu driven with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.

  18. Matrix product operators, matrix product states, and ab initio density matrix renormalization group algorithms

    Science.gov (United States)

    Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.

    2016-07-01

    Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.
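
    The matrix product state language can be made concrete with a small sketch: any n-site state vector factors exactly into a chain of site tensors by successive SVDs when no bond dimension is truncated. This is generic MPS bookkeeping, not the ab initio DMRG code itself.

```python
import numpy as np

# Exact MPS factorization of a state vector by successive SVDs,
# followed by contraction back to the full vector (illustrative).
def to_mps(state, n_sites, d=2):
    tensors, rest, bond = [], state.reshape(1, -1), 1
    for _ in range(n_sites - 1):
        rest = rest.reshape(bond * d, -1)
        U, s, Vh = np.linalg.svd(rest, full_matrices=False)
        tensors.append(U.reshape(bond, d, -1))  # site tensor (left, phys, right)
        rest = s[:, None] * Vh                  # carry the remainder rightward
        bond = U.shape[1]
    tensors.append(rest.reshape(bond, d, 1))
    return tensors

def contract(tensors):
    psi = tensors[0]
    for A in tensors[1:]:
        psi = np.tensordot(psi, A, axes=([-1], [0]))
    return psi.reshape(-1)

rng = np.random.default_rng(0)
psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)
mps = to_mps(psi, n_sites=4)
print(np.allclose(contract(mps), psi))   # exact reconstruction
```

    DMRG works with exactly these tensors, sweeping over sites; truncating the singular values at each bond is what compresses the state.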

  19. Joint-2D-SL0 Algorithm for Joint Sparse Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2017-01-01

    Full Text Available Sparse matrix reconstruction has wide applications such as DOA estimation and STAP. However, its performance is usually restricted by the grid mismatch problem. In this paper, we revise the sparse matrix reconstruction model and propose a joint sparse matrix reconstruction model based on a first-order Taylor expansion, which can overcome the grid mismatch problem. Then, we put forward the Joint-2D-SL0 algorithm, which can solve the joint sparse matrix reconstruction problem efficiently. Compared with the Kronecker compressive sensing method, our proposed method has a higher computational efficiency and acceptable reconstruction accuracy. Finally, simulation results validate the superiority of the proposed method.
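
    The smoothed-l0 (SL0) idea underlying the algorithm can be sketched in its basic one-dimensional form (not the paper's Joint-2D variant): approximate the l0 norm by a Gaussian surrogate, take descent steps on it, project back onto the affine set {x : Ax = b}, and anneal the smoothing width. Problem sizes and step parameters below are illustrative.

```python
import numpy as np

# Basic SL0 sparse recovery: descend on a Gaussian surrogate of the
# l0 norm while staying on Ax = b, with decreasing sigma.
rng = np.random.default_rng(0)
m, n = 20, 40
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[[3, 17, 29]] = [1.5, -2.0, 1.0]   # 3-sparse ground truth
b = A @ x_true

A_pinv = np.linalg.pinv(A)
x = A_pinv @ b                            # minimum-l2 starting point
sigma = 2.0 * np.max(np.abs(x))
while sigma > 1e-4:
    for _ in range(5):                    # steepest descent on the surrogate
        x = x - 2.0 * x * np.exp(-x**2 / (2 * sigma**2))
        x = x - A_pinv @ (A @ x - b)      # project back onto Ax = b
    sigma *= 0.6
print(np.argsort(np.abs(x))[-3:])         # indices of the 3 largest entries
```

    With far fewer measurements than unknowns, the annealed surrogate steers the feasible solution toward the sparse one.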

  20. On matrix fractional differential equations

    Directory of Open Access Journals (Sweden)

    Adem Kılıçman

    2017-01-01

    Full Text Available The aim of this article is to study matrix fractional differential equations and to find the exact solution for systems of matrix fractional differential equations in the Riemann–Liouville sense, using the Laplace transform method and the convolution product applied to the Riemann–Liouville fractional derivative of matrices. Also, we show a theorem on non-homogeneous matrix fractional partial differential equations, with some illustrative examples to demonstrate the effectiveness of the new methodology. The main objective of this article is to discuss the Laplace transform method based on operational matrices of fractional derivatives for solving several kinds of linear fractional differential equations. Moreover, we present the operational matrices of fractional derivatives with the Laplace transform in many applications across engineering systems such as control systems. We present an analytical technique for solving multi-term fractional-order differential equations. In other words, we propose an efficient algorithm for solving fractional matrix equations.
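
    An operational matrix of a fractional derivative can be sketched concretely with Grünwald–Letnikov weights, which place the Riemann–Liouville derivative of order α in a lower-triangular matrix acting on samples of f. This generic construction (not the article's Laplace-transform machinery) is checked against the closed form D^α t² = Γ(3)/Γ(3−α) t^(2−α).

```python
import numpy as np
from math import gamma

# Grunwald-Letnikov operational matrix for the fractional derivative of
# order alpha, tested on f(t) = t^2 (alpha, h chosen for illustration).
alpha, h, T = 0.5, 1e-3, 1.0
N = int(T / h) + 1
t = np.linspace(0.0, T, N)

w = np.empty(N)                       # GL weights w_j = (-1)^j C(alpha, j)
w[0] = 1.0
for j in range(1, N):
    w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)

D = np.zeros((N, N))                  # lower-triangular operational matrix
for i in range(N):
    D[i, : i + 1] = w[i::-1]          # row i: [w_i, ..., w_1, w_0]
D /= h**alpha

f = t**2
num = D @ f                           # numerical fractional derivative
exact = gamma(3) / gamma(3 - alpha) * t**(2 - alpha)
print(abs(num[-1] - exact[-1]) / exact[-1])   # first-order accurate in h
```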