WorldWideScience

Sample records for set validation matrix

  1. S.E.T., CSNI Separate Effects Test Facility Validation Matrix

    International Nuclear Information System (INIS)

    1997-01-01

    1 - Description of test facility: The SET matrix of experiments is suitable for the developmental assessment of thermal-hydraulic transient system computer codes, by selecting individual tests from selected facilities relevant to each phenomenon. Test facilities differ from one another in geometrical dimensions, geometrical configuration and operating capabilities or conditions. Correlations between SET facilities and phenomena were established on the basis of suitability for model validation (meaning that a facility is designed in such a way as to simulate the phenomena assumed to occur in a plant and is sufficiently instrumented); limited suitability for model validation (meaning that a facility is designed in such a way as to simulate the phenomena assumed to occur in a plant but has problems associated with imperfect scaling, different test fluids or insufficient instrumentation); and unsuitability for model validation. 2 - Description of test: Whereas integral experiments are usually designed to follow the behaviour of a reactor system in various off-normal or accident transients, separate effects tests focus on the behaviour of a single component, or on the characteristics of one thermal-hydraulic phenomenon. The construction of a separate effects test matrix is an attempt to collect together the best sets of openly available test data for code validation, assessment and improvement, from the wide range of experiments that have been carried out world-wide in the field of thermal hydraulics. In all, 2094 tests are included in the SET matrix.

  2. Evaluation of the separate effects tests (SET) validation matrix

    International Nuclear Information System (INIS)

    1996-11-01

    This work is the result of a one-year extended mandate given by the CSNI at the request of PWG 2 and the Task Group on Thermal Hydraulic System Behaviour (TG THSB) in late 1994. The aim was to evaluate the SET validation matrix in order to define the real needs for further experimental work. The statistical evaluation tables of the SET matrix provide an overview of the data base, including the parameter ranges covered for each phenomenon and selected parameters, together with questions posed to obtain answers concerning the need for additional experimental data with regard to the objective of nuclear power plant safety. A global view of the data base is first presented, focussing on areas lacking in data and on hot topics. A new systematic evaluation has been carried out based on the authors' technical judgments, yielding evaluation tables in which global and indicative information is included. Four main parameters have been chosen as the most important and relevant: a state parameter given by the operating pressure of the tests; a flow parameter expressed as mass flux, mass flow rate or volumetric flow rate; a geometrical parameter provided through a typical dimension expressed by a diameter, an equivalent diameter (hydraulic or heated) or a cross-sectional area of the test sections; and an energy or heat transfer parameter given as the fluid temperature, the heat flux or the heat transfer surface temperature of the tests.

  3. Overview of CSNI separate effects tests validation matrix

    Energy Technology Data Exchange (ETDEWEB)

    Aksan, N. [Paul Scherrer Institute, Villigen (Switzerland)]; Auria, F.D. [Univ. of Pisa (Italy)]; Glaeser, H. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), Garching (Germany)] [and others]

    1995-09-01

    An internationally agreed separate effects test (SET) Validation Matrix for thermal-hydraulic system codes has been established by a sub-group of the Task Group on Thermal Hydraulic System Behaviour, as requested by the OECD/NEA Committee on the Safety of Nuclear Installations (CSNI) Principal Working Group No. 2 on Coolant System Behaviour. The construction of such a Matrix is an attempt to collect together in a systematic way the best sets of openly available test data for code validation, assessment and improvement, and also for quantitative code assessment with respect to the quantification of uncertainties in the modelling of individual phenomena by the codes. The methodology developed in the process of establishing the CSNI SET validation matrix is itself an important outcome of the work. In addition, all the choices made from the 187 identified facilities covering the 67 phenomena are presented, together with some discussion of the data base.

  4. Validation matrix for the assessment of thermal-hydraulic codes for VVER LOCA and transients. A report by the OECD support group on the VVER thermal-hydraulic code validation matrix

    International Nuclear Information System (INIS)

    2001-06-01

    This report deals with an internationally agreed experimental test facility matrix for the validation of best estimate thermal-hydraulic computer codes applied for the analysis of VVER reactor primary systems in accident and transient conditions. Firstly, the main physical phenomena that occur during the considered accidents are identified, test types are specified, and test facilities that supplement the CSNI CCVMs and are suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. The construction of VVER Thermal-Hydraulic Code Validation Matrix follows the logic of the CSNI Code Validation Matrices (CCVM). Similar to the CCVM it is an attempt to collect together in a systematic way the best sets of available test data for VVER specific code validation, assessment and improvement, including quantitative assessment of uncertainties in the modelling of phenomena by the codes. In addition to this objective, it is an attempt to record information which has been generated in countries operating VVER reactors over the last 20 years so that it is more accessible to present and future workers in that field than would otherwise be the case. (authors)

  5. In-vessel core degradation code validation matrix

    International Nuclear Information System (INIS)

    Haste, T.J.; Adroguer, B.; Gauntt, R.O.; Martinez, J.A.; Ott, L.J.; Sugimoto, J.; Trambauer, K.

    1996-01-01

    The objective of the current Validation Matrix is to define a basic set of experiments, for which comparison of the measured and calculated parameters forms a basis for establishing the accuracy of test predictions, covering the full range of in-vessel core degradation phenomena expected in light water reactor severe accident transients. The scope of the review covers PWR and BWR designs of Western origin: the coverage of phenomena extends from the initial heat-up through to the introduction of melt into the lower plenum. Concerning fission product behaviour, the effect of core degradation on fission product release is considered. The report provides brief overviews of the main LWR severe accident sequences and of the dominant phenomena involved. The experimental database is summarised. These data are cross-referenced against a condensed set of the phenomena and test condition headings presented earlier, judging the results against a set of selection criteria and identifying key tests of particular value. The main conclusions and recommendations are listed. (K.A.)

  6. Containment Code Validation Matrix

    International Nuclear Information System (INIS)

    Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah

    2014-01-01

    The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal hydraulic code validation (NEA/CSNI/R(1993)14) and In-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of a PWR, BWR, CANDU and VVER reactors. It also provides an overview of the ex-vessel corium retention (core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis is provided for each test. 
    A test description is also provided for each experiment.

  7. 45 CFR 162.1011 - Valid code sets.

    Science.gov (United States)

    2010-10-01

    Each code set is valid within the dates specified by the organization responsible for maintaining that code set.

  8. RELAP-7 Software Verification and Validation Plan: Requirements Traceability Matrix (RTM) Part 1 – Physics and numerical methods

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yong Joon [Idaho National Lab. (INL), Idaho Falls, ID (United States); Yoo, Jun Soo [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This INL plan comprehensively describes the Requirements Traceability Matrix (RTM) for the main physics and numerical methods of RELAP-7. The plan also describes the testing-based software verification and validation (SV&V) process: a set of specially designed software models used to test RELAP-7.

  9. CSNI Integral test facility validation matrix for the assessment of thermal-hydraulic codes for LWR LOCA and transients

    International Nuclear Information System (INIS)

    1996-07-01

    This report deals with an internationally agreed integral test facility (ITF) matrix for the validation of best estimate thermal-hydraulic computer codes. Firstly, the main physical phenomena that occur during the considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. The construction of such a matrix is an attempt to collect together in a systematic way the best sets of openly available test data for code validation, assessment and improvement, including quantitative assessment of uncertainties in the modelling of phenomena by the codes. In addition to this objective, it is an attempt to record information which has been generated around the world over the last 20 years so that it is more accessible to present and future workers in that field than would otherwise be the case.

  10. In-vessel core degradation code validation matrix update 1996-1999. Report by an OECD/NEA group of experts

    International Nuclear Information System (INIS)

    2001-02-01

    In 1991 the Committee on the Safety of Nuclear Installations (CSNI) issued a State-of-the-Art Report (SOAR) on In-Vessel Core Degradation in Light Water Reactor (LWR) Severe Accidents. Based on the recommendations of this report a Validation Matrix for severe accident modelling codes was produced. Experiments performed up to the end of 1993 were considered for this validation matrix. To include recent experiments and to enlarge the scope, an update was formally inaugurated in January 1999 by the Task Group on Degraded Core Cooling, a sub-group of Principal Working Group 2 (PWG-2) on Coolant System Behaviour, and a selection of writing group members was commissioned. The present report documents the results of this study. The objective of the Validation Matrix is to define a basic set of experiments, for which comparison of the measured and calculated parameters forms a basis for establishing the accuracy of test predictions, covering the full range of in-vessel core degradation phenomena expected in light water reactor severe accident transients. The emphasis is on integral experiments, where interactions amongst key phenomena as well as the phenomena themselves are explored; however separate-effects experiments are also considered especially where these extend the parameter ranges to cover those expected in postulated LWR severe accident transients. As well as covering PWR and BWR designs of Western origin, the scope of the review has been extended to Eastern European (VVER) types. Similarly, the coverage of phenomena has been extended, starting as before from the initial heat-up but now proceeding through the in-core stage to include introduction of melt into the lower plenum and further to core coolability and retention to the lower plenum, with possible external cooling. Items of a purely thermal hydraulic nature involving no core degradation are excluded, having been covered in other validation matrix studies. 
    Concerning fission product behaviour, the effect of core degradation on fission product release is considered.

  11. Development and validation of a job exposure matrix for physical risk factors in low back pain.

    Science.gov (United States)

    Solovieva, Svetlana; Pehkonen, Irmeli; Kausto, Johanna; Miranda, Helena; Shiri, Rahman; Kauppinen, Timo; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira

    2012-01-01

    The aim was to construct and validate a gender-specific job exposure matrix (JEM) for physical exposures to be used in epidemiological studies of low back pain (LBP). We utilized two large Finnish population surveys, one to construct the JEM and another to test matrix validity. The exposure axis of the matrix included exposures relevant to LBP (heavy physical work, heavy lifting, awkward trunk posture and whole body vibration) and exposures that increase the biomechanical load on the low back (arm elevation) or those that in combination with other known risk factors could be related to LBP (kneeling or squatting). Job titles with similar work tasks and exposures were grouped. Exposure information was based on face-to-face interviews. Validity of the matrix was explored by comparing the JEM (group-based) binary measures with individual-based measures. The predictive validity of the matrix against LBP was evaluated by comparing the associations of the group-based (JEM) exposures with those of individual-based exposures. The matrix includes 348 job titles, representing 81% of all Finnish job titles in the early 2000s. The specificity of the constructed matrix was good, especially in women. The validity measured with kappa-statistic ranged from good to poor, being fair for most exposures. In men, all group-based (JEM) exposures were statistically significantly associated with one-month prevalence of LBP. In women, four out of six group-based exposures showed an association with LBP. The gender-specific JEM for physical exposures showed relatively high specificity without compromising sensitivity. The matrix can therefore be considered as a valid instrument for exposure assessment in large-scale epidemiological studies, when more precise but more labour-intensive methods are not feasible. Although the matrix was based on Finnish data we foresee that it could be applicable, with some modifications, in other countries with a similar level of technology.
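The kappa statistic used above to gauge agreement between the group-based (JEM) and individual-based exposure measures can be illustrated with a short sketch; the data are invented and the function is a generic Cohen's kappa, not the study's actual computation.

```python
# Illustrative sketch of Cohen's kappa for agreement between a group-based
# (JEM) binary exposure assignment and individual-based reports.
# The data below are invented for demonstration, not from the study.

def cohens_kappa(a, b):
    """Cohen's kappa for two paired binary ratings (lists of 0/1)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n               # marginal positive rates
    p_exp = pa1 * pb1 + (1 - pa1) * (1 - pb1)       # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical example: the JEM assigns "heavy lifting" to the first half of
# jobs, and individual interviews mostly agree.
jem        = [1, 1, 1, 1, 0, 0, 0, 0]
individual = [1, 1, 1, 0, 0, 0, 0, 0]
print(cohens_kappa(jem, individual))  # prints 0.75
```

Kappa corrects raw agreement for agreement expected by chance, which is why it is preferred over simple percent agreement for validating group-based exposure assignments.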

  12. Development and validation of a job exposure matrix for physical risk factors in low back pain.

    Directory of Open Access Journals (Sweden)

    Svetlana Solovieva

    OBJECTIVES: The aim was to construct and validate a gender-specific job exposure matrix (JEM) for physical exposures to be used in epidemiological studies of low back pain (LBP). MATERIALS AND METHODS: We utilized two large Finnish population surveys, one to construct the JEM and another to test matrix validity. The exposure axis of the matrix included exposures relevant to LBP (heavy physical work, heavy lifting, awkward trunk posture and whole body vibration) and exposures that increase the biomechanical load on the low back (arm elevation) or those that in combination with other known risk factors could be related to LBP (kneeling or squatting). Job titles with similar work tasks and exposures were grouped. Exposure information was based on face-to-face interviews. Validity of the matrix was explored by comparing the JEM (group-based) binary measures with individual-based measures. The predictive validity of the matrix against LBP was evaluated by comparing the associations of the group-based (JEM) exposures with those of individual-based exposures. RESULTS: The matrix includes 348 job titles, representing 81% of all Finnish job titles in the early 2000s. The specificity of the constructed matrix was good, especially in women. The validity measured with kappa-statistic ranged from good to poor, being fair for most exposures. In men, all group-based (JEM) exposures were statistically significantly associated with one-month prevalence of LBP. In women, four out of six group-based exposures showed an association with LBP. CONCLUSIONS: The gender-specific JEM for physical exposures showed relatively high specificity without compromising sensitivity. The matrix can therefore be considered as a valid instrument for exposure assessment in large-scale epidemiological studies, when more precise but more labour-intensive methods are not feasible.
    Although the matrix was based on Finnish data we foresee that it could be applicable, with some modifications, in other countries with a similar level of technology.

  13. Discriminant Validity of the WISC-IV Culture-Language Interpretive Matrix

    Science.gov (United States)

    Styck, Kara M.; Watkins, Marley W.

    2014-01-01

    The Culture-Language Interpretive Matrix (C-LIM) was developed to help practitioners determine the validity of test scores obtained from students who are culturally and linguistically different from the normative group of a test. The present study used an idiographic approach to investigate the diagnostic utility of the C-LIM for the Wechsler…

  14. Automatic Generation of Validated Specific Epitope Sets

    Directory of Open Access Journals (Sweden)

    Sebastian Carrasco Pro

    2015-01-01

    Accurate measurement of B and T cell responses is a valuable tool to study autoimmunity, allergies, immunity to pathogens, and host-pathogen interactions, and to assist in the design and evaluation of T cell vaccines and immunotherapies. In this context, it is desirable to elucidate a method to select validated reference sets of epitopes to allow detection of T and B cells. However, the ever-growing information contained in the Immune Epitope Database (IEDB) and the differences in quality and subjects studied between epitope assays make this task complicated. In this study, we develop a novel method to automatically select reference epitope sets according to a categorization system employed by the IEDB. From the sets generated, three epitope sets (EBV, mycobacteria and dengue) were experimentally validated by detection of T cell reactivity ex vivo from human donors. Furthermore, a web application that will potentially be implemented in the IEDB was created to allow users the capacity to generate customized epitope sets.
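The selection idea described above can be sketched in a few lines: group database records by a category (here, organism) and keep the most frequently confirmed epitopes per category. The field names, record schema and data are hypothetical illustrations, not the IEDB's actual schema or the authors' algorithm.

```python
# Minimal sketch (not the IEDB's real schema or selection criteria) of
# building per-organism reference epitope sets from assay records.
from collections import defaultdict

def select_reference_sets(records, top_k=2):
    """records: dicts with hypothetical fields 'organism', 'epitope',
    'positive_assays'; returns the top_k best-supported epitopes per organism."""
    by_organism = defaultdict(list)
    for r in records:
        by_organism[r["organism"]].append(r)
    reference_sets = {}
    for org, recs in by_organism.items():
        recs.sort(key=lambda r: r["positive_assays"], reverse=True)
        reference_sets[org] = [r["epitope"] for r in recs[:top_k]]
    return reference_sets

records = [  # invented example data
    {"organism": "EBV", "epitope": "PEP_A", "positive_assays": 40},
    {"organism": "EBV", "epitope": "PEP_B", "positive_assays": 25},
    {"organism": "EBV", "epitope": "PEP_C", "positive_assays": 10},
    {"organism": "dengue", "epitope": "PEP_D", "positive_assays": 8},
]
print(select_reference_sets(records))
```

A real implementation would rank on the IEDB's categorization system rather than a single count, but the group-then-rank structure is the same.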

  15. Generation of the covariance matrix for a set of nuclear data produced by collapsing a larger parent set through the weighted averaging of equivalent data points

    International Nuclear Information System (INIS)

    Smith, D.L.

    1987-01-01

    A method is described for generating the covariance matrix of a set of experimental nuclear data which has been collapsed in size by the averaging of equivalent data points belonging to a larger parent data set. It is assumed that the data values and covariance matrix for the parent set are provided. The collapsed set is obtained by a proper weighted-averaging procedure based on the method of least squares. It is then shown by means of the law of error propagation that the elements of the covariance matrix for the collapsed set are linear combinations of elements from the parent set covariance matrix. The coefficients appearing in these combinations are binary products of the same coefficients which appear as weighting factors in the data collapsing procedure. As an example, the procedure is applied to a collection of recently-measured integral neutron-fission cross-section ratios. (orig.)
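The collapsing procedure described above can be sketched numerically: a weighting matrix W averages equivalent points, and the law of error propagation then gives the collapsed covariance as W C W^T, whose entries are exactly the binary products of weighting coefficients mentioned in the abstract. The inverse-variance weights below are an illustrative choice of least-squares weights, not the paper's exact scheme.

```python
import numpy as np

def collapse(values, cov, groups):
    """Collapse equivalent data points by weighted averaging.
    values: 1-D array; cov: covariance matrix of the parent set;
    groups: list of index lists, each group averaged into one point."""
    n, m = len(values), len(groups)
    W = np.zeros((m, n))
    for i, g in enumerate(groups):
        # inverse-variance weights within each group (least-squares average)
        w = 1.0 / np.diag(cov)[g]
        W[i, g] = w / w.sum()
    collapsed_values = W @ values
    collapsed_cov = W @ cov @ W.T   # law of error propagation
    return collapsed_values, collapsed_cov

# Hypothetical parent set: points 0 and 1 are equivalent measurements.
vals, cov2 = collapse(np.array([1.0, 3.0, 5.0]),
                      np.diag([1.0, 1.0, 4.0]),
                      [[0, 1], [2]])
print(vals)   # averaged point first, untouched point second
```

Note that off-diagonal elements of the parent covariance, when present, propagate into the collapsed matrix through the same W C W^T product.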

  16. Minimal set of auxiliary fields and S-matrix for extended supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Fradkin, E S; Vasiliev, M A [Lebedev Physical Institute, Moscow]

    1979-05-19

    A minimal set of auxiliary fields for linearized SO(2) supergravity and a one-parameter extension of the minimal auxiliary fields in SO(1) supergravity are constructed. The expression for the S-matrix in SO(2) supergravity is given.

  17. Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection

    Science.gov (United States)

    Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd

    2015-02-01

    Microarray technology involves placing an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. Its widespread adoption is largely due to its ability to perform simultaneous analysis of thousands of genes in a massively parallel manner in one experiment. Hence, it provides valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) measured on just dozens of samples, owing to various constraints. Therefore, the sample covariance matrix in Hotelling's T2 statistic is not positive definite and becomes singular, so it cannot be inverted. In this research, Hotelling's T2 statistic is combined with a shrinkage approach as an alternative estimation of the covariance matrix to detect significant gene sets. The use of a shrinkage covariance matrix overcomes the singularity problem by converting an unbiased estimator into an improved biased estimator of the covariance matrix. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques under many tested conditions.
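The shrinkage idea in this abstract can be sketched as follows; the scaled-identity target, the fixed shrinkage intensity and the trimming fraction are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np

def trimmed_mean(x, prop=0.1):
    """Mean after trimming a proportion of extreme values from each tail."""
    x = np.sort(x)
    k = int(len(x) * prop)
    return x[k:len(x) - k].mean()

def shrinkage_cov(X, lam=0.2, prop=0.1):
    """Shrink the sample covariance toward a scaled-identity target.
    X: (n samples x p variables), typically p >> n as in microarray data."""
    centred = X - np.apply_along_axis(trimmed_mean, 0, X, prop)
    S = centred.T @ centred / (X.shape[0] - 1)   # singular when p > n
    target = np.mean(np.diag(S)) * np.eye(S.shape[1])
    return (1 - lam) * S + lam * target          # positive definite for lam > 0

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))                    # n=10 samples, p=50 genes
print(np.linalg.eigvalsh(shrinkage_cov(X)).min() > 0)  # True: now invertible
```

Because the shrunk matrix is invertible, it can replace the singular sample covariance inside Hotelling's T2; robustness enters through the trimmed-mean centring.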

  18. Setting research priorities by applying the combined approach matrix.

    Science.gov (United States)

    Ghaffar, Abdul

    2009-04-01

    Priority setting in health research is a dynamic process. Different organizations and institutes have been working in the field of research priority setting for many years. In 1999 the Global Forum for Health Research presented a research priority setting tool called the Combined Approach Matrix or CAM. Since its development, the CAM has been successfully applied to set research priorities for diseases, conditions and programmes at global, regional and national levels. This paper briefly explains the CAM methodology and how it could be applied in different settings, giving examples and describing challenges encountered in the process of setting research priorities and providing recommendations for further work in this field. The construct and design of the CAM is explained along with the different steps needed, including planning and organization of a priority-setting exercise and how it could be applied in different settings. The application of the CAM is described using three examples. The first concerns setting research priorities for a global programme, the second describes application at the country level and the third setting research priorities for diseases. Effective application of the CAM in different and diverse environments proves its utility as a tool for setting research priorities. Potential challenges encountered in the process of research priority setting are discussed and some recommendations for further work in this field are provided.

  19. Reliability and Validity of 10 Different Standard Setting Procedures.

    Science.gov (United States)

    Halpin, Glennelle; Halpin, Gerald

    Research indicating that different cut-off points result from the use of different standard-setting techniques leaves decision makers with a disturbing dilemma: Which standard-setting method is best? This investigation of the reliability and validity of 10 different standard-setting approaches was designed to provide information that might help…

  20. Use of the recognition heuristic depends on the domain's recognition validity, not on the recognition validity of selected sets of objects.

    Science.gov (United States)

    Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E

    2017-07-01

    According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, success-and thus usefulness-of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity), or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.
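The recognition validity referred to above has a simple operational definition: among pairs in which exactly one object is recognised, the proportion for which the recognised object also has the higher criterion value. A small sketch with invented data:

```python
from itertools import combinations

def recognition_validity(objects):
    """objects: dict name -> (recognised: bool, criterion value).
    Returns the proportion of discriminating pairs (exactly one object
    recognised) in which the recognised object has the higher value."""
    correct = discriminating = 0
    for a, b in combinations(objects, 2):
        (ra, va), (rb, vb) = objects[a], objects[b]
        if ra == rb:
            continue  # pair does not discriminate by recognition
        discriminating += 1
        recognised_value, unrecognised_value = (va, vb) if ra else (vb, va)
        if recognised_value > unrecognised_value:
            correct += 1
    return correct / discriminating

cities = {  # hypothetical recognition data and population criterion
    "A": (True, 3_500_000),
    "B": (True, 800_000),
    "C": (False, 600_000),
    "D": (False, 950_000),
}
print(recognition_validity(cities))  # prints 0.75
```

Computed over a selected set this gives the set's recognition validity; computed over the whole domain of objects it gives the domain's recognition validity, the quantity the study finds participants actually track.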

  1. Numerical modelling of transdermal delivery from matrix systems: parametric study and experimental validation with silicone matrices.

    Science.gov (United States)

    Snorradóttir, Bergthóra S; Jónsdóttir, Fjóla; Sigurdsson, Sven Th; Másson, Már

    2014-08-01

    A model is presented for transdermal drug delivery from single-layered silicone matrix systems. The work is based on our previous results that, in particular, extend the well-known Higuchi model. Recently, we have introduced a numerical transient model describing matrix systems where the drug dissolution can be non-instantaneous. Furthermore, our model can describe complex interactions within a multi-layered matrix and the matrix to skin boundary. The power of the modelling approach presented here is further illustrated by allowing the possibility of a donor solution. The model is validated by a comparison with experimental data, as well as validating the parameter values against each other, using various configurations with donor solution, silicone matrix and skin. Our results show that the model is a good approximation to real multi-layered delivery systems. The model offers the ability of comparing drug release for ibuprofen and diclofenac, which cannot be analysed by the Higuchi model because the dissolution in the latter case turns out to be limited. The experiments and numerical model outlined in this study could also be adjusted to more general formulations, which enhances the utility of the numerical model as a design tool for the development of drug-loaded matrices for trans-membrane and transdermal delivery. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
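For context, the classical Higuchi law that the numerical model above extends predicts cumulative release proportional to the square root of time. A minimal sketch with invented parameter values:

```python
import math

def higuchi_release(t, D=1e-6, A=50.0, Cs=5.0):
    """Classical Higuchi law: cumulative drug released per unit area from a
    matrix, Q(t) = sqrt(D * (2*A - Cs) * Cs * t), valid while A > Cs and drug
    remains. t: time; D: diffusivity in the matrix; A: initial loading;
    Cs: drug solubility in the matrix (A, Cs per unit volume).
    Parameter defaults are invented for illustration."""
    return math.sqrt(D * (2 * A - Cs) * Cs * t)

# The signature square-root kinetics: quadrupling time doubles release.
print(higuchi_release(4.0) / higuchi_release(1.0))  # prints 2.0
```

The abstract's point is precisely that this closed form breaks down when dissolution is non-instantaneous or the system is multi-layered, which is what the numerical transient model handles.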

  2. Kohn-Sham potentials from electron densities using a matrix representation within finite atomic orbital basis sets

    Science.gov (United States)

    Zhang, Xing; Carter, Emily A.

    2018-01-01

    We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.

  3. Evaluation of the confusion matrix method in the validation of an automated system for measuring feeding behaviour of cattle.

    Science.gov (United States)

    Ruuska, Salla; Hämäläinen, Wilhelmiina; Kajava, Sari; Mughal, Mikaela; Matilainen, Pekka; Mononen, Jaakko

    2018-03-01

    The aim of the present study was to empirically evaluate the confusion matrix method in device validation. We compared the confusion matrix method to linear regression and error indices in the validation of a device measuring feeding behaviour of dairy cattle. In addition, we studied how to extract additional information on classification errors with confusion probabilities. The data consisted of 12 h behaviour measurements from five dairy cows; feeding and other behaviour were detected simultaneously with a device and from video recordings. The resulting 216 000 pairs of classifications were used to construct confusion matrices and calculate performance measures. In addition, hourly durations of each behaviour were calculated and the accuracy of measurements was evaluated with linear regression and error indices. All three validation methods agreed when the behaviour was detected very accurately or inaccurately. Otherwise, in the intermediate cases, the confusion matrix method and error indices produced relatively concordant results, but the linear regression method often disagreed with them. Our study supports the use of confusion matrix analysis in validation since it is robust to any data distribution and type of relationship, it makes a stringent evaluation of validity, and it offers extra information on the type and sources of errors. Copyright © 2018 Elsevier B.V. All rights reserved.
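The confusion-matrix bookkeeping described above can be sketched as follows; the behaviour labels and counts are invented, and the row-normalisation shown is one way to obtain confusion probabilities of the kind the authors mention.

```python
import numpy as np

# Paired per-interval classifications (video reference vs device) are
# cross-tabulated; row-normalising gives P(device label | reference label).
BEHAVIOURS = ["feeding", "other"]

def confusion_matrix(reference, device):
    n = len(BEHAVIOURS)
    cm = np.zeros((n, n), dtype=int)
    for r, d in zip(reference, device):
        cm[BEHAVIOURS.index(r), BEHAVIOURS.index(d)] += 1
    return cm

def confusion_probabilities(cm):
    """Row-normalised confusion matrix: each row sums to 1."""
    return cm / cm.sum(axis=1, keepdims=True)

# Invented 10-interval example.
reference = ["feeding"] * 6 + ["other"] * 4
device    = ["feeding"] * 5 + ["other"] + ["other"] * 3 + ["feeding"]
cm = confusion_matrix(reference, device)
print(cm)                          # rows: reference, columns: device
print(confusion_probabilities(cm))
```

Unlike regression on hourly durations, this comparison is made interval by interval, which is why it remains meaningful regardless of the data's distribution.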

  4. The Visual Matrix Method: Imagery and Affect in a Group-Based Research Setting

    Directory of Open Access Journals (Sweden)

    Lynn Froggett

    2015-07-01

The visual matrix is a method for researching shared experience, stimulated by sensory material relevant to a research question. It is led by imagery, visualization and affect, which in the matrix take precedence over discourse. The method enables the symbolization of imaginative and emotional material, which might not otherwise be articulated, and allows "unthought" dimensions of experience to emerge into consciousness in a participatory setting. We describe the process of the matrix with reference to the study "Public Art and Civic Engagement" (FROGGETT, MANLEY, ROY, PRIOR & DOHERTY, 2014), in which it was developed and tested. Subsequently, examples of its use in other contexts are provided. Both the matrix and post-matrix discussions are described, as is the interpretive process that follows. Theoretical sources are highlighted: its origins in social dreaming; the atemporal, associative nature of the thinking during and after the matrix, which we describe through the Deleuzian idea of the rhizome; and the hermeneutic analysis, which draws from object relations theory and the Lorenzerian tradition of scenic understanding. The matrix has been conceptualized as a "scenic rhizome" to account for its distinctive quality and hybrid origins in research practice. The scenic rhizome operates as a "third" between participants and the "objects" of contemplation. We suggest that some of the drawbacks of other group-based methods are avoided in the visual matrix, namely the tendency for inter-personal dynamics to dominate the event. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs150369

  5. Good validity of the international spinal cord injury quality of life basic data set

    DEFF Research Database (Denmark)

    Post, M W M; Adriaansen, J J E; Charlifue, S

    2016-01-01

STUDY DESIGN: Cross-sectional validation study. OBJECTIVES: To examine the construct and concurrent validity of the International Spinal Cord Injury (SCI) Quality of Life (QoL) Basic Data Set. SETTING: Dutch community. PARTICIPANTS: People 28-65 years of age, who obtained their SCI between 18 and 35 years of age, were at least 10 years post SCI and were wheelchair users in daily life. MEASURE(S): The International SCI QoL Basic Data Set consists of three single items on satisfaction with life as a whole, physical health and psychological health (0=complete dissatisfaction; 10=complete satisfaction). […] and psychological health (0.70). CONCLUSIONS: This first validity study of the International SCI QoL Basic Data Set shows that it appears valid for persons with SCI.

  6. Green's matrix for a second-order self-adjoint matrix differential operator

    International Nuclear Information System (INIS)

    Sisman, Tahsin Cagri; Tekin, Bayram

    2010-01-01

A systematic construction of the Green's matrix for a second-order self-adjoint matrix differential operator from the linearly independent solutions of the corresponding homogeneous differential equation set is carried out. We follow the general approach of extracting the Green's matrix from the Green's matrix of the corresponding first-order system. This construction is required in cases where the differential equation set cannot be transformed into an algebraic equation set via transform techniques.

  7. The development and validation of the Closed-set Mandarin Sentence (CMS) test.

    Science.gov (United States)

    Tao, Duo-Duo; Fu, Qian-Jie; Galvin, John J; Yu, Ya-Feng

    2017-09-01

Matrix-styled sentence tests offer a closed-set paradigm that may be useful when evaluating speech intelligibility. Ideally, sentence test materials should reflect the distribution of phonemes within the target language. We developed and validated the Closed-set Mandarin Sentence (CMS) test to assess Mandarin speech intelligibility in noise. CMS test materials were selected to be familiar words and to represent the natural distribution of vowels, consonants, and lexical tones found in Mandarin Chinese. Ten key words in each of five categories (Name, Verb, Number, Color, and Fruit) were produced by a native Mandarin talker, resulting in a total of 50 words that could be combined to produce 100,000 unique sentences. Normative data were collected in 10 normal-hearing, adult Mandarin-speaking Chinese listeners using a closed-set test paradigm. Two test runs were conducted for each subject, and 20 sentences per run were randomly generated while ensuring that each word was presented only twice in each run. First, the levels of the words in each category were adjusted to produce equal intelligibility in noise. Test-retest reliability for word-in-sentence recognition was excellent according to Cronbach's alpha (0.952). After the category level adjustments, speech reception thresholds (SRTs) for sentences in noise, defined as the signal-to-noise ratio (SNR) that produced 50% correct whole-sentence recognition, were adaptively measured by adjusting the SNR according to the correctness of the response. The mean SRT was -7.9 (SE=0.41) and -8.1 (SE=0.34) dB for runs 1 and 2, respectively. The mean standard deviation across runs was 0.93 dB, and paired t-tests showed no significant difference between runs 1 and 2 (p=0.74) despite random sentences being generated for each run and each subject. The results suggest that the CMS provides a large stimulus set with which to repeatedly and reliably measure Mandarin-speaking listeners' speech understanding in noise using a closed-set paradigm.
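The adaptive SRT measurement described above can be sketched as a simple 1-up/1-down staircase: the SNR is lowered after a correct whole-sentence response and raised after an error, so the track oscillates around the 50%-correct point. This is a generic sketch, not the authors' exact tracking rule; the step size and the deterministic stand-in listener are illustrative.

```python
def measure_srt(respond, start_snr=0.0, step=2.0, trials=30):
    """1-up/1-down staircase; returns the mean SNR at the reversal points."""
    snr, reversals, last = start_snr, [], None
    for _ in range(trials):
        correct = respond(snr)
        if last is not None and correct != last:
            reversals.append(snr)   # direction change: record this SNR
        last = correct
        snr += -step if correct else step
    return sum(reversals) / len(reversals)

# Deterministic stand-in listener with a true threshold of -8 dB SNR
listener = lambda snr: snr >= -8.0
print(round(measure_srt(listener), 2))
```

With a real listener the responses are probabilistic, and the mean of the reversal SNRs estimates the 50%-correct SRT.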

  8. Assessing the validity of commercial and municipal food environment data sets in Vancouver, Canada.

    Science.gov (United States)

    Daepp, Madeleine Ig; Black, Jennifer

    2017-10-01

The present study assessed systematic bias and the effects of data set error on the validity of food environment measures in two municipal and two commercial secondary data sets. Sensitivity, positive predictive value (PPV) and concordance were calculated by comparing two municipal and two commercial secondary data sets with ground-truthed data collected within 800 m buffers surrounding twenty-six schools. Logistic regression examined associations of sensitivity and PPV with commercial density and neighbourhood socio-economic deprivation. Kendall's τ estimated correlations between density and proximity of food outlets near schools constructed with secondary data sets v. ground-truthed data. SETTING: Vancouver, Canada. SUBJECTS: Food retailers located within 800 m of twenty-six schools. RESULTS: All data sets scored relatively poorly across validity measures, although, overall, municipal data sets had higher levels of validity than did commercial data sets. Food outlets were more likely to be missing from municipal health inspection lists and commercial data sets in neighbourhoods with higher commercial density. Still, both proximity and density measures constructed from all secondary data sets were highly correlated (Kendall's τ>0·70) with measures constructed from ground-truthed data. Despite relatively low levels of validity in all secondary data sets examined, food environment measures constructed from secondary data sets remained highly correlated with ground-truthed data. Findings suggest that secondary data sets can be used to measure the food environment, although estimates should be treated with caution in areas with high commercial density.
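The validity measures used above reduce to simple set comparisons between a secondary outlet list and the ground-truthed list. A minimal sketch, with illustrative outlet names rather than the study's data:

```python
def validity(ground_truth, secondary):
    """Sensitivity, PPV and concordance of a secondary outlet list."""
    gt, sec = set(ground_truth), set(secondary)
    true_pos = gt & sec                      # outlets present in both sources
    sensitivity = len(true_pos) / len(gt)    # share of real outlets captured
    ppv = len(true_pos) / len(sec)           # share of listed outlets that exist
    concordance = len(true_pos) / len(gt | sec)
    return sensitivity, ppv, concordance

ground = {"cafe_a", "grocer_b", "pizza_c", "deli_d"}
listed = {"cafe_a", "grocer_b", "closed_e"}   # one missing pair, one phantom entry
print(validity(ground, listed))
```

Low sensitivity flags outlets missing from the secondary list; low PPV flags listed outlets that no longer exist on the ground.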

  9. Good validity of the international spinal cord injury quality of life basic data set

    NARCIS (Netherlands)

    Post, M. W. M.; Adriaansen, J. J. E.; Charlifue, S.; Biering-Sorensen, F.; van Asbeck, F. W. A.

    Study design: Cross-sectional validation study. Objectives: To examine the construct and concurrent validity of the International Spinal Cord Injury (SCI) Quality of Life (QoL) Basic Data Set. Setting: Dutch community. Participants: People 28-65 years of age, who obtained their SCI between 18 and 35

  10. Development and validation of an Argentine set of facial expressions of emotion.

    Science.gov (United States)

    Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro

    2017-02-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.

  11. Dynamic Matrix Rank

    DEFF Research Database (Denmark)

    Frandsen, Gudmund Skovbjerg; Frandsen, Peter Frands

    2009-01-01

We consider maintaining information about the rank of a matrix under changes of the entries. For n×n matrices, we show an upper bound of O(n^1.575) arithmetic operations and a lower bound of Ω(n) arithmetic operations per element change. The upper bound is valid when changing up to O(n^0.575) entries in a single column of the matrix. We also give an algorithm that maintains the rank using O(n^2) arithmetic operations per rank-one update. These bounds appear to be the first nontrivial bounds for the problem. The upper bounds are valid for arbitrary fields, whereas the lower bound is valid for algebraically closed fields. The upper bound for element updates uses fast rectangular matrix multiplication, and the lower bound involves further development of an earlier technique for proving lower bounds for dynamic computation of rational functions.

  12. Numerical Aspects of Atomic Physics: Helium Basis Sets and Matrix Diagonalization

    Science.gov (United States)

    Jentschura, Ulrich; Noble, Jonathan

    2014-03-01

We present a matrix diagonalization algorithm for complex symmetric matrices, which can be used to determine the resonance energies of auto-ionizing states of comparatively simple quantum many-body systems such as helium. The algorithm is based on multi-precision arithmetic and proceeds via a tridiagonalization of the complex symmetric (not necessarily Hermitian) input matrix using generalized Householder transformations. Example calculations involving so-called PT-symmetric quantum systems lead to reference values which pertain to the imaginary cubic perturbation (the imaginary cubic anharmonic oscillator). We then proceed to novel basis sets for the helium atom and present results for Bethe logarithms in hydrogen and helium, obtained using the enhanced numerical techniques. Some intricacies of ``canned'' algorithms such as those used in LAPACK will be discussed. Our algorithm, for complex symmetric matrices such as those describing cubic resonances after complex scaling, is faster than LAPACK's built-in routines for specific classes of input matrices. It also offers flexibility in terms of the calculation of the so-called implicit shift, which is used in order to ``pivot'' the system toward convergence to diagonal form. We conclude with a wider overview.

  13. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.

  14. Signal-to-noise assessment for diffusion tensor imaging with single data set and validation using a difference image method with data from a multicenter study

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhiyue J., E-mail: jerry.wang@childrens.com [Department of Radiology, Children' s Medical Center, Dallas, Texas 75235 and Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas 75390 (United States); Chia, Jonathan M. [Clinical Science, Philips Healthcare, Cleveland, Ohio 44143 (United States); Ahmed, Shaheen; Rollins, Nancy K. [Department of Radiology, Children' s Medical Center, Dallas, TX 75235 and Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX 75390 (United States)

    2014-09-15

Purpose: To describe a quantitative method for determination of SNR that extracts the local noise level using a single diffusion data set. Methods: Brain data sets came from a multicenter study (eight sites; three MR vendors). The data acquisition protocol required b = 0 and 700 s/mm², FOV = 256 × 256 mm², acquisition matrix size 128 × 128, reconstruction matrix size 256 × 256, 30 gradient encoding directions and voxel size 2 × 2 × 2 mm³. Regions-of-interest (ROI) were placed manually on the b = 0 image volume on transverse slices, and signal was recorded as the mean value of the ROI. The noise level from the ROI was evaluated using Fourier-transform-based Butterworth high-pass filtering. Patients were divided into two groups, one for filter parameter optimization (N = 17) and one for validation (N = 10). Six white matter areas (the genu and splenium of the corpus callosum, right and left centrum semiovale, right and left anterior corona radiata) were analyzed. The Bland–Altman method was used to compare the resulting SNR with that from the difference image method. The filter parameters were optimized for each brain area, and a set of “global” parameters was also obtained, which represents an average of all regions. Results: The Bland–Altman analysis on the validation group using “global” filter parameters revealed that the 95% limits of agreement of percent bias between the SNR obtained with the new and the reference methods were −15.5% (median of the lower limits, range [−24.1%, −8.9%]) and 14.5% (median of the higher limits, range [12.7%, 18.0%]) for the 6 brain areas. Conclusions: An FT-based high-pass filtering method can be used for local-area SNR assessment using only one DTI data set. This method could be used to evaluate SNR for patient studies in a multicenter setting.
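The noise-extraction idea above can be sketched as follows: the ROI is high-pass filtered with a radial Butterworth filter built in the Fourier domain, and the local noise level is taken as the standard deviation of the filtered residual, while the signal is the ROI mean. The cutoff and filter order below are illustrative, not the study's optimized parameters.

```python
import numpy as np

def roi_noise_sd(roi, cutoff=0.25, order=4):
    """Noise SD of a 2D ROI via a radial Butterworth high-pass filter."""
    rows, cols = roi.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.hypot(fy, fx)                      # radial spatial frequency
    # Butterworth high-pass response: ~0 at low f, ~1 well above cutoff
    hp = 1.0 / (1.0 + (cutoff / np.maximum(f, 1e-12)) ** (2 * order))
    residual = np.fft.ifft2(np.fft.fft2(roi) * hp).real
    return residual.std()

# Synthetic ROI: a smooth periodic signal plus Gaussian noise of SD 5
rng = np.random.default_rng(0)
smooth = 150.0 + 10.0 * np.sin(2 * np.pi * np.arange(64) / 64)[:, None] * np.ones((1, 64))
noisy = smooth + rng.normal(0.0, 5.0, size=(64, 64))
sd = roi_noise_sd(noisy)
snr = noisy.mean() / sd
print(sd, snr)
```

The high-pass filter removes the slowly varying anatomy-like background while keeping most of the broadband noise, so the residual SD tracks (and slightly underestimates) the true noise level.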

  15. Validation of the TRUST tool in a Greek perioperative setting.

    Science.gov (United States)

    Chatzea, Vasiliki-Eirini; Sifaki-Pistolla, Dimitra; Dey, Nilanjan; Melidoniotis, Evangelos

    2017-06-01

The aim of this study was to translate, culturally adapt and validate the TRUST questionnaire in a Greek perioperative setting. The TRUST questionnaire assesses the relationship between trust and performance. The study assessed the levels of trust and performance in the surgery and anaesthesiology department during a very stressful period for Greece (the economic crisis) and offered a user-friendly and robust assessment tool. The study concludes that the Greek version of the TRUST questionnaire is a reliable and valid instrument for measuring team performance among Greek perioperative teams. Copyright the Association for Perioperative Practice.

  16. Development and characterization of a snapshot Mueller matrix polarimeter for the determination of cervical cancer risk in the low resource setting

    Science.gov (United States)

    Ramella-Roman, Jessica C.; Gonzalez, Mariacarla; Chue-Sang, Joseph; Montejo, Karla; Krup, Karl; Srinivas, Vijaya; DeHoog, Edward; Madhivanan, Purnima

    2018-04-01

Mueller matrix polarimetry can provide useful information about the function and structure of the extracellular matrix. Mueller matrix systems are sophisticated and costly optical tools that have been used primarily in laboratory or hospital settings. Here we introduce a low-cost snapshot Mueller matrix polarimeter that does not require external power, has no moving parts, and can acquire a full Mueller matrix in less than 50 milliseconds. We utilized this technology in the study of cervical cancer in Mysore, India, yet the system could be translated to multiple diagnostic applications.

  17. The provisional matrix: setting the stage for tissue repair outcomes.

    Science.gov (United States)

    Barker, Thomas H; Engler, Adam J

    2017-07-01

Since its conceptualization in the 1980s, the provisional matrix has often been characterized as a simple fibrin-containing scaffold for wound healing that supports the nascent blood clot and is functionally distinct from the basement membrane. However, subsequent advances have shown that this matrix is far from passive, with distinct compositional differences as the wound matures, and an active role in wound remodeling. Here we review the stages of this matrix, provide an update on the state of our understanding of the provisional matrix, and present some of the outstanding issues related to the provisional matrix, its components, and their assembly and use in vivo. Copyright © 2017. Published by Elsevier B.V.

  18. Matrix Sampling of Items in Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Ruth A. Childs

    2003-07-01

Matrix sampling of items, that is, division of a set of items into different versions of a test form, is used by several large-scale testing programs. Like other test designs, matrixed designs have both advantages and disadvantages. For example, testing time per student is less than if each student received all the items, but the comparability of student scores may decrease. Also, curriculum coverage is maintained, but reporting of scores becomes more complex. In this paper, matrixed designs are compared with more traditional designs in nine categories of costs: development costs, materials costs, administration costs, educational costs, scoring costs, reliability costs, comparability costs, validity costs, and reporting costs. In choosing among test designs, a testing program should examine the costs in light of its mandate(s), the content of the tests, and the financial resources available, among other considerations.
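The division of an item pool into test forms can be sketched as a simple round-robin deal: each student takes one short form, and the forms together cover the whole pool. The item names and form count below are illustrative.

```python
def matrix_sample(items, n_forms):
    """Deal the item pool round-robin into n_forms non-overlapping forms."""
    return [items[i::n_forms] for i in range(n_forms)]

pool = [f"item_{i:02d}" for i in range(12)]
forms = matrix_sample(pool, 3)
for form in forms:
    print(form)
```

This illustrates the trade-off named above: each student answers only a third of the pool (less testing time), but no student's score is based on the full item set (reduced comparability).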

  19. Review and evaluation of performance measures for survival prediction models in external validation settings

    Directory of Open Access Journals (Sweden)

    M. Shafiqur Rahman

    2017-04-01

Background: When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. Methods: An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Results: Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure, which tended to increase as censoring increased. Conclusions: We recommend that Uno's concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston's D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and is recommended for routine reporting. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive
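Harrell's concordance measure, discussed above, can be sketched in a few lines: among usable pairs (where one subject has an observed event before the other's follow-up time), it counts how often the model's risk score ranks the earlier event higher. The data below are illustrative.

```python
def harrell_c(times, events, risks):
    """Harrell's C-index for right-censored data (events: 1=event, 0=censored)."""
    conc = ties = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair usable only when i has an observed event before j's time
            if events[i] and times[i] < times[j]:
                usable += 1
                if risks[i] > risks[j]:
                    conc += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (conc + 0.5 * ties) / usable

times  = [2, 4, 5, 7, 9]
events = [1, 1, 0, 1, 0]          # 0 = censored observation
risks  = [0.9, 0.7, 0.8, 0.4, 0.2]
print(harrell_c(times, events, risks))
```

Because heavier censoring shrinks the set of usable pairs, the estimate can drift as censoring increases, which is the weakness the review notes for this measure.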

  20. All the mathematics in the world: logical validity and classical set theory

    Directory of Open Access Journals (Sweden)

    David Charles McCarty

    2017-12-01

A recognizable topological model construction shows that any consistent principles of classical set theory, including the validity of the law of the excluded third, together with a standard class theory, do not suffice to demonstrate the general validity of the law of the excluded third. This result calls into question the classical mathematician's ability to offer solid justifications for the logical principles he or she favors.

  1. Moving faces, looking places: validation of the Amsterdam Dynamic Facial Expression Set (ADFES)

    NARCIS (Netherlands)

    van der Schalk, J.; Hawk, S.T.; Fischer, A.H.; Doosje, B.

    2011-01-01

    We report two studies validating a new standardized set of filmed emotion expressions, the Amsterdam Dynamic Facial Expression Set (ADFES). The ADFES is distinct from existing datasets in that it includes a face-forward version and two different head-turning versions (faces turning toward and away

  2. The Impact of Goal Setting and Empowerment on Governmental Matrix Organizations

    Science.gov (United States)

    1993-09-01

…shared. In a study of matrix management, Eduardo Vasconcellos further describes various matrix structures in the Galbraith model. In a functional… Technology/LAR, Wright-Patterson AFB OH, 1992. Vasconcellos, Eduardo. "A Model For a Better Understanding of the Matrix Structure," IEEE Transactions on… project matrix, the project manager maintains more influence and the structure lies to the right of center (Vasconcellos, 1979:58). Different Types of…

  3. Older adult mistreatment risk screening: contribution to the validation of a screening tool in a domestic setting.

    Science.gov (United States)

    Lindenbach, Jeannette M; Larocque, Sylvie; Lavoie, Anne-Marise; Garceau, Marie-Luce

    2012-06-01

The hidden nature of older adult mistreatment renders its detection in the domestic setting particularly challenging. A validated screening instrument that can provide a systematic assessment of risk factors can facilitate this detection. One such instrument, the "expanded Indicators of Abuse" tool, has been previously validated in the Hebrew language in a hospital setting. The present study has contributed to the validation of the "e-IOA" in an English-speaking community setting in Ontario, Canada. It consisted of two phases: (a) a content validity review and adaptation of the instrument by experts throughout Ontario, and (b) an inter-rater reliability assessment by home visiting nurses. The adaptation, the "Mistreatment of Older Adult Risk Factors" tool, offers a comprehensive tool for screening in the home setting. This instrument is significant to professional practice as practitioners working with older adults will be better equipped to assess for risk of mistreatment.

  4. Validation of the PHEEM instrument in a Danish hospital setting

    DEFF Research Database (Denmark)

    Aspegren, Knut; Bastholt, Lars; Bested, K.M.

    2007-01-01

The Postgraduate Hospital Educational Environment Measure (PHEEM) has been translated into Danish and then validated, with good internal consistency, by 342 Danish junior and senior hospital doctors. Four of the 40 items are culturally dependent in the Danish hospital setting. Factor analysis demonstrated that seven items are interconnected. This information can be used to shorten the instrument by perhaps another three items.

  5. New angular quadrature sets: effect on the conditioning number of the LTSN two dimensional transport matrix

    International Nuclear Information System (INIS)

    Hauser, Eliete Biasotto; Romero, Debora Angrizano

    2009-01-01

The main objective of this work is to employ new angular quadrature sets based on Legendre and Chebyshev polynomials, and to analyse their effect on the conditioning number of the LTSN matrix for the discrete-ordinates neutron transport problem in two-dimensional Cartesian geometry, with isotropic scattering and one energy group, in non-multiplicative homogeneous domains.
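The two quadrature families named above are standard one-dimensional Gauss rules. A small sketch of how their nodes and weights are obtained (Gauss-Legendre from numpy, Chebyshev-Gauss in closed form for the weight 1/√(1−x²)); how such sets are assembled into SN angular quadratures is beyond this sketch:

```python
import numpy as np

def legendre_set(n):
    # Gauss-Legendre nodes/weights: exact up to polynomial degree 2n - 1 on [-1, 1]
    return np.polynomial.legendre.leggauss(n)

def chebyshev_set(n):
    # Chebyshev-Gauss nodes/weights for the weight function 1/sqrt(1 - x^2)
    k = np.arange(1, n + 1)
    nodes = np.cos((2 * k - 1) * np.pi / (2 * n))
    weights = np.full(n, np.pi / n)
    return nodes, weights

x, w = legendre_set(4)
print(np.sum(w * x**2))     # integral of x^2 over [-1, 1]
xc, wc = chebyshev_set(4)
print(np.sum(wc))           # integral of 1/sqrt(1 - x^2) over [-1, 1]
```

Different node placements change the angular discretization and hence the structure, and conditioning, of the resulting transport matrix.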

  6. An Ethical Issue Scale for Community Pharmacy Setting (EISP): Development and Validation.

    Science.gov (United States)

    Crnjanski, Tatjana; Krajnovic, Dusanka; Tadic, Ivana; Stojkov, Svetlana; Savic, Mirko

    2016-04-01

Many problems that arise when providing pharmacy services may contain some ethical components, and the aims of this study were to develop and validate a scale that could assess the difficulty of ethical issues, as well as the frequency of those occurrences in the everyday practice of community pharmacists. Development and validation of the scale was conducted in three phases: (1) generating items for the initial survey instrument after qualitative analysis; (2) defining the design and format of the instrument; (3) validation of the instrument. The constructed Ethical Issue Scale for community pharmacy setting has two parts containing the same 16 items, for assessing difficulty and frequency respectively. The results of the 171 completely filled out scales were analyzed (response rate 74.89%). The Cronbach's α value of the part of the instrument that examines the difficulty of ethical situations was 0.83, and for the part that examines the frequency of ethical situations it was 0.84. Test-retest reliability for both parts of the instrument was satisfactory, with all intraclass correlation coefficient (ICC) values above 0.6 (for the part that examines difficulty, ICC = 0.809; for the part that examines frequency, ICC = 0.929). The 16-item scale, as a self-assessment tool, demonstrated a high degree of content, criterion, and construct validity and test-retest reliability. The results support its use as a research tool to assess the difficulty and frequency of ethical issues in the community pharmacy setting. The validated scale needs to be further employed on a larger sample of pharmacists.
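The Cronbach's α values reported above follow from the standard formula α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch with illustrative scores (rows = respondents, columns = items), not the study's data:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score table."""
    k = len(scores[0])                       # number of items
    def var(xs):                             # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

scores = [
    [4, 5, 4], [3, 4, 3], [5, 5, 4],
    [2, 3, 2], [4, 4, 5],
]
print(round(cronbach_alpha(scores), 3))
```

Values around 0.8 and above, as reported for both parts of the scale, are conventionally taken to indicate good internal consistency.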

  7. On the validity of cosmological Fisher matrix forecasts

    International Nuclear Information System (INIS)

    Wolz, Laura; Kilbinger, Martin; Weller, Jochen; Giannantonio, Tommaso

    2012-01-01

We present a comparison of Fisher matrix forecasts for cosmological probes with Monte Carlo Markov Chain (MCMC) posterior likelihood estimation methods. We analyse the performance of future Dark Energy Task Force (DETF) stage-III and stage-IV dark-energy surveys using supernovae, baryon acoustic oscillations and weak lensing as probes. We concentrate in particular on the dark-energy equation-of-state parameters w0 and wa. For purely geometrical probes, and especially when marginalising over wa, we find considerable disagreement between the two methods, since in this case the Fisher matrix cannot reproduce the highly non-elliptical shape of the likelihood function. More quantitatively, the Fisher method underestimates the marginalized errors for purely geometrical probes by 30%-70%. For cases including structure formation such as weak lensing, we find that the posterior probability contours from the Fisher matrix estimation are in good agreement with the MCMC contours, with the forecasted errors changing only at the 5% level. We then explore non-linear transformations resulting in physically-motivated parameters and investigate whether these parameterisations exhibit Gaussian behaviour. We conclude that for purely geometrical probes and, more generally, in cases where it is not known whether the likelihood is close to Gaussian, the Fisher matrix is not the appropriate tool to produce reliable forecasts.
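The mechanics of a Fisher forecast can be illustrated with a toy Gaussian likelihood: for a model mean μ(x; a, b) = a + b·x with known noise σ, the Fisher matrix is F_ij = Σ_k (∂μ/∂θ_i)(∂μ/∂θ_j)/σ², and its inverse approximates the parameter covariance. This is only a two-parameter linear toy, not a cosmological survey forecast; the data values are illustrative.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 11)             # measurement positions
sigma = 0.1                               # known Gaussian noise level
derivs = np.stack([np.ones_like(x), x])   # rows: dmu/da, dmu/db for mu = a + b*x
F = derivs @ derivs.T / sigma**2          # Fisher information matrix
cov = np.linalg.inv(F)                    # forecast parameter covariance
errors = np.sqrt(np.diag(cov))           # forecast 1-sigma errors on (a, b)
print(errors)
```

For this linear model the likelihood really is Gaussian in (a, b), so the forecast is exact; the abstract's point is that for strongly non-Gaussian likelihoods (e.g., when marginalising over wa with geometrical probes) this ellipse approximation breaks down.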

  8. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    Science.gov (United States)

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
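The distinction drawn above between supported and unsupported Pareto-efficient solutions can be made concrete: a weighted-sum scan over two objectives (both to be maximized) only ever returns points on the convex hull of the objective space, so an unsupported Pareto point is missed, while a direct dominance filter keeps it. The objective vectors below are illustrative, not from a real permutation problem.

```python
def pareto_front(points):
    """Keep points not weakly dominated by any other point (maximization)."""
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)]

def weighted_sum_optima(points, weights):
    """Points found by scanning scalarizations w*f1 + (1 - w)*f2."""
    return {max(points, key=lambda p: w * p[0] + (1 - w) * p[1]) for w in weights}

points = [(0, 10), (10, 0), (4, 5), (1, 1)]
front = pareto_front(points)
supported = weighted_sum_optima(points, [i / 100 for i in range(101)])
print(front)          # (4, 5) is Pareto-efficient...
print(supported)      # ...but no weighting ever selects it
```

Here (4, 5) lies inside the convex hull spanned by (0, 10) and (10, 0), so the weighted-sum approach cannot reach it; this is exactly the kind of unsupported solution the note's heuristic is designed to recover.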

  9. Validation of the Thermo Scientific SureTect Escherichia coli O157:H7 Real-Time PCR Assay for Raw Beef and Produce Matrixes.

    Science.gov (United States)

    Cloke, Jonathan; Crowley, Erin; Bird, Patrick; Bastin, Ben; Flannery, Jonathan; Agin, James; Goins, David; Clark, Dorn; Radcliff, Roy; Wickstrand, Nina; Kauppinen, Mikko

    2015-01-01

The Thermo Scientific™ SureTect™ Escherichia coli O157:H7 Assay is a new real-time PCR assay which has been validated through the AOAC Research Institute (RI) Performance Tested Methods(SM) program for raw beef and produce matrixes. This validation study specifically validated the assay with 375 g 1:4 and 1:5 ratios of raw ground beef and raw beef trim in comparison to the U.S. Department of Agriculture, Food Safety and Inspection Service, Microbiology Laboratory Guidebook (USDA-FSIS/MLG) reference method, and with 25 g bagged spinach and fresh apple juice at a ratio of 1:10 in comparison to the International Organization for Standardization 16654:2001 reference method. For raw beef matrixes, the validation of both 1:4 and 1:5 ratios allows user flexibility with the enrichment protocol, although which of these two ratios is chosen by the laboratory should be based on specific test requirements. All matrixes were analyzed by Thermo Fisher Scientific, Microbiology Division, Vantaa, Finland, and Q Laboratories Inc., Cincinnati, Ohio, in the method developer study. Two of the matrixes (raw ground beef at both 1:4 and 1:5 ratios) and bagged spinach were additionally analyzed in the AOAC-RI controlled independent laboratory study, which was conducted by Marshfield Food Safety, Marshfield, Wisconsin. Using probability of detection statistical analysis, no significant difference was demonstrated by the SureTect kit in comparison to the USDA-FSIS reference method for raw beef matrixes, or to the ISO reference method for bagged spinach and apple juice. Inclusivity and exclusivity testing was conducted with 58 E. coli O157:H7 and 54 non-E. coli O157:H7 isolates, respectively, which demonstrated that the SureTect assay was able to detect all isolates of E. coli O157:H7 analyzed. In addition, all but one of the nontarget isolates were correctly interpreted as negative by the SureTect Software. The single isolate giving a positive result was an E

  10. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.

    Directory of Open Access Journals (Sweden)

    Guangwei Gao

    Full Text Available In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
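    The nuclear norm referred to above is the sum of the singular values of the residual (error) image; it is small when the error is low-rank, as it tends to be under contiguous occlusion. The following toy example is our own illustration of that property, not the authors' algorithm:

```python
import numpy as np

def nuclear_norm(E: np.ndarray) -> float:
    """Nuclear norm ||E||_* = sum of singular values; small when the
    residual image E is low-rank (e.g. a contiguous occlusion block)."""
    return float(np.linalg.svd(E, compute_uv=False).sum())

def residual(y_img: np.ndarray, A_imgs: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Representation residual kept as a 2-D image: y - sum_k x_k * A_k."""
    return y_img - np.tensordot(x, A_imgs, axes=1)

# A rank-1 residual has a much smaller nuclear norm than random noise
# of the same Frobenius norm (energy).
rng = np.random.default_rng(0)
u, v = rng.normal(size=(8, 1)), rng.normal(size=(1, 8))
low_rank = u @ v
noise = rng.normal(size=(8, 8))
noise *= np.linalg.norm(low_rank) / np.linalg.norm(noise)  # match energy
```

    Minimising the nuclear norm of the residual therefore favours structured, occlusion-like errors over pixel-wise noise, which is the intuition behind nuclear norm-based matrix regression.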

  11. Structural exploration for the refinement of anticancer matrix metalloproteinase-2 inhibitor designing approaches through robust validated multi-QSARs

    Science.gov (United States)

    Adhikari, Nilanjan; Amin, Sk. Abdul; Saha, Achintya; Jha, Tarun

    2018-03-01

    Matrix metalloproteinase-2 (MMP-2) is a promising pharmacological target for designing potential anticancer drugs. MMP-2 plays a critical role in apoptosis by cleaving the DNA repair enzyme poly(ADP-ribose) polymerase (PARP). Moreover, MMP-2 expression triggers the vascular endothelial growth factor (VEGF), which has a positive influence on tumor size, invasion, and angiogenesis. There is therefore an urgent need to develop potent MMP-2 inhibitors with minimal toxicity and better pharmacokinetic properties. In this article, robust validated multi-quantitative structure-activity relationship (QSAR) modeling approaches were applied to a dataset of 222 MMP-2 inhibitors to explore the important structural and pharmacophoric requirements for higher MMP-2 inhibition. Different validated regression and classification-based QSARs, pharmacophore mapping, and 3D-QSAR techniques were performed. The resulting models were further challenged against 24 in-house MMP-2 inhibitors to judge their reliability. All of these models were validated both internally and externally, and were supported and validated by each other. The results were further justified by molecular docking analysis. The modeling techniques adopted here help not only to explore the necessary structural and pharmacophoric requirements but also to validate and refine strategies for designing potential MMP-2 inhibitors.
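    Internal validation of regression-based QSAR models is commonly summarised by the leave-one-out cross-validated q². The sketch below is generic: the descriptors and activities are synthetic, not the paper's dataset, and the q² > 0.5 threshold is only a common rule of thumb.

```python
import numpy as np

def q2_loo(X: np.ndarray, y: np.ndarray) -> float:
    """Leave-one-out cross-validated q^2 = 1 - PRESS / total sum of squares."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        Xi = np.column_stack([np.ones(n - 1), X[mask]])  # add intercept column
        coef, *_ = np.linalg.lstsq(Xi, y[mask], rcond=None)
        pred = coef[0] + X[i] @ coef[1:]                 # predict held-out compound
        press += (y[i] - pred) ** 2
    return 1.0 - press / ((y - y.mean()) ** 2).sum()

# Synthetic "compounds": 30 molecules, 3 descriptors (illustration only)
rng = np.random.default_rng(42)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.normal(size=30)
q2 = q2_loo(X, y)   # q^2 > 0.5 is a commonly quoted acceptance threshold
```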

  12. Microscopically based energy density functionals for nuclei using the density matrix expansion. II. Full optimization and validation

    Science.gov (United States)

    Navarro Pérez, R.; Schunck, N.; Dyhdalo, A.; Furnstahl, R. J.; Bogner, S. K.

    2018-05-01

    Background: Energy density functional methods provide a generic framework to compute properties of atomic nuclei starting from models of nuclear potentials and the rules of quantum mechanics. Until now, the overwhelming majority of functionals have been constructed either from empirical nuclear potentials such as the Skyrme or Gogny forces, or from systematic gradient-like expansions in the spirit of the density functional theory for atoms. Purpose: We seek to obtain a usable form of the nuclear energy density functional that is rooted in the modern theory of nuclear forces. We thus consider a functional obtained from the density matrix expansion of local nuclear potentials from chiral effective field theory. We propose a parametrization of this functional carefully calibrated and validated on selected ground-state properties that is suitable for large-scale calculations of nuclear properties. Methods: Our energy functional comprises two main components. The first component is a non-local functional of the density and corresponds to the direct part (Hartree term) of the expectation value of local chiral potentials on a Slater determinant. Contributions to the mean field and the energy of this term are computed by expanding the spatial, finite-range components of the chiral potential onto Gaussian functions. The second component is a local functional of the density and is obtained by applying the density matrix expansion to the exchange part (Fock term) of the expectation value of the local chiral potential. We apply the UNEDF2 optimization protocol to determine the coupling constants of this energy functional. Results: We obtain a set of microscopically constrained functionals for local chiral potentials from leading order up to next-to-next-to-leading order with and without three-body forces and contributions from Δ excitations. These functionals are validated on the calculation of nuclear and neutron matter, nuclear mass tables, single-particle shell structure
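    The expansion of the potential's finite-range components onto Gaussian functions described in the Methods can be illustrated with a simple least-squares fit. The radial shape and the basis ranges below are illustrative choices of ours, not the actual chiral potential:

```python
import numpy as np

# Radial grid and an illustrative finite-range radial shape f(r) = exp(-r)
r = np.linspace(0.1, 5.0, 200)
f = np.exp(-r)

# Gaussian basis g_k(r) = exp(-(r/mu_k)^2) with geometric progression of ranges
mus = np.geomspace(0.2, 4.0, 8)
B = np.exp(-((r[:, None] / mus[None, :]) ** 2))

# Least-squares coefficients c_k so that f(r) ~ sum_k c_k g_k(r)
coef, *_ = np.linalg.lstsq(B, f, rcond=None)
rel_err = np.linalg.norm(B @ coef - f) / np.linalg.norm(f)
```

    Once a finite-range component is expressed as a sum of Gaussians, its matrix elements in the mean field can be evaluated analytically, which is the computational payoff of the expansion.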

  13. Validation of the Amsterdam Dynamic Facial Expression Set--Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions.

    Directory of Open Access Journals (Sweden)

    Tanja S H Wingenbach

    Full Text Available Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author.

  14. Validation of the Amsterdam Dynamic Facial Expression Set--Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions.

    Science.gov (United States)

    Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark

    2016-01-01

    Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author.
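    The unbiased hit rate (Hu) reported above is Wagner's (1993) measure, which corrects raw accuracy for response bias using the full stimulus-by-response confusion matrix. A small sketch with invented counts:

```python
import numpy as np

def unbiased_hit_rates(conf: np.ndarray) -> np.ndarray:
    """Wagner's (1993) unbiased hit rate Hu per category.
    conf[i, j] = number of times stimulus category i received response j.
    Hu_i = conf[i, i]**2 / (row_i total * column_i total)."""
    row = conf.sum(axis=1).astype(float)   # stimuli shown per category
    col = conf.sum(axis=0).astype(float)   # responses given per category
    diag = np.diag(conf).astype(float)     # correct identifications
    return diag ** 2 / (row * col)

# Toy 3-emotion confusion matrix (rows: stimuli, columns: responses)
conf = np.array([[8, 1, 1],
                 [2, 7, 1],
                 [0, 2, 8]])
hu = unbiased_hit_rates(conf)
```

    Unlike the raw hit rate, Hu penalises a category whose label is over-used as a response, which matters when low intensity expressions are systematically mistaken for one another.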

  15. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    Science.gov (United States)

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development-The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions-angry, fearful, sad, happy, surprised, and disgusted-and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  16. Study of the validity of a job-exposure matrix for psychosocial work factors: results from the national French SUMER survey.

    Science.gov (United States)

    Niedhammer, Isabelle; Chastang, Jean-François; Levy, David; David, Simone; Degioanni, Stéphanie; Theorell, Töres

    2008-10-01

    To construct and evaluate the validity of a job-exposure matrix (JEM) for psychosocial work factors defined by Karasek's model using national representative data of the French working population. National sample of 24,486 men and women who filled in the Job Content Questionnaire (JCQ) by Karasek measuring the scores of psychological demands, decision latitude, and social support (individual scores) in 2003 (response rate 96.5%). Median values of the three scores in the total sample of men and women were used to define high demands, low latitude, and low support (individual binary exposures). Job title was defined by both occupation and economic activity that were coded using detailed national classifications (PCS and NAF/NACE). Two JEM measures were calculated from the individual scores of demands, latitude and support for each job title: JEM scores (mean of the individual score) and JEM binary exposures (JEM score dichotomized at the median). The analysis of the variance of the individual scores of demands, latitude, and support explained by occupations and economic activities, of the correlation and agreement between individual measures and JEM measures, and of the sensitivity and specificity of JEM exposures, as well as the study of the associations with self-reported health showed a low validity of JEM measures for psychological demands and social support, and a relatively higher validity for decision latitude compared with individual measures. Job-exposure matrix measure for decision latitude might be used as a complementary exposure assessment. Further research is needed to evaluate the validity of JEM for psychosocial work factors.
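    The two JEM measures described (the mean individual score per job title, and that score dichotomised at the median) can be sketched in a few lines of pandas. The toy data and column names below are ours for illustration, not the SUMER survey's:

```python
import pandas as pd

# Toy individual-level data: one respondent per row (hypothetical columns)
df = pd.DataFrame({
    "job_title": ["nurse", "nurse", "driver", "driver", "clerk", "clerk"],
    "decision_latitude": [70, 74, 58, 60, 66, 80],
})

# Individual binary exposure: below the overall median = low latitude
overall_median = df["decision_latitude"].median()
df["low_latitude_ind"] = df["decision_latitude"] < overall_median

# JEM score: mean individual score within each job title
jem = (df.groupby("job_title")["decision_latitude"]
         .mean().rename("jem_score").reset_index())

# JEM binary exposure: JEM score dichotomised at the median JEM score
jem["low_latitude_jem"] = jem["jem_score"] < jem["jem_score"].median()
```

    Comparing `low_latitude_ind` with `low_latitude_jem` for each respondent is essentially the sensitivity/specificity analysis the study performs: the JEM assigns every holder of a job title the same exposure, so within-title variability is lost.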

  17. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set

    Directory of Open Access Journals (Sweden)

    Jinshui Zhang

    2017-04-01

    Full Text Available This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by outlier pixels located neighboring to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, tradeoff coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.
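    The core idea, tuning a data description so its boundary tightens against outliers that neighbour the target class in feature space, can be illustrated without the full SVDD machinery. The sketch below substitutes a crude centroid-plus-radius "hypersphere" for SVDD and uses a validation set of target and neighbouring-outlier points to choose the radius (a stand-in for tuning C and s); the 2-D data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
target_train = rng.normal(0.0, 0.3, size=(200, 2))   # target class (e.g. wheat)
val_target = rng.normal(0.0, 0.3, size=(50, 2))      # validation: target pixels
val_outlier = rng.normal(2.5, 0.3, size=(50, 2))     # validation: neighbouring outliers

center = target_train.mean(axis=0)

def accept(points: np.ndarray, radius: float) -> np.ndarray:
    """Inside-the-hypersphere test for each point."""
    return np.linalg.norm(points - center, axis=1) <= radius

# Grid search the radius using the validation set: maximise balanced
# accuracy on target acceptance vs outlier rejection.
best_r, best_score = None, -1.0
for radius in np.linspace(0.1, 3.0, 30):
    score = (accept(val_target, radius).mean()
             + (~accept(val_outlier, radius)).mean()) / 2
    if score > best_score:
        best_r, best_score = radius, score
```

    Because the outliers sit close to the target class, the selected radius is the tightest one that still accepts the target validation points, which mirrors the "tightened hypersphere" behaviour described in the abstract.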

  18. POLLA-NESC, Resonance Parameter R-Matrix to S-Matrix Conversion by Reich-Moore Method

    International Nuclear Information System (INIS)

    Saussure, G. de; Perez, R.B.

    1975-01-01

    1 - Description of problem or function: The program transforms a set of r-matrix nuclear resonance parameters into a set of equivalent s-matrix (or Kapur-Peierls) resonance parameters. 2 - Method of solution: The program utilizes the multilevel formalism of Reich and Moore and avoids diagonalization of the level matrix. The parameters are obtained by a direct partial fraction expansion of the Reich-Moore expression of the collision matrix. This approach appears simpler and faster when the number of fission channels is known and small. The method is particularly useful when a large number of levels must be considered because it does not require diagonalization of a large level matrix. 3 - Restrictions on the complexity of the problem: By DIMENSION statements, the program is limited to maxima of 100 levels and 5 channels

  19. The Child Affective Facial Expression (CAFE) Set: Validity and Reliability from Untrained Adults

    Directory of Open Access Journals (Sweden)

    Vanessa eLoBue

    2015-01-01

    Full Text Available Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for 6 emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  20. Validation of the Amsterdam Dynamic Facial Expression Set – Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions

    Science.gov (United States)

    Wingenbach, Tanja S. H.

    2016-01-01

    Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author

  1. AOAC Official MethodSM Matrix Extension Validation Study of Assurance GDSTM for the Detection of Salmonella in Selected Spices.

    Science.gov (United States)

    Feldsine, Philip; Kaur, Mandeep; Shah, Khyati; Immerman, Amy; Jucker, Markus; Lienau, Andrew

    2015-01-01

    Assurance GDSTM for Salmonella Tq has been validated according to the AOAC INTERNATIONAL Methods Committee Guidelines for Validation of Microbiological Methods for Food and Environmental Surfaces for the detection of selected foods and environmental surfaces (Official Method of AnalysisSM 2009.03, Performance Tested MethodSM No. 050602). The method also completed AFNOR validation (following the ISO 16140 standard) compared to the reference method EN ISO 6579. For AFNOR, GDS was given a scope covering all human food, animal feed stuff, and environmental surfaces (Certificate No. TRA02/12-01/09). Results showed that Assurance GDS for Salmonella (GDS) has high sensitivity and is equivalent to the reference culture methods for the detection of motile and non-motile Salmonella. As part of the aforementioned validations, inclusivity and exclusivity studies, stability, and ruggedness studies were also conducted. Assurance GDS has 100% inclusivity and exclusivity among the 100 Salmonella serovars and 35 non-Salmonella organisms analyzed. To add to the scope of the Assurance GDS for Salmonella method, a matrix extension study was conducted, following the AOAC guidelines, to validate the application of the method for selected spices, specifically curry powder, cumin powder, and chili powder, for the detection of Salmonella.

  2. Fuzzy vulnerability matrix

    International Nuclear Information System (INIS)

    Baron, Jorge H.; Rivera, S.S.

    2000-01-01

    The so-called vulnerability matrix is used in the evaluation part of the probabilistic safety assessment for a nuclear power plant, during the containment event tree calculations. This matrix is established from what is known as Numerical Categories for Engineering Judgement. It is usually built with numerical values obtained through traditional arithmetic and classical set theory. Representing this matrix with fuzzy numbers is much more adequate, because the Numerical Categories for Engineering Judgement are better represented by linguistic variables, such as 'highly probable', 'probable', 'impossible', etc. In the present paper a methodology to obtain a Fuzzy Vulnerability Matrix is presented, starting from the recommendations on the Numerical Categories for Engineering Judgement. (author)
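    To make the idea concrete, linguistic categories can be encoded as triangular fuzzy numbers and combined with fuzzy arithmetic. The category values, the vertex-wise product, and the centroid defuzzification below are illustrative choices of ours, not the paper's calibration:

```python
# A triangular fuzzy number (a, b, c): support [a, c], membership peaks at b.
# Illustrative mapping of linguistic judgement categories to fuzzy numbers.
CATEGORIES = {
    "impossible":      (0.0, 0.0, 0.1),
    "improbable":      (0.0, 0.1, 0.3),
    "probable":        (0.3, 0.5, 0.7),
    "highly probable": (0.7, 0.9, 1.0),
}

def fuzzy_mul(x: tuple, y: tuple) -> tuple:
    """Approximate product of two triangular fuzzy numbers (vertex-wise;
    adequate for positive supports such as these)."""
    return tuple(x[i] * y[i] for i in range(3))

def defuzzify(x: tuple) -> float:
    """Centroid defuzzification of a triangular number: (a + b + c) / 3."""
    return sum(x) / 3.0

# One cell of a fuzzy vulnerability matrix, combining two judgements
cell = fuzzy_mul(CATEGORIES["probable"], CATEGORIES["highly probable"])
```

    The fuzzy cell carries the spread of the underlying judgements through the calculation; defuzzifying only at the end avoids committing prematurely to a single crisp value.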

  3. Affordances in the home environment for motor development: Validity and reliability for the use in daycare setting.

    Science.gov (United States)

    Müller, Alessandra Bombarda; Valentini, Nadia Cristina; Bandeira, Paulo Felipe Ribeiro

    2017-05-01

    The range of stimuli provided by physical space, toys, and care practices contributes to the motor, cognitive and social development of children. However, assessing the quality of child education environments is a challenge, and can be considered a health promotion initiative. This study investigated the criterion, content, and construct validity and the reliability of the Affordances in the Home Environment for Motor Development - Infant Scale (AHEMD-IS), version 3-18 months, for use in daycare settings. Content validation was conducted with the participation of seven motor development and health care experts, and face validity was assessed by 20 specialists in health and education. The results indicate the suitability of the adapted AHEMD-IS, evidencing its validity for the daycare setting as a potential tool to assess the opportunities that the collective context offers to child development. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. A Normalized Transfer Matrix Method for the Free Vibration of Stepped Beams: Comparison with Experimental and FE(3D) Methods

    Directory of Open Access Journals (Sweden)

    Tamer Ahmed El-Sayed

    2017-01-01

    Full Text Available The exact solution for a multistepped Timoshenko beam is derived using a set of fundamental solutions. This set of solutions is derived to normalize the solution at the origin of the coordinates. The start, end, and intermediate boundary conditions involve concentrated masses and linear and rotational elastic supports. The beam start, end, and intermediate equations are assembled using the present normalized transfer matrix (NTM). The advantage of this method is that it is quicker than the standard method, because the size of the complete system coefficient matrix is 4 × 4. In addition, during the assembly of this matrix, no inverse matrix steps are required. The validity of this method is tested by comparing the results of the current method with the literature. Then the validity of the exact stepped analysis is checked using experimental and FE(3D) methods. The experimental results for stepped beams with a single step and with two steps, for sixteen different test samples, are in excellent agreement with those of the three-dimensional finite element method FE(3D). The comparison between the NTM method and the finite element method results shows that the modal percentage deviation increases when a beam step location coincides with a peak point in the mode shape. Meanwhile, the deviation decreases when a beam step location coincides with a straight portion in the mode shape.

  5. POLYMAT-C: a comprehensive SPSS program for computing the polychoric correlation matrix.

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2015-09-01

    We provide a free noncommercial SPSS program that implements procedures for (a) obtaining the polychoric correlation matrix between a set of ordered categorical measures, so that it can be used as input for the SPSS factor analysis (FA) program; (b) testing the null hypothesis of zero population correlation for each element of the matrix by using appropriate simulation procedures; (c) obtaining valid and accurate confidence intervals via bootstrap resampling for those correlations found to be significant; and (d) performing, if necessary, a smoothing procedure that makes the matrix amenable to any FA estimation procedure. For the main purpose (a), the program uses a robust unified procedure that allows four different types of estimates to be obtained at the user's choice. Overall, we hope the program will be a very useful tool for the applied researcher, not only because it provides an appropriate input matrix for FA, but also because it allows the researcher to carefully check the appropriateness of the matrix for this purpose. The SPSS syntax, a short manual, and data files related to this article are available as Supplemental materials that are available for download with this article.
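    The smoothing step (d) makes an indefinite pseudo-correlation matrix positive semi-definite so that FA estimation can proceed. One standard approach is eigenvalue clipping followed by rescaling to a unit diagonal; this is a generic sketch, not necessarily the SPSS program's exact procedure:

```python
import numpy as np

def smooth_to_psd(R: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Clip negative eigenvalues and rescale back to unit diagonal so the
    smoothed matrix is a valid (positive semi-definite) correlation matrix."""
    vals, vecs = np.linalg.eigh((R + R.T) / 2)   # symmetrise first
    vals = np.clip(vals, eps, None)              # remove negative eigenvalues
    S = vecs @ np.diag(vals) @ vecs.T
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)                    # restore ones on the diagonal

# An indefinite "pseudo-correlation" matrix: pairwise polychoric estimates
# can produce such matrices because each entry is estimated separately.
R = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.9],
              [0.0, 0.9, 1.0]])
S = smooth_to_psd(R)
```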

  6. Analysis and classification of data sets for calibration and validation of agro-ecosystem models

    DEFF Research Database (Denmark)

    Kersebaum, K C; Boote, K J; Jorgenson, J S

    2015-01-01

    Experimental field data are used at different levels of complexity to calibrate, validate and improve agro-ecosystem models to enhance their reliability for regional impact assessment. A methodological framework and software are presented to evaluate and classify data sets into four classes regar...

  7. Matrix method for acoustic levitation simulation.

    Science.gov (United States)

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
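    The Rayleigh integral at the heart of the matrix method can be sketched by discretising the transducer face into point sources and summing their complex contributions at a field point. This minimal version ignores the transducer-reflector multiple reflections that the full matrix method accounts for, and the geometry and units are illustrative only:

```python
import numpy as np

c, f = 343.0, 37.9e3                  # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c                 # wavenumber

# Source grid: points on a 1 cm radius circular piston in the z = 0 plane
xs = np.linspace(-0.01, 0.01, 21)
X, Y = np.meshgrid(xs, xs)
mask = X**2 + Y**2 <= 0.01**2
src = np.column_stack([X[mask], Y[mask], np.zeros(mask.sum())])

def pressure(field_pt: np.ndarray) -> complex:
    """Complex pressure (arbitrary units): sum of point-source terms
    exp(i k r) / r over the discretised piston (Rayleigh integral)."""
    r = np.linalg.norm(src - field_pt, axis=1)
    return np.sum(np.exp(1j * k * r) / r)

# The piston beams along its axis: on-axis pressure exceeds far off-axis
on_axis = abs(pressure(np.array([0.0, 0.0, 0.05])))
off_axis = abs(pressure(np.array([0.08, 0.0, 0.05])))
```

    The matrix method extends this by also propagating the field from the reflector back to the transducer, iterating the reflections, before evaluating the radiation-force potential.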

  8. Validation of the Essentials of Magnetism II in Chinese critical care settings.

    Science.gov (United States)

    Bai, Jinbing; Hsu, Lily; Zhang, Qing

    2015-05-01

    To translate and evaluate the psychometric properties of the Essentials of Magnetism II tool (EOM II) for Chinese nurses in critical care settings. The EOM II is a reliable and valid scale for measuring the healthy work environment (HWE) for nurses in Western countries; however, it has not been validated among Chinese nurses. The translation of the EOM II followed internationally recognized guidelines. The Chinese version of the Essentials of Magnetism II tool (C-EOM II) was reviewed by an expert panel for culturally semantic equivalence and content validity. Then, 706 nurses from 28 intensive care units (ICUs) affiliated with 14 tertiary hospitals participated in this study. The reliability of the C-EOM II was assessed using Cronbach's alpha coefficient; the content validity of the scale was assessed using the content validity index (CVI); and the construct validity was assessed using confirmatory factor analysis (CFA). The C-EOM II showed excellent content validity with a CVI of 0·92. All the subscales of the C-EOM II were significantly correlated with overall nurse job satisfaction and nurse-assessed quality of care. The CFA showed that the C-EOM II was composed of 45 items with nine factors, accounting for 46·51% of the total variance. Cronbach's alpha coefficients for these factors ranged from 0·56 to 0·89. The C-EOM II is a promising scale to assess the HWE for Chinese ICU nurses. Nursing administrators and health care policy-makers can use the C-EOM II to evaluate the clinical work environment so that a healthier work environment can be created and sustained for staff nurses. © 2013 British Association of Critical Care Nurses.
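    The per-factor Cronbach's alpha coefficients reported above follow the standard internal-consistency formula; a quick sketch with invented Likert scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy subscale: 4 items answered by 6 respondents on a 1-5 Likert scale
scores = np.array([[4, 4, 5, 4],
                   [2, 3, 2, 2],
                   [5, 4, 5, 5],
                   [3, 3, 3, 4],
                   [1, 2, 1, 2],
                   [4, 5, 4, 4]])
alpha = cronbach_alpha(scores)
```

    Values in the 0·56 to 0·89 range reported for the C-EOM II factors reflect this ratio of shared to item-specific variance; shorter subscales tend toward the lower end.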

  9. UpSet: Visualization of Intersecting Sets

    Science.gov (United States)

    Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter

    2016-01-01

    Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
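    The disjoint intersection cells that each row of an UpSet matrix encodes (the elements belonging to exactly one combination of sets) can be computed directly. A small sketch with toy sets of ours:

```python
from itertools import chain

# Toy named sets
sets = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5},
    "C": {4, 5, 6},
}

def exclusive_intersections(sets: dict) -> dict:
    """Map each combination of set memberships to the elements belonging to
    exactly those sets -- the disjoint cells an UpSet matrix row encodes."""
    universe = set(chain.from_iterable(sets.values()))
    cells = {}
    for el in universe:
        key = frozenset(name for name, s in sets.items() if el in s)
        cells.setdefault(key, set()).add(el)
    return cells

cells = exclusive_intersections(sets)
sizes = {tuple(sorted(k)): len(v) for k, v in cells.items()}
```

    Sorting `sizes` by value reproduces the bar chart next to the UpSet matrix; aggregates (e.g. "all cells containing A") are unions over the matching keys.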

  10. Basic Laparoscopic Skills Assessment Study: Validation and Standard Setting among Canadian Urology Trainees.

    Science.gov (United States)

    Lee, Jason Y; Andonian, Sero; Pace, Kenneth T; Grober, Ethan

    2017-06-01

    As urology training programs move to a competency based medical education model, iterative assessments with objective standards will be required. To develop a valid set of technical skills standards we initiated a national skills assessment study focusing initially on laparoscopic skills. Between February 2014 and March 2016 the basic laparoscopic skill of Canadian urology trainees and attending urologists was assessed using 4 standardized tasks from the AUA (American Urological Association) BLUS (Basic Laparoscopic Urological Surgery) curriculum, including peg transfer, pattern cutting, suturing and knot tying, and vascular clip applying. All performances were video recorded and assessed using 3 methods, including time and error based scoring, expert global rating scores and C-SATS (Crowd-Sourced Assessments of Technical Skill Global Rating Scale), a novel, crowd sourced assessment platform. Different methods of standard setting were used to develop pass-fail cut points. Six attending urologists and 99 trainees completed testing. Reported laparoscopic experience and training level correlated with performance (p standard setting methods to define pass-fail cut points for all 4 AUA BLUS tasks. The 4 AUA BLUS tasks demonstrated good construct validity evidence for use in assessing basic laparoscopic skill. Performance scores using the novel C-SATS platform correlated well with traditional time-consuming methods of assessment. Various standard setting methods were used to develop pass-fail cut points for educators to use when making formative and summative assessments of basic laparoscopic skill. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  11. Non-negative Matrix Factorization for Binary Data

    DEFF Research Database (Denmark)

    Larsen, Jacob Søgaard; Clemmensen, Line Katrine Harder

    We propose the Logistic Non-negative Matrix Factorization for decomposition of binary data. Binary data are frequently generated in e.g. text analysis, sensory data, market basket data etc. A common method for analysing non-negative data is the Non-negative Matrix Factorization, though this is in theory not appropriate for binary data, and thus we propose a novel Non-negative Matrix Factorization based on the logistic link function. Furthermore we generalize the method to handle missing data. The formulation of the method is compared to a previously proposed method (Tome et al., 2015). We compare the performance of the Logistic Non-negative Matrix Factorization to Least Squares Non-negative Matrix Factorization and Kullback-Leibler (KL) Non-negative Matrix Factorization on sets of binary data: a synthetic dataset, a set of student comments on their professors collected in a binary term-document matrix...
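    The logistic-link idea can be illustrated with a short gradient-ascent sketch. This is a generic illustration, not the authors' algorithm; in particular, the global offset term `b` is an assumption of this sketch (without it, non-negative factors cannot model probabilities below 0.5):

```python
import numpy as np

def logistic_nmf(X, rank, iters=800, lr=0.01, seed=0):
    """Sketch of a logistic-link NMF for a binary matrix X:
    X ~ Bernoulli(sigmoid(W @ H + b)), with W, H >= 0 and a global offset b
    (the offset is an assumption of this sketch, not necessarily part of the
    cited model). Gradient ascent on the Bernoulli log-likelihood, projecting
    W and H back onto the non-negative orthant after each step."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    b = 0.0
    for _ in range(iters):
        P = 1.0 / (1.0 + np.exp(-(W @ H + b)))  # predicted probabilities
        G = X - P                               # gradient w.r.t. the logits
        W = np.maximum(W + lr * (G @ H.T), 0.0)
        H = np.maximum(H + lr * (W.T @ G), 0.0)
        b += lr * G.mean()
    return W, H, b
```

    Thresholding `sigmoid(W @ H + b)` at 0.5 reconstructs the binary matrix; the paper's missing-data extension would amount to restricting `G` to the observed entries.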

  12. Implementation and automated validation of the minimal Z' model in FeynRules

    International Nuclear Information System (INIS)

    Basso, L.; Christensen, N.D.; Duhr, C.; Fuks, B.; Speckner, C.

    2012-01-01

    We describe the implementation of a well-known class of U(1) gauge models, the 'minimal' Z' models, in FeynRules. We also describe a new automated validation tool for FeynRules models which is controlled by a web interface and allows the user to run a complete set of 2 → 2 processes on different matrix element generators, in different gauges, and to compare the results among them all. Where independent implementations exist, comparison with them is also possible. This tool has been used to validate our implementation of the 'minimal' Z' models. (authors)

  13. Construct Validity and Reliability of Structured Assessment of endoVascular Expertise in a Simulated Setting

    DEFF Research Database (Denmark)

    Bech, B; Lönn, L; Falkenberg, M

    2011-01-01

    Objectives: To study the construct validity and reliability of a novel endovascular global rating scale, Structured Assessment of endoVascular Expertise (SAVE). Design: A clinical, experimental study. Materials: Twenty physicians with endovascular experience ranging from complete novices to highly experienced operators. Validity was analysed by correlating experience with performance results. Reliability was analysed according to generalisability theory. Results: The mean score on the 29 items of the SAVE scale correlated well with clinical experience (R = 0.84), while a second performance measure correlated inversely with clinical experience (R = -0.53). The validity and reliability of assessment with the SAVE scale were high when applied to performances in a simulation setting with advanced realism. No ceiling effect...

  14. GSMA: Gene Set Matrix Analysis, An Automated Method for Rapid Hypothesis Testing of Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Chris Cheadle

    2007-01-01

    Background: Microarray technology has become highly valuable for identifying complex global changes in gene expression patterns. The assignment of functional information to these complex patterns remains a challenging task in effectively interpreting data and correlating results from across experiments, projects and laboratories. Methods which allow the rapid and robust evaluation of multiple functional hypotheses increase the power of individual researchers to data mine gene expression data more efficiently. Results: We have developed gene set matrix analysis (GSMA) as a useful method for the rapid testing of group-wise up- or downregulation of gene expression simultaneously for multiple lists of genes (gene sets) against entire distributions of gene expression changes (datasets) for single or multiple experiments. The utility of GSMA lies in its flexibility to rapidly poll gene sets related by known biological function, or as designated solely by the end-user, against large numbers of datasets simultaneously. Conclusions: GSMA provides a simple and straightforward method for hypothesis testing in which genes are tested by groups across multiple datasets for patterns of expression enrichment.
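    The core operation described here, polling a gene set against a whole distribution of expression changes, can be sketched with a simple parametric z-score (a PAGE-like sketch with invented gene names; the actual GSMA implementation may differ):

```python
import numpy as np

def gene_set_zscore(changes, gene_set):
    """Z-score for whether the genes in `gene_set` are collectively up- or
    down-regulated relative to the whole distribution of expression changes.
    A PAGE-like sketch, not the exact GSMA implementation.
    `changes`: dict mapping gene name -> log2 fold change."""
    values = np.array(list(changes.values()))
    mu, sigma = values.mean(), values.std(ddof=1)
    set_vals = np.array([changes[g] for g in gene_set if g in changes])
    if len(set_vals) == 0:
        raise ValueError("no genes from the set found in the dataset")
    # Standard error of the mean of len(set_vals) draws from the population.
    return (set_vals.mean() - mu) / (sigma / np.sqrt(len(set_vals)))

# Hypothetical fold-change data: most genes unchanged, one set upregulated.
changes = {f"gene{i}": 0.0 for i in range(100)}
changes.update({f"tgf{i}": 2.5 for i in range(8)})
print(round(gene_set_zscore(changes, [f"tgf{i}" for i in range(8)]), 2))
```

    Repeating this for many gene sets and many datasets yields the gene-set-by-dataset matrix of scores that gives the method its name.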

  15. Multiple graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-10-01

    Non-negative matrix factorization (NMF) has been widely used as a data representation method based on components. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) has been proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. Factorization matrices and linear combination coefficients of graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, thus resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.

  16. Validation of the Care-Related Quality of Life Instrument in different study settings: findings from The Older Persons and Informal Caregivers Survey Minimum DataSet (TOPICS-MDS).

    Science.gov (United States)

    Lutomski, J E; van Exel, N J A; Kempen, G I J M; Moll van Charante, E P; den Elzen, W P J; Jansen, A P D; Krabbe, P F M; Steunenberg, B; Steyerberg, E W; Olde Rikkert, M G M; Melis, R J F

    2015-05-01

    Validity is a contextual aspect of a scale which may differ across sample populations and study protocols. The objective of our study was to validate the Care-Related Quality of Life Instrument (CarerQol) across two different study design features, sampling framework (general population vs. different care settings) and survey mode (interview vs. written questionnaire). Data were extracted from The Older Persons and Informal Caregivers Minimum DataSet (TOPICS-MDS, www.topics-mds.eu ), a pooled public-access data set with information on >3,000 informal caregivers throughout the Netherlands. Meta-correlations and linear mixed models between the CarerQol's seven dimensions (CarerQol-7D) and caregiver's level of happiness (CarerQol-VAS) and self-rated burden (SRB) were performed. The CarerQol-7D dimensions were correlated to the CarerQol-VAS and SRB in the pooled data set and the subgroups. The strength of correlations between CarerQol-7D dimensions and SRB was weaker among caregivers who were interviewed versus those who completed a written questionnaire. The directionality of associations between the CarerQol-VAS, SRB and the CarerQol-7D dimensions in the multivariate model supported the construct validity of the CarerQol in the pooled population. Significant interaction terms were observed in several dimensions of the CarerQol-7D across sampling frame and survey mode, suggesting meaningful differences in reporting levels. Although good scientific practice emphasises the importance of re-evaluating instrument properties in individual research studies, our findings support the validity and applicability of the CarerQol instrument in a variety of settings. Due to minor differential reporting, pooling CarerQol data collected using mixed administration modes should be interpreted with caution; for TOPICS-MDS, meta-analytic techniques may be warranted.

  17. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Giles M. Foody

    2017-08-01

    Validation data are often used to evaluate the performance of a trained neural network and in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure, such as overall classification accuracy. The latter is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes do vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets respectively, both p < 0.05). The accuracy of the classifications that used a stratified sample in validation was smaller, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
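    The paper's central point, that a stratified (class-balanced) validation sample can misestimate the accuracy achieved on data with unequal class abundances, is easy to reproduce with a small simulation (all numbers below are hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: class 0 abundant (90%), class 1 rare (10%).
n = 100_000
y = (rng.random(n) < 0.10).astype(int)

# A hypothetical classifier: 95% accurate on the abundant class,
# only 60% accurate on the rare class.
correct = np.where(y == 0, rng.random(n) < 0.95, rng.random(n) < 0.60)

true_acc = correct.mean()  # population accuracy, about 0.9 * 0.95 + 0.1 * 0.60

# Random validation sample: reflects class abundance, estimates true_acc well.
idx_rand = rng.choice(n, size=1000, replace=False)
acc_rand = correct[idx_rand].mean()

# Stratified (balanced) sample: 500 cases per class, over-weights the rare class.
idx0 = rng.choice(np.flatnonzero(y == 0), size=500, replace=False)
idx1 = rng.choice(np.flatnonzero(y == 1), size=500, replace=False)
acc_strat = correct[np.concatenate([idx0, idx1])].mean()

print(round(true_acc, 3), round(acc_rand, 3), round(acc_strat, 3))
```

    The stratified estimate sits near the unweighted mean of the per-class accuracies rather than the abundance-weighted population accuracy, which is the bias the abstract describes.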

  18. The Outcome and Assessment Information Set (OASIS): A Review of Validity and Reliability

    Science.gov (United States)

    O’CONNOR, MELISSA; DAVITT, JOAN K.

    2015-01-01

    The Outcome and Assessment Information Set (OASIS) is the patient-specific, standardized assessment used in Medicare home health care to plan care, determine reimbursement, and measure quality. Since its inception in 1999, there has been debate over the reliability and validity of the OASIS as a research tool and outcome measure. A systematic literature review of English-language articles identified 12 studies published in the last 10 years examining the validity and reliability of the OASIS. Empirical findings indicate the validity and reliability of the OASIS range from low to moderate but vary depending on the item studied. Limitations in the existing research include: nonrepresentative samples; inconsistencies in methods used, items tested, measurement, and statistical procedures; and the changes to the OASIS itself over time. The inconsistencies suggest that these results are tentative at best; additional research is needed to confirm the value of the OASIS for measuring patient outcomes, research, and quality improvement. PMID:23216513

  19. Revising the retrieval technique of a long-term stratospheric HNO{sub 3} data set. From a constrained matrix inversion to the optimal estimation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Fiorucci, I.; Muscari, G. [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy); De Zafra, R.L. [State Univ. of New York, Stony Brook, NY (United States). Dept. of Physics and Astronomy

    2011-07-01

    The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O{sub 3}, HNO{sub 3}, CO and N{sub 2}O at polar and mid-latitudes. Its HNO{sub 3} data set shed light on HNO{sub 3} annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5 N, 68.8 W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO{sub 3} data sets from 1993 South Pole observations to date, in order to produce HNO{sub 3} version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100{+-}20% from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1{sigma} uncertainty on HNO{sub 3} v2 mixing ratio vertical profiles depends on altitude and is estimated at {proportional_to}15% or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO{sub 3} vertical profiles

  20. Antibody Selection for Cancer Target Validation of FSH-Receptor in Immunohistochemical Settings

    Directory of Open Access Journals (Sweden)

    Nina Moeker

    2017-10-01

    Background: The follicle-stimulating hormone (FSH) receptor (FSHR) has been reported to be an attractive target for antibody therapy in human cancer. However, divergent immunohistochemical (IHC) findings have been reported for FSHR expression in tumor tissues, which could be due to the specificity of the antibodies used. Methods: Three frequently used antibodies (sc-7798, sc-13935, and FSHR323) were validated for their suitability in an immunohistochemical study of FSHR expression in different tissues. As quality control, two potential therapeutic anti-hFSHR Ylanthia® antibodies (Y010913, Y010916) were used. The specificity criteria for selection of antibodies were binding to native hFSHR of different sources, and no binding to non-related proteins. The ability of the antibodies to stain paraffin-embedded Flp-In Chinese hamster ovary (CHO)/FSHR cells was tested after application of different epitope retrieval methods. Results: Of the five tested anti-hFSHR antibodies, only Y010913, Y010916, and FSHR323 showed specific binding to native, cell-presented hFSHR. Since Ylanthia® antibodies were selected to specifically recognize native FSHR, as required for a potential therapeutic antibody candidate, FSHR323 was the only antibody to detect the receptor in IHC/histochemical settings on transfected cells, and at markedly lower, physiological concentrations (e.g., in Sertoli cells of human testes). The pattern of FSHR323 staining observed for ovarian, prostatic, and renal adenocarcinomas indicated that FSHR was expressed mainly in the peripheral tumor blood vessels. Conclusion: Of all published IHC antibodies tested, only antibody FSHR323 proved suitable for target validation of hFSHR in an IHC setting for cancer. Our studies could not confirm the previously reported FSHR overexpression in ovarian and prostate cancer cells. Instead, specific overexpression in peripheral tumor blood vessels could be confirmed after thorough validation of the antibodies used.

  1. Matrix thermalization

    International Nuclear Information System (INIS)

    Craps, Ben; Evnin, Oleg; Nguyen, Kévin

    2017-01-01

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  4. The fitness for purpose of analytical methods applied to fluorimetric uranium determination in water matrix

    International Nuclear Information System (INIS)

    Grinman, Ana; Giustina, Daniel; Mondini, Julia; Diodat, Jorge

    2008-01-01

    This paper describes the steps which should be followed by a laboratory in order to validate the fluorimetric method for natural uranium in a water matrix. Validation of an analytical method is a necessary requirement prior to accreditation of a non-normalized method under the standard ISO/IEC 17025. Different analytical techniques differ in the set of variables to be validated. Depending on the chemical process, measurement technique, matrix type, data fitting and measurement efficiency, a laboratory must set up experiments to verify the reliability of data, through the application of several statistical tests and by participating in Quality Programs (QP) organized by reference laboratories such as the National Institute of Standards and Technology (NIST), the National Physical Laboratory (NPL), or the Environmental Measurements Laboratory (EML). However, participation in QP involves not only international reference laboratories but also national ones able to prove proficiency to the Argentinean Accreditation Board. The parameters that the ARN laboratory had to validate in the fluorimetric method, in accordance with the Eurachem guide and IUPAC definitions, are: Detection Limit, Quantification Limit, Precision, Intra-laboratory Precision, Reproducibility Limit, Repeatability Limit, Linear Range and Robustness. Assays for the above parameters were designed on the basis of statistical requirements, and a detailed data treatment is presented together with the respective tests in order to show the parameters validated. As a final conclusion, uranium determination by fluorimetry is a reliable method for direct measurement to meet radioprotection requirements in a water matrix, within its linear range, which is fixed every time a calibration is carried out at the beginning of the analysis. The detection limit (depending on blank standard deviation and slope) varies between 3 ug U and 5 ug U, which yields minimum detectable concentrations (MDC) of
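    The dependence of the detection limit on the blank standard deviation and the calibration slope noted above follows the usual IUPAC-style relations. A sketch with invented fluorimeter readings (the laboratory's exact protocol and multipliers may differ):

```python
import statistics

def detection_limits(blank_readings, slope):
    """IUPAC-style detection and quantification limits from replicate blank
    readings and the calibration slope (signal per ug U).
    Illustrative sketch; the laboratory's exact protocol may differ."""
    s_blank = statistics.stdev(blank_readings)
    lod = 3 * s_blank / slope    # detection limit, in ug U
    loq = 10 * s_blank / slope   # quantification limit, in ug U
    return lod, loq

# Hypothetical replicate blank readings (arbitrary units) and slope.
blanks = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
lod, loq = detection_limits(blanks, slope=0.15)
```

    Dividing the limit expressed in mass of uranium by the sample volume then gives the minimum detectable concentration the abstract refers to.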

  5. A multiple criteria decision making for ranking alternatives using preference relation matrix based on intuitionistic fuzzy sets

    Directory of Open Access Journals (Sweden)

    Mehdi Bahramloo

    2013-10-01

    Ranking alternatives has been widely investigated, and there are many methods and techniques for making a decision based on multiple criteria. One of the primary concerns with ranking methodologies such as the analytic hierarchy process (AHP) is that decision makers cannot express their preferences in crisp form. Therefore, linguistic terms are needed to elicit the relative weights for comparing alternatives. In this paper, we discuss ranking different alternatives based on the implementation of a preference relation matrix built on intuitionistic fuzzy sets.
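    Once a preference relation matrix has been elicited, a ranking can be read off it. The sketch below uses simple net-flow scores over an ordinary fuzzy preference matrix (a generic heuristic with invented data, not the paper's intuitionistic-fuzzy procedure):

```python
import numpy as np

def rank_from_preference(P, names):
    """Rank alternatives from a fuzzy preference relation matrix P, where
    P[i, j] in [0, 1] is the degree to which alternative i is preferred
    over j. Uses net-flow scores (row sum minus column sum); a generic
    heuristic, not the intuitionistic-fuzzy procedure of the paper."""
    P = np.asarray(P, dtype=float)
    scores = P.sum(axis=1) - P.sum(axis=0)
    order = np.argsort(-scores)
    return [names[i] for i in order], scores

names = ["A", "B", "C"]           # hypothetical alternatives
P = np.array([[0.5, 0.7, 0.8],    # degrees of preference of A over A, B, C
              [0.3, 0.5, 0.6],    # ... of B
              [0.2, 0.4, 0.5]])   # ... of C
ranking, scores = rank_from_preference(P, names)
```

    The intuitionistic-fuzzy version would carry a membership and a non-membership degree for each pairwise comparison instead of the single degree used here.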

  6. Explicit Covariance Matrix for Particle Measurement Precision

    CERN Document Server

    Karimäki, Veikko

    1997-01-01

    We derive explicit and precise formulae for the 3 by 3 error matrix of the particle transverse momentum, direction and impact parameter. The error matrix elements are expressed as functions of up to fourth order statistical moments of the measured coordinates. The formulae are valid for any curvature and track length in the case of negligible multiple scattering.

  7. The impact of crowd noise on officiating in Muay Thai: achieving external validity in an experimental setting

    Directory of Open Access Journals (Sweden)

    Tony D Myers

    2012-09-01

    Numerous factors have been proposed to explain the home advantage in sport. Several authors have suggested that a partisan home crowd enhances home advantage and that this is at least in part a consequence of their influence on officiating. However, while experimental studies examining this phenomenon have high levels of internal validity (since only the 'crowd noise' intervention is allowed to vary), they suffer from a lack of external validity, with decision-making in a laboratory setting typically bearing little resemblance to decision-making in live sports settings. Conversely, observational and quasi-experimental studies with high levels of external validity suffer from low levels of internal validity as countless factors besides crowd noise vary. The present study provides a unique opportunity to address these criticisms, by conducting a controlled experiment on the impact of crowd noise on officiating in a live tournament setting. Seventeen qualified judges officiated on thirty Thai boxing bouts in a live international tournament setting featuring 'home' and 'away' boxers. In each bout, judges were randomised into a 'noise' (live sound) or 'no crowd noise' (noise-cancelling headphones and white noise) condition, resulting in 59 judgements in the 'no crowd noise' and 61 in the 'crowd noise' condition. The results provide the first experimental evidence of the impact of live crowd noise on officials in sport. A cross-classified statistical model indicated that crowd noise had a statistically significant impact, equating to just over half a point per bout (in the context of five round bouts with the 'ten point must' scoring system shared with professional boxing). The practical significance of the findings, their implications for officiating and for the future conduct of crowd noise studies are discussed.

  8. The impact of crowd noise on officiating in muay thai: achieving external validity in an experimental setting.

    Science.gov (United States)

    Myers, Tony; Balmer, Nigel

    2012-01-01

    Numerous factors have been proposed to explain the home advantage in sport. Several authors have suggested that a partisan home crowd enhances home advantage and that this is at least in part a consequence of their influence on officiating. However, while experimental studies examining this phenomenon have high levels of internal validity (since only the "crowd noise" intervention is allowed to vary), they suffer from a lack of external validity, with decision-making in a laboratory setting typically bearing little resemblance to decision-making in live sports settings. Conversely, observational and quasi-experimental studies with high levels of external validity suffer from low levels of internal validity as countless factors besides crowd noise vary. The present study provides a unique opportunity to address these criticisms, by conducting a controlled experiment on the impact of crowd noise on officiating in a live tournament setting. Seventeen qualified judges officiated on thirty Thai boxing bouts in a live international tournament setting featuring "home" and "away" boxers. In each bout, judges were randomized into a "noise" (live sound) or "no crowd noise" (noise-canceling headphones and white noise) condition, resulting in 59 judgments in the "no crowd noise" and 61 in the "crowd noise" condition. The results provide the first experimental evidence of the impact of live crowd noise on officials in sport. A cross-classified statistical model indicated that crowd noise had a statistically significant impact, equating to just over half a point per bout (in the context of five round bouts with the "10-point must" scoring system shared with professional boxing). The practical significance of the findings, their implications for officiating and for the future conduct of crowd noise studies are discussed.

  9. Setting and validating the pass/fail score for the NBDHE.

    Science.gov (United States)

    Tsai, Tsung-Hsun; Dixon, Barbara Leatherman

    2013-04-01

    This report describes the overall process used for setting the pass/fail score for the National Board Dental Hygiene Examination (NBDHE). The Objective Standard Setting (OSS) method was used for setting the pass/fail score for the NBDHE. The OSS method requires a panel of experts to determine the criterion items and proportion of these items that minimally competent candidates would answer correctly, the percentage of mastery and the confidence level of the error band. A panel of 11 experts was selected by the Joint Commission on National Dental Examinations (Joint Commission). Panel members represented geographic distribution across the U.S. and had the following characteristics: full-time dental hygiene practitioners with experience in areas of preventive, periodontal, geriatric and special needs care, and full-time dental hygiene educators with experience in areas of scientific basis for dental hygiene practice, provision of clinical dental hygiene services and community health/research principles. Utilizing the expert panel's judgments, the pass/fail score was set and then the score scale was established using the Rasch measurement model. Statistical and psychometric analysis shows the actual failure rate and the OSS failure rate are reasonably consistent (2.4% vs. 2.8%). The analysis also showed that the lowest error of measurement (an index of precision) and the highest reliability (0.97) are achieved at the pass/fail score point. The pass/fail score is a valid guide for making decisions about candidates for dental hygiene licensure. This new standard was reviewed and approved by the Joint Commission and was implemented beginning in 2011.

  10. GRIMHX verification and validation action matrix summary

    International Nuclear Information System (INIS)

    Trumble, E.F.

    1991-12-01

    WSRC-RP-90-026, Certification Plan for Reactor Analysis Computer Codes, describes a series of action items to be completed for certification of reactor analysis computer codes used in Technical Specifications development and for other safety and production support calculations. Validation and verification of the code is an integral part of this process. This document identifies the work performed and documentation generated to satisfy these action items for the Reactor Physics computer code GRIMHX. Each action item is discussed with the justification for its completion. Specific details of the work performed are not included in this document but are found in the references. The publication of this document signals that the validation and verification effort for the GRIMHX code is complete.

  11. Development of natural matrix reference materials for monitoring environmental radioactivity

    International Nuclear Information System (INIS)

    Holmes, A.S.; Houlgate, P.R.; Pang, S.; Brookman, B.

    1992-01-01

    The Department of the Environment commissioned the Laboratory of the Government Chemist to carry out a contract on natural matrix reference materials. A survey of the current availability of such materials in the western world, along with the UK's needs, was conducted. Four suitable matrices were identified for production and validation. Due to a number of unforeseen problems with the collection, processing and validation of the materials, production of the four identified reference materials was not completed in the allocated period of time. In future production of natural matrix reference materials, the time required, the cost and the problems encountered should not be underestimated. Certified natural matrix reference materials are a vital part of traceability in analytical science, and without them there is no absolute method of checking the validity of measurements in the field of radiochemical analysis. (author)

  12. Fully Decentralized Semi-supervised Learning via Privacy-preserving Matrix Completion.

    Science.gov (United States)

    Fierimonte, Roberto; Scardapane, Simone; Uncini, Aurelio; Panella, Massimo

    2016-08-26

    Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems by extending the framework of manifold regularization. The main component of the proposed algorithm consists of a fully distributed computation of the adjacency matrix of the training patterns. To this end, we propose a novel algorithm for low-rank distributed matrix completion, based on the framework of diffusion adaptation. Overall, the distributed semi-supervised algorithm is efficient and scalable, and it can preserve privacy by the inclusion of flexible privacy-preserving mechanisms for similarity computation. The experimental results and comparison on a wide range of standard semi-supervised benchmarks validate our proposal.
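    The low-rank matrix completion at the core of the proposal can be sketched in its centralized form (plain gradient descent on the squared error over the observed entries; the paper's distributed, diffusion-adaptation and privacy-preserving machinery is not reproduced here):

```python
import numpy as np

def complete_matrix(M, mask, rank, iters=2000, lr=0.02, seed=0):
    """Low-rank matrix completion by gradient descent on the squared error
    over observed entries only. A centralized sketch, not the paper's
    distributed algorithm.
    M: data matrix (values where mask == 0 are ignored).
    mask: 1.0 where an entry is observed, 0.0 where it is missing."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U = rng.standard_normal((n, rank)) * 0.1
    V = rng.standard_normal((m, rank)) * 0.1
    for _ in range(iters):
        R = mask * (M - U @ V.T)  # residual on observed entries only
        U = U + lr * (R @ V)
        V = V + lr * (R.T @ U)
    return U @ V.T
```

    In the semi-supervised setting described above, the completed matrix would be the adjacency (similarity) matrix of the training patterns, of which each node observes only a subset of entries.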

  13. Prediction of MHC class II binding affinity using SMM-align, a novel stabilization matrix alignment method

    DEFF Research Database (Denmark)

    Nielsen, Morten; Lundegaard, Claus; Lund, Ole

    2007-01-01

    This makes the correct alignment of a peptide in the binding groove a crucial part of identifying the core of an MHC class II binding motif. Here, we present a novel stabilization matrix alignment method, SMM-align, that allows for direct prediction of peptide:MHC binding affinities. The predictive performance of the method is validated on a large MHC class II benchmark data set covering 14 HLA-DR (human MHC) and three mouse H2-IA alleles. RESULTS: The predictive performance of the SMM-align method was demonstrated to be superior to that of the Gibbs sampler, TEPITOPE, SVRMHC, and MHCpred methods. Cross validation between peptide data sets obtained from different sources demonstrated that direct incorporation of peptide length potentially results in over-fitting of the binding prediction method. Focusing on amino terminal peptide flanking residues (PFR), we demonstrate a consistent gain in predictive performance...
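    The alignment step, scoring every 9-mer core of a peptide against a position-specific stabilization matrix and keeping the best, can be sketched as follows (toy matrix values, not trained SMM-align weights):

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def best_core(peptide, psm):
    """Find the highest-scoring 9-mer binding core of a peptide under a
    9 x 20 position-specific scoring matrix `psm` (rows: core positions,
    columns: amino acids in AMINO order). A toy illustration of the
    alignment step in core-based MHC class II prediction; the matrix
    here is invented, not a trained SMM-align model."""
    best_score, best_start = -np.inf, None
    for start in range(len(peptide) - 9 + 1):
        core = peptide[start:start + 9]
        score = sum(psm[pos, AMINO.index(aa)] for pos, aa in enumerate(core))
        if score > best_score:
            best_score, best_start = score, start
    return best_start, best_score
```

    Training then consists of fitting the matrix entries so that the best-core scores reproduce measured binding affinities, which is the part the SMM-align machinery handles.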

  14. In-depth, high-accuracy proteomics of sea urchin tooth organic matrix

    Directory of Open Access Journals (Sweden)

    Mann Matthias

    2008-12-01

    Full Text Available Abstract Background The organic matrix contained in biominerals plays an important role in regulating mineralization and in determining biomineral properties. However, most components of biomineral matrices remain unknown at present. In sea urchin tooth, which is an important model for developmental biology and biomineralization, only a few matrix components have been identified. The recent publication of the Strongylocentrotus purpuratus genome sequence rendered possible not only the identification of genes potentially coding for matrix proteins, but also the direct identification of proteins contained in matrices of skeletal elements by in-depth, high-accuracy proteomic analysis. Results We identified 138 proteins in the matrix of tooth powder. Only 56 of these proteins were previously identified in the matrices of test (shell and spine. Among the novel components was an interesting group of five proteins containing alanine- and proline-rich neutral or basic motifs separated by acidic glycine-rich motifs. In addition, four of the five proteins contained either one or two predicted Kazal protease inhibitor domains. The major components of tooth matrix were, however, largely identical to the set of spicule matrix proteins and MSP130-related proteins identified in test (shell and spine matrix. Comparison of the matrices of crushed teeth to intact teeth revealed a marked dilution of known intracrystalline matrix proteins and a concomitant increase in some intracellular proteins. Conclusion This report presents the most comprehensive list of sea urchin tooth matrix proteins available at present. The complex mixture of proteins identified may reflect many different aspects of the mineralization process. A comparison between intact tooth matrix, presumably containing odontoblast remnants, and crushed tooth matrix served to differentiate between matrix components and possible contributions of cellular remnants. Because LC-MS/MS-based methods directly...

  15. Urine specimen validity test for drug abuse testing in workplace and court settings.

    Science.gov (United States)

    Lin, Shin-Yu; Lee, Hei-Hwa; Lee, Jong-Feng; Chen, Bai-Hsiun

    2018-01-01

    In recent decades, urine drug testing in the workplace has become common in many countries in the world. There have been several studies concerning the use of the urine specimen validity test (SVT) for drug abuse testing administered in the workplace. However, very little data exist concerning urine SVT results for court specimens, including dilute, substituted, adulterated, and invalid tests. We investigated 21,696 urine drug test samples submitted for SVT from workplace and court settings in southern Taiwan over 5 years. All immunoassay screen-positive urine specimen drug tests were confirmed by gas chromatography/mass spectrometry. We found that the mean 5-year prevalence of tampering (dilute, substituted, or invalid tests) in urine specimens was 1.09% in the workplace setting and 3.81% in the court setting. The mean 5-year percentages of dilute, substituted, and invalid urine specimens from the workplace were 89.2%, 6.8%, and 4.1%, respectively. The mean 5-year percentages of dilute, substituted, and invalid urine specimens from the court were 94.8%, 1.4%, and 3.8%, respectively. No adulterated cases were found among the workplace or court samples. The most common drug identified in the workplace specimens was amphetamine, followed by opiates. The most common drug identified in the court specimens was ketamine, followed by amphetamine. We suggest that all urine specimens taken for drug testing in both workplace and court settings need to be tested for validity. Copyright © 2017. Published by Elsevier B.V.

  16. The use of questionnaires in colour research in real-life settings : In search of validity and methodological pitfalls

    NARCIS (Netherlands)

    Bakker, I.C.; van der Voordt, Theo; Vink, P.; de Boon, J

    2014-01-01

    This research discusses the validity of applying questionnaires in colour research in real-life settings. In the literature, the conclusions concerning the influence of colours on human performance and well-being are often conflicting. This can be caused by the artificial setting of the tests...

  17. Texture zeros in neutrino mass matrix

    Energy Technology Data Exchange (ETDEWEB)

    Dziewit, B., E-mail: bartosz.dziewit@us.edu.pl; Holeczek, J., E-mail: jacek.holeczek@us.edu.pl; Richter, M., E-mail: monikarichter18@gmail.com [University of Silesia, Institute of Physics (Poland); Zajac, S., E-mail: s.zajac@uksw.edu.pl [Cardinal Stefan Wyszyński University in Warsaw, Faculty of Mathematics and Natural Studies (Poland); Zralek, M., E-mail: marek.zralek@us.edu.pl [University of Silesia, Institute of Physics (Poland)

    2017-03-15

    The Standard Model does not explain the hierarchy problem. Before the discovery of the nonzero lepton mixing angle θ{sub 13}, there were high hopes that non-Abelian symmetries could explain the shape of the lepton mixing matrix. Nowadays, assuming one Higgs doublet, it is unlikely that this approach is still viable. Texture zeros, which are associated with Abelian symmetries, are intensively studied. The neutrino mass matrix is a natural place in which to study such symmetries.
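    As a purely combinatorial sketch of what "texture zeros" means here: a Majorana neutrino mass matrix is complex symmetric, so it has six independent entries, and an n-zero texture is a choice of n of them to vanish. The snippet below only counts the patterns; which patterns are compatible with data is the physics question addressed by work like the record above.

```python
from itertools import combinations
from math import comb

# A Majorana neutrino mass matrix is a complex symmetric 3x3 matrix,
# so its independent entries are the upper triangle.
INDEPENDENT = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]

def texture_patterns(n_zeros):
    """Enumerate all ways to place n_zeros vanishing independent entries."""
    return list(combinations(INDEPENDENT, n_zeros))

for n in range(4):
    print(n, len(texture_patterns(n)))  # matches C(6, n): 1, 6, 15, 20
```

    The much-studied two-zero textures are exactly the 15 patterns in the n = 2 row.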

  18. Experimental validation of lead cross sections for scale and MCNP

    International Nuclear Information System (INIS)

    Henrikson, D.J.

    1995-01-01

    Moving spent nuclear fuel between facilities often requires the use of lead-shielded casks. Criticality safety that is based upon calculations requires experimental validation of the fuel matrix and lead cross-section libraries. A series of critical experiments using a high-enriched uranium-aluminum fuel element with a variety of reflectors, including lead, has been identified. Twenty-one configurations were evaluated in this study. The fuel element was modelled for KENO V.a and MCNP 4a using various cross-section sets. The experiments addressed in this report can be used to validate lead-reflected calculations. Factors influencing the calculated k{sub eff} that require further study include the diameters of the styrofoam inserts and homogenization.

  19. Physical validation issue of the NEPTUNE two-phase modelling: validation plan to be adopted, experimental programs to be set up and associated instrumentation techniques developed

    International Nuclear Information System (INIS)

    Pierre Peturaud; Eric Hervieu

    2005-01-01

    Full text of publication follows: A long-term joint development program for the next generation of nuclear reactor simulation tools was launched in 2001 by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique). The NEPTUNE Project constitutes the thermal-hydraulics part of this comprehensive program. Along with the ongoing development of this new two-phase flow software platform, the physical validation of the underlying modelling is a crucial issue, whatever the modelling scale, and the present paper deals with this issue. After a brief overview of the NEPTUNE platform, the general validation strategy to be adopted is first clarified by means of three major features: (i) physical validation in close connection with the concerned industrial applications, (ii) a two-step process (as far as possible) successively focusing on dominant separate models and then assessing the whole modelling capability, (iii) the use of data relevant to the validation aims. Based on this general validation process, a four-step generic work approach has been defined; it includes: (i) a thorough analysis of the concerned industrial applications to identify the key physical phenomena involved and the associated dominant basic models, (ii) an assessment of these models against the available validation information, to specify the additional validation needs and define dedicated validation plans, (iii) an inventory and assessment of existing validation data (with respect to the requirements specified in the previous task) to identify the actual needs for new validation data, (iv) the specification of the new experimental programs to be set up to provide the needed new data. This work approach has been applied to the NEPTUNE software, focusing on 8 high-priority industrial applications, and it has resulted in the definition of (i) the validation plan and experimental programs to be set up for the open-medium 3D modelling...

  20. Intermediate coupling collision strengths from LS coupled R-matrix elements

    International Nuclear Information System (INIS)

    Clark, R.E.H.

    1978-01-01

    Fine-structure collision strengths for transitions between two groups of states, in intermediate coupling and with inclusion of configuration mixing, are obtained from LS-coupled reactance matrix elements (R-matrix elements) and a set of mixing coefficients. The LS-coupled R-matrix elements are transformed to pair coupling using Wigner 6-j coefficients. From these pair-coupled R-matrix elements, together with a set of mixing coefficients, R-matrix elements are obtained that include the intermediate-coupling and configuration-mixing effects. Finally, from the latter R-matrix elements, collision strengths for fine-structure transitions are computed (with inclusion of both intermediate coupling and configuration mixing). (Auth.)
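    The 6-j symbols that carry such recoupling transformations can be evaluated directly, for instance with SymPy's `wigner_6j`. The square-root dimensional factors and phases that multiply the symbol in an actual R-matrix transformation are omitted here; this snippet only evaluates the symbols themselves.

```python
from sympy.physics.wigner import wigner_6j

# The recoupling weight between two angular-momentum coupling schemes
# is a Wigner 6-j symbol (up to dimensional factors and phases).
w = wigner_6j(1, 1, 1, 1, 1, 1)
print(w)  # the all-ones symbol evaluates to 1/6

# Sanity check against the closed form for one vanishing argument:
# {j1 j2 j3; 0 j3 j2} = (-1)^(j1+j2+j3) / sqrt((2*j2+1)*(2*j3+1))
w0 = wigner_6j(1, 1, 1, 0, 1, 1)
print(w0)  # -1/3
```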

  1. Development of a clinical diagnostic matrix for characterizing inherited epidermolysis bullosa.

    Science.gov (United States)

    Yenamandra, V K; Moss, C; Sreenivas, V; Khan, M; Sivasubbu, S; Sharma, V K; Sethuraman, G

    2017-06-01

    Accurately diagnosing the subtype of epidermolysis bullosa (EB) is critical for management and genetic counselling. Modern laboratory techniques are largely inaccessible in developing countries, where the diagnosis remains clinical and often inaccurate. To develop a simple clinical diagnostic tool to aid in the diagnosis and subtyping of EB. We developed a matrix indicating presence or absence of a set of distinctive clinical features (as rows) for the nine most prevalent EB subtypes (as columns). To test an individual patient, presence or absence of these features was compared with the findings expected in each of the nine subtypes to see which corresponded best. If two or more diagnoses scored equally, the diagnosis with the greatest number of specific features was selected. The matrix was tested using findings from 74 genetically characterized patients with EB aged > 6 months by an investigator blinded to molecular diagnosis. For concordance, matrix diagnoses were compared with molecular diagnoses. Overall, concordance between the matrix and molecular diagnoses for the four major types of EB was 91·9%, with a kappa coefficient of 0·88 [95% confidence interval (CI) 0·81-0·95; P < 0·001]. The matrix achieved a 75·7% agreement in classifying EB into its nine subtypes, with a kappa coefficient of 0·73 (95% CI 0·69-0·77; P < 0·001). The matrix appears to be simple, valid and useful in predicting the type and subtype of EB. An electronic version will facilitate further testing. © 2016 British Association of Dermatologists.
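    The matching rule described above (compare presence/absence of features against each subtype's expected profile, then break ties by the number of specific features) can be sketched as follows. The feature names and the three-subtype miniature matrix are invented for illustration; the published matrix covers nine subtypes.

```python
# Hypothetical miniature matrix: expected presence (1) / absence (0)
# of each clinical feature in each subtype (illustrative values only).
MATRIX = {
    "EBS": {"blistering_at_birth": 1, "milia": 0, "nail_dystrophy": 0, "scarring": 0},
    "JEB": {"blistering_at_birth": 1, "milia": 0, "nail_dystrophy": 1, "scarring": 0},
    "DEB": {"blistering_at_birth": 1, "milia": 1, "nail_dystrophy": 1, "scarring": 1},
}

def classify(patient):
    """Pick the subtype agreeing with the patient on the most features;
    ties go to the subtype matching more features that are *present*
    (the more specific findings)."""
    def agreement(subtype):
        feats = MATRIX[subtype]
        matches = sum(patient[f] == v for f, v in feats.items())
        specific = sum(patient[f] == v == 1 for f, v in feats.items())
        return (matches, specific)
    return max(MATRIX, key=agreement)

patient = {"blistering_at_birth": 1, "milia": 1, "nail_dystrophy": 1, "scarring": 0}
print(classify(patient))  # JEB and DEB tie on matches; DEB wins on specifics
```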

  2. Introducing Matrix Management within a Children's Services Setting--Personal Reflections

    Science.gov (United States)

    Brooks, Michael; Kakabadse, Nada K.

    2014-01-01

    This article reflects on the introduction of "matrix management" arrangements for an Educational Psychology Service (EPS) within a Children's Service Directorate of a Local Authority (LA). It seeks to demonstrate critical self-awareness, consider relevant literature with a view to bringing insights to processes and outcomes, and offers…

  3. The Set of Fear Inducing Pictures (SFIP): Development and validation in fearful and nonfearful individuals.

    Science.gov (United States)

    Michałowski, Jarosław M; Droździel, Dawid; Matuszewski, Jacek; Koziejowski, Wojtek; Jednoróg, Katarzyna; Marchewka, Artur

    2017-08-01

    Emotionally charged pictorial materials are frequently used in phobia research, but no existing standardized picture database is dedicated to the study of different phobias. The present work describes the results of two independent studies through which we sought to develop and validate this type of database: a Set of Fear Inducing Pictures (SFIP). In Study 1, 270 fear-relevant and 130 neutral stimuli were rated for fear, arousal, and valence by four groups of participants; small-animal (N = 34), blood/injection (N = 26), social-fearful (N = 35), and nonfearful participants (N = 22). The results from Study 1 were employed to develop the final version of the SFIP, which includes fear-relevant images of social exposure (N = 40), blood/injection (N = 80), spiders/bugs (N = 80), and angry faces (N = 30), as well as 726 neutral photographs. In Study 2, we aimed to validate the SFIP in a sample of spider, blood/injection, social-fearful, and control individuals (N = 66). The fear-relevant images were rated as being more unpleasant and led to greater fear and arousal in fearful than in nonfearful individuals. The fear images differentiated between the three fear groups in the expected directions. Overall, the present findings provide evidence for the high validity of the SFIP and confirm that the set may be successfully used in phobia research.

  4. Shield verification and validation action matrix summary

    International Nuclear Information System (INIS)

    Boman, C.

    1992-02-01

    WSRC-RP-90-26, Certification Plan for Reactor Analysis Computer Codes, describes a series of action items to be completed for certification of reactor analysis computer codes used in Technical Specifications development and for other safety and production support calculations. Validation and verification are an integral part of the certification process. This document identifies the work performed and the documentation generated to satisfy these action items for the SHIELD, SHLDED, GEDIT, GENPRT, FIPROD, FPCALC, and PROCES modules of the SHIELD system; it is not certification of the complete SHIELD system. Complete certification will follow at a later date. Each action item is discussed along with the justification for its completion. Specific details of the work performed are not included in this document but can be found in the references. The validation and verification effort for the SHIELD, SHLDED, GEDIT, GENPRT, FIPROD, FPCALC, and PROCES modules of the SHIELD system computer code is complete.

  5. NLTE steady-state response matrix method.

    Science.gov (United States)

    Faussurier, G.; More, R. M.

    2000-05-01

    A connection between atomic kinetics and non-equilibrium thermodynamics has recently been established by using a collisional-radiative model modified to include line absorption. The calculated net emission can be expressed as a non-local thermodynamic equilibrium (NLTE) symmetric response matrix. In this paper, the connection is extended to both the average-atom model and Busquet's model (RAdiative-Dependent IOnization Model, RADIOM). The main properties of the response matrix remain valid. The RADIOM source function found in the literature leads to a diagonal response matrix, stressing the absence of any frequency redistribution among the frequency groups at this order of calculation.

  6. Transition matrices and orbitals from reduced density matrix theory

    Energy Technology Data Exchange (ETDEWEB)

    Etienne, Thibaud [Université de Lorraine – Nancy, Théorie-Modélisation-Simulation, SRSMC, Boulevard des Aiguillettes 54506, Vandoeuvre-lès-Nancy (France); CNRS, Théorie-Modélisation-Simulation, SRSMC, Boulevard des Aiguillettes 54506, Vandoeuvre-lès-Nancy (France); Unité de Chimie Physique Théorique et Structurale, Université de Namur, Rue de Bruxelles 61, 5000 Namur (Belgium)

    2015-06-28

    In this contribution, we report two different methodologies for characterizing the electronic structure reorganization occurring when a chromophore undergoes an electronic transition. For the first method, we start by setting the theoretical background necessary to the reinterpretation through simple tensor analysis of (i) the transition density matrix and (ii) the natural transition orbitals in the scope of reduced density matrix theory. This novel interpretation is made clearer by a short compendium of the one-particle reduced density matrix theory in a Fock space. The formalism is further applied to two different classes of excited states calculation methods, both requiring a single-determinant reference, that express an excited state as a hole-particle mono-excited configuration expansion, to which particle-hole correlation is coupled (time-dependent Hartree-Fock/time-dependent density functional theory) or not (configuration interaction single/Tamm-Dancoff approximation). For the second methodology presented in this paper, we introduce a novel and complementary concept related to electronic transitions with the canonical transition density matrix and the canonical transition orbitals. Their expression actually reflects the electronic cloud polarisation in the orbital space with a decomposition based on the actual contribution of one-particle excitations from occupied canonical orbitals to virtual ones. This approach validates our novel interpretation of the transition density matrix elements in terms of the Euclidean norm of elementary transition vectors in a linear tensor space. A proper use of these new concepts leads to the conclusion that despite the different principles underlying their construction, they provide two equivalent excited-state topological analyses. This connection is evidenced through simple illustrations of the analysis of electronic transitions in (in)organic dyes.

  7. The algebras of large N matrix mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Halpern, M.B.; Schwartz, C.

    1999-09-16

    Extending early work, we formulate the large N matrix mechanics of general bosonic, fermionic and supersymmetric matrix models, including Matrix theory: The Hamiltonian framework of large N matrix mechanics provides a natural setting in which to study the algebras of the large N limit, including (reduced) Lie algebras, (reduced) supersymmetry algebras and free algebras. We find in particular a broad array of new free algebras which we call symmetric Cuntz algebras, interacting symmetric Cuntz algebras, symmetric Bose/Fermi/Cuntz algebras and symmetric Cuntz superalgebras, and we discuss the role of these algebras in solving the large N theory. Most important, the interacting Cuntz algebras are associated to a set of new (hidden!) local quantities which are generically conserved only at large N. A number of other new large N phenomena are also observed, including the intrinsic nonlocality of the (reduced) trace class operators of the theory and a closely related large N field identification phenomenon which is associated to another set (this time nonlocal) of new conserved quantities at large N.

  8. Validation of the Comprehensive ICF Core Set for obstructive pulmonary diseases from the perspective of physiotherapists.

    Science.gov (United States)

    Rauch, Alexandra; Kirchberger, Inge; Stucki, Gerold; Cieza, Alarcos

    2009-12-01

    The 'Comprehensive ICF Core Set for obstructive pulmonary diseases' (OPD) is an application of the International Classification of Functioning, Disability and Health (ICF) and represents the typical spectrum of problems in functioning of patients with OPD. To optimize a multidisciplinary and patient-oriented approach in pulmonary rehabilitation, in which physiotherapy plays an important role, the ICF offers a standardized language and understanding of functioning. For the Core Set to be a useful tool for physiotherapists in the rehabilitation of patients with OPD, the objective of this study was to validate the Comprehensive ICF Core Set for OPD from the perspective of physiotherapists. A three-round Delphi survey of physiotherapists experienced in the treatment of OPD asked about the problems, resources and aspects of environment of patients with OPD that physiotherapists treat in clinical practice (physiotherapy intervention categories). Responses were linked to the ICF and compared with the existing Comprehensive ICF Core Set for OPD. Fifty-one physiotherapists from 18 countries named 904 single terms that were linked to 124 ICF categories, 9 personal factors and 16 'not classified' concepts. The identified ICF categories were mainly third-level categories, compared with the mainly second-level categories of the Comprehensive ICF Core Set for OPD. Seventy of the ICF categories, all personal factors and 15 'not classified' concepts gained more than 75% agreement among the physiotherapists. Of these ICF categories, 55 (78.5%) were covered by the Comprehensive ICF Core Set for OPD. The validity of the Comprehensive ICF Core Set for OPD was largely supported by the physiotherapists. Nevertheless, the ICF categories that were not covered, the personal factors and the 'not classified' terms offer opportunities towards the final ICF Core Set for OPD and further research to strengthen the physiotherapists' perspective in pulmonary rehabilitation.

  9. TRAC-P validation test matrix. Revision 1.0

    International Nuclear Information System (INIS)

    Hughes, E.D.; Boyack, B.E.

    1997-01-01

    This document briefly describes the elements of the Nuclear Regulatory Commission's (NRC's) software quality assurance program leading to software (code) qualification and identifies a test matrix for qualifying Transient Reactor Analysis Code (TRAC)-Pressurized Water Reactor Version (-P), or TRAC-P, to the NRC's software quality assurance requirements. Code qualification is the outcome of several software life-cycle activities, specifically, (1) Requirements Definition, (2) Design, (3) Implementation, and (4) Qualification Testing. The major objective of this document is to define the TRAC-P Qualification Testing effort

  10. The development and validation of an interprofessional scale to assess teamwork in mental health settings.

    Science.gov (United States)

    Tomizawa, Ryoko; Yamano, Mayumi; Osako, Mitue; Misawa, Takeshi; Hirabayashi, Naotugu; Oshima, Nobuo; Sigeta, Masahiro; Reeves, Scott

    2014-09-01

    Currently, no evaluative scale exists to assess the quality of interprofessional teamwork in mental health settings across the globe. As a result, little is known about the detailed process of team development within this setting. The purpose of this study is to develop and validate a global interprofessional scale that assesses teamwork in mental health settings using an international comparative study based in Japan and the United States. This report provides a description of this study and reports progress made to date. Specifically, it outlines work on literature reviews to identify evaluative teamwork tools as well as identify relevant teamwork models and theories. It also outlines plans for empirical work that will be undertaken in both Japan and the United States.

  11. Validation of Fall Risk Assessment Specific to the Inpatient Rehabilitation Facility Setting.

    Science.gov (United States)

    Thomas, Dan; Pavic, Andrea; Bisaccia, Erin; Grotts, Jonathan

    2016-09-01

    To evaluate and compare the Morse Fall Scale (MFS) and the Casa Colina Fall Risk Assessment Scale (CCFRAS) for identification of patients at risk for falling in an acute inpatient rehabilitation facility. The primary objective of this study was to perform a retrospective validation study of the CCFRAS, specifically for use in the inpatient rehabilitation facility (IRF) setting. Retrospective validation study. The study was approved under expedited review by the local Institutional Review Board. Data were collected on all patients admitted to Cottage Rehabilitation Hospital (CRH), a 38-bed acute inpatient rehabilitation hospital, from March 2012 to August 2013. Patients were excluded from the study if they had a length of stay less than 3 days or age less than 18. The area under the receiver operating characteristic curve (AUC) and the diagnostic odds ratio were used to examine the differences between the MFS and CCFRAS. AUC between fall scales was compared using the DeLong test. There were 931 patients included in the study with 62 (6.7%) patient falls. The average age of the population was 68.8 with 503 males (51.2%). The AUC was 0.595 and 0.713 for the MFS and CCFRAS, respectively (P = 0.006). The diagnostic odds ratio of the MFS was 2.0 and 3.6 for the CCFRAS using the recommended cutoffs of 45 for the MFS and 80 for the CCFRAS. The CCFRAS appears to be a better tool for distinguishing fallers from nonfallers in the IRF setting. The assessment and identification of patients at high risk for falling is important to implement specific precautions and care for these patients to reduce their risk of falling. The CCFRAS is more clinically relevant in identifying patients at high risk for falling in the IRF setting compared to other fall risk assessments. Implementation of this scale may lead to a reduction in fall rate and injuries from falls as it more appropriately identifies patients at high risk for falling. © 2015 Association of Rehabilitation Nurses.
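    The two summary statistics used in this comparison can be computed directly from risk scores and fall outcomes. The sketch below uses invented toy data, not the study's, and a hypothetical cutoff of 60.

```python
def diagnostic_odds_ratio(scores, fell, cutoff):
    """DOR = (TP * TN) / (FN * FP) for a 'high risk' call at `cutoff`."""
    tp = sum(s >= cutoff and f for s, f in zip(scores, fell))
    fn = sum(s < cutoff and f for s, f in zip(scores, fell))
    fp = sum(s >= cutoff and not f for s, f in zip(scores, fell))
    tn = sum(s < cutoff and not f for s, f in zip(scores, fell))
    return (tp * tn) / (fn * fp)

def auc(scores, fell):
    """Probability that a faller outscores a non-faller (ties count 1/2)."""
    pos = [s for s, f in zip(scores, fell) if f]
    neg = [s for s, f in zip(scores, fell) if not f]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: risk scores with fall outcomes (True = patient fell).
scores = [30, 50, 55, 70, 85, 90, 40, 95]
fell = [False, False, True, False, True, True, False, True]
print(auc(scores, fell))                            # 0.9375
print(diagnostic_odds_ratio(scores, fell, 60))      # 9.0
```

    The DeLong test used in the study compares two such AUCs on paired data; that step is not reproduced here.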

  12. Community Priority Index: utility, applicability and validation for priority setting in community-based participatory research

    Directory of Open Access Journals (Sweden)

    Hamisu M. Salihu

    2015-07-01

    Full Text Available Background. Providing practitioners with an intuitive measure for priority setting that can be combined with diverse data collection methods is a necessary step to foster accountability of the decision-making process in community settings. Yet, there is a lack of easy-to-use, but methodologically robust measures that can be feasibly implemented for reliable decision-making in community settings. To address this important gap in community-based participatory research (CBPR), the purpose of this study was to demonstrate the utility, applicability, and validation of a community priority index in a community-based participatory research setting. Design and Methods. Mixed-method study that combined focus group findings, the nominal group technique with six key informants, and the generation of a Community Priority Index (CPI) that integrated community importance, changeability, and target populations. Bootstrapping and simulation were performed for validation. Results. For pregnant mothers, the top three highly important and highly changeable priorities were: stress (CPI=0.85; 95%CI: 0.70, 1.00), lack of affection (CPI=0.87; 95%CI: 0.69, 1.00), and nutritional issues (CPI=0.78; 95%CI: 0.48, 1.00). For non-pregnant women, top priorities were: low health literacy (CPI=0.87; 95%CI: 0.69, 1.00), low educational attainment (CPI=0.78; 95%CI: 0.48, 1.00), and lack of self-esteem (CPI=0.72; 95%CI: 0.44, 1.00). For children and adolescents, the top three priorities were: obesity (CPI=0.88; 95%CI: 0.69, 1.00), low self-esteem (CPI=0.81; 95%CI: 0.69, 0.94), and negative attitudes toward education (CPI=0.75; 95%CI: 0.50, 0.94). Conclusions. This study demonstrates the applicability of the CPI as a simple and intuitive measure for priority setting in CBPR.
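    A priority index with a bootstrap confidence interval can be sketched as follows. The exact CPI formula is an assumption here (product of mean importance and mean changeability ratings, each coded on [0, 1]); it is not necessarily the published formulation, and the informant ratings are made up.

```python
import random

def cpi(importance, changeability):
    """Assumed CPI: product of mean importance and mean changeability,
    each rating coded on [0, 1]. This formula is illustrative only."""
    mi = sum(importance) / len(importance)
    mc = sum(changeability) / len(changeability)
    return mi * mc

def bootstrap_ci(importance, changeability, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI, resampling informants (paired ratings)."""
    rng = random.Random(seed)
    pairs = list(zip(importance, changeability))
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        stats.append(cpi([i for i, _ in sample], [c for _, c in sample]))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

# Six key informants rate one item, e.g. "stress" (1 = yes, 0 = no);
# the data are invented for illustration.
imp = [1, 1, 1, 1, 0, 1]
chg = [1, 1, 0, 1, 1, 1]
point = cpi(imp, chg)
lo, hi = bootstrap_ci(imp, chg)
print(round(point, 2), round(lo, 2), round(hi, 2))
```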

  13. TRAC-P validation test matrix. Revision 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Hughes, E.D.; Boyack, B.E.

    1997-09-05

    This document briefly describes the elements of the Nuclear Regulatory Commission's (NRC's) software quality assurance program leading to software (code) qualification and identifies a test matrix for qualifying Transient Reactor Analysis Code (TRAC)-Pressurized Water Reactor Version (-P), or TRAC-P, to the NRC's software quality assurance requirements. Code qualification is the outcome of several software life-cycle activities, specifically, (1) Requirements Definition, (2) Design, (3) Implementation, and (4) Qualification Testing. The major objective of this document is to define the TRAC-P Qualification Testing effort.

  14. Structured decomposition design of partial Mueller matrix polarimeters.

    Science.gov (United States)

    Alenin, Andrey S; Scott Tyo, J

    2015-07-01

    Partial Mueller matrix polarimeters (pMMPs) are active sensing instruments that probe a scattering process with a set of polarization states and analyze the scattered light with a second set of polarization states. Unlike conventional Mueller matrix polarimeters, pMMPs do not attempt to reconstruct the entire Mueller matrix. With proper choice of generator and analyzer states, a subset of the Mueller matrix space can be reconstructed with fewer measurements than that of the full Mueller matrix polarimeter. In this paper we consider the structure of the Mueller matrix and our ability to probe it using a reduced number of measurements. We develop analysis tools that allow us to relate the particular choice of generator and analyzer polarization states to the portion of Mueller matrix space that the instrument measures, as well as develop an optimization method that is based on balancing the signal-to-noise ratio of the resulting instrument with the ability of that instrument to accurately measure a particular set of desired polarization components with as few measurements as possible. In the process, we identify 10 classes of pMMP systems, for which the space coverage is immediately known. We demonstrate the theory with a numerical example that designs partial polarimeters for the task of monitoring the damage state of a material as presented earlier by Hoover and Tyo [Appl. Opt. 46, 8364 (2007)]. We show that we can reduce the polarimeter to making eight measurements while still covering the Mueller matrix subspace spanned by the objects.
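    The measurement model behind a pMMP can be written compactly: each detected intensity is a bilinear form in the generator and analyzer Stokes vectors, so each row of the instrument matrix is a Kronecker product. The sketch below (ideal polarization states, and our own arbitrary choice of 8 of the 16 pairings, not the paper's optimized design) shows such a reduced instrument probing an 8-dimensional subspace of the 16-dimensional Mueller space.

```python
import numpy as np

def measurement_matrix(generators, analyzers):
    """Each row maps vec(M) (row-major) to one detected intensity:
    i = a^T M g = kron(a, g) . vec(M)."""
    return np.array([np.kron(a, g) for g, a in zip(generators, analyzers)])

# Ideal Stokes vectors: horizontal, vertical, +45 degrees, right circular.
H = np.array([1.0, 1.0, 0.0, 0.0])
V = np.array([1.0, -1.0, 0.0, 0.0])
P45 = np.array([1.0, 0.0, 1.0, 0.0])
R = np.array([1.0, 0.0, 0.0, 1.0])

# A partial polarimeter: only 8 of the 16 generator/analyzer pairings.
pairs = [(g, a) for g in (H, V) for a in (H, V, P45, R)]
W = measurement_matrix([g for g, _ in pairs], [a for _, a in pairs])
print(W.shape, np.linalg.matrix_rank(W))  # 8 rows spanning 8 of 16 dimensions
```

    The rank of W tells you which part of vec(M) the reduced instrument can recover (via the pseudoinverse); components outside its row space are simply not measured.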

  15. Validity and validation of expert (Q)SAR systems.

    Science.gov (United States)

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal), principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set and validation. ECOSAR, BIOWIN and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments, the training set contains more than four chemicals. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in a predictivity of at least 64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, only a limited number of the chemicals in the training set are presented to the user to support the prediction. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.
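    The ECOSAR-style prediction described above is, at its core, a per-class linear regression of log toxicity against log Kow. The sketch below fits such a line; the training values are invented for illustration and are not ECOSAR's published coefficients.

```python
import numpy as np

# Hypothetical training set for one chemical class: measured log(LC50)
# against log Kow (all values invented for illustration).
log_kow = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
log_lc50 = np.array([0.6, -0.2, -1.1, -1.9, -2.8])

# Fit log(LC50) = slope * log Kow + intercept by least squares.
slope, intercept = np.polyfit(log_kow, log_lc50, 1)

def predict(kow):
    """Predicted log(LC50) for a chemical of this class."""
    return slope * kow + intercept

print(round(slope, 2), round(intercept, 2), round(predict(3.5), 2))
```

    More hydrophobic chemicals (higher log Kow) get lower predicted LC50, i.e. higher predicted toxicity, which is the usual trend such class-specific QSARs encode.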

  16. The validation of language tests

    African Journals Online (AJOL)

    KATEVG

    Stellenbosch Papers in Linguistics, Vol. ... validation is necessary because of the major impact which test results can have on the many ... Messick (1989: 20) introduces his much-quoted progressive matrix (cf. table 1), which ... argue that current accounts of validity only superficially address theories of measurement.

  17. Validation of a global scale to assess the quality of interprofessional teamwork in mental health settings.

    Science.gov (United States)

    Tomizawa, Ryoko; Yamano, Mayumi; Osako, Mitue; Hirabayashi, Naotugu; Oshima, Nobuo; Sigeta, Masahiro; Reeves, Scott

    2017-12-01

    Few scales currently exist to assess the quality of interprofessional teamwork through team members' perceptions of working together in mental health settings. The purpose of this study was to revise and validate an interprofessional scale to assess the quality of teamwork in inpatient psychiatric units and to use it multi-nationally. A literature review was undertaken to identify evaluative teamwork tools and develop an additional 12 items to ensure a broad global focus. Focus group discussions considered adaptation to different care systems using subjective judgements from 11 participants in a pre-test of items. Data quality, construct validity, reproducibility, and internal consistency were investigated in the survey using an international comparative design. Exploratory factor analysis yielded five factors with 21 items: 'patient/community centred care', 'collaborative communication', 'interprofessional conflict', 'role clarification', and 'environment'. High overall internal consistency, reproducibility, adequate face validity, and reasonable construct validity were shown in the USA and Japan. The revised Collaborative Practice Assessment Tool (CPAT) is a valid measure to assess the quality of interprofessional teamwork in psychiatry and identifies the best strategies to improve team performance. Furthermore, the revised scale will generate more rigorous evidence for collaborative practice in psychiatry internationally.

  18. Validity and predictive ability of the juvenile arthritis disease activity score based on CRP versus ESR in a Nordic population-based setting

    DEFF Research Database (Denmark)

    Nordal, E B; Zak, M; Aalto, K

    2012-01-01

    To compare the juvenile arthritis disease activity score (JADAS) based on C reactive protein (CRP) (JADAS-CRP) with JADAS based on erythrocyte sedimentation rate (ESR) (JADAS-ESR) and to validate JADAS in a population-based setting.

  19. Position Error Covariance Matrix Validation and Correction

    Science.gov (United States)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
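The validation idea described above can be illustrated with a standard Monte Carlo consistency check: if a predicted covariance is a faithful Gaussian description of the position errors, the squared Mahalanobis distances of those errors follow a chi-squared distribution with three degrees of freedom, whose mean is 3. A minimal sketch, in which the covariance values and sample size are illustrative assumptions rather than data from the presentation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3 position error covariance (km^2); values are illustrative.
P = np.array([[4.0, 0.5, 0.2],
              [0.5, 2.0, 0.1],
              [0.2, 0.1, 1.0]])

# Simulated position errors drawn from the assumed Gaussian distribution.
errors = rng.multivariate_normal(np.zeros(3), P, size=20000)

# Squared Mahalanobis distance of each error w.r.t. the predicted covariance.
P_inv = np.linalg.inv(P)
d2 = np.einsum("ij,jk,ik->i", errors, P_inv, errors)

# If P faithfully represents the uncertainty, d2 is chi-squared with
# 3 degrees of freedom, so its sample mean should be close to 3.
print(round(d2.mean(), 2))
```

In practice the same statistic would be formed from observed miss distances; a sample mean far from 3 signals that the predicted covariance needs correction.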

  20. Validation of KENO, ANISN and Hansen-Roach cross-section set on plutonium oxide and metal fuel system

    International Nuclear Information System (INIS)

    Matsumoto, Tadakuni; Yumoto, Ryozo; Nakano, Koh.

    1980-01-01

    In the previous report, the authors discussed the validity of KENO, ANISN and the Hansen-Roach 16-group cross-section set on the critical plutonium nitrate solution systems with various geometries, absorbers and neutron interactions. The purpose of the present report is to examine the validity of the same calculation systems on the homogeneous plutonium oxide and plutonium-uranium mixed oxide fuels with various density values. Eleven experiments adopted for validation are summarized. The first six experiments were performed at the Pacific Northwest Laboratory of Battelle Memorial Institute, and the remaining five at the Los Alamos Scientific Laboratory. The characteristics of the core fuel are given, and the isotopic composition of plutonium, the relation between H/(Pu + U) atomic ratio and fuel density as compared with the atomic ratios of PuO2 and mixed oxides in powder storage and pellet fabrication processes, and critical core dimensions and reflector conditions are shown. The effective multiplication factors were calculated with the KENO code. In the case of the metal fuels with simple sphere geometry, additional calculations with the ANISN code were performed. The criticality calculation system composed of KENO, ANISN and the Hansen-Roach cross-section set was found to be valid for calculating the criticality of plutonium oxide, plutonium-uranium mixed oxide, plutonium metal and uranium metal fuel systems as well as of plutonium solution systems with various geometries, absorbers and neutron interactions. There seem to remain some problems in the method for evaluating experimental correction. Some discussions follow. (Wakatsuki, Y.)

  1. Analysis of three sets of SWIW tracer test data using a two-population complex fracture model for matrix diffusion and sorption

    International Nuclear Information System (INIS)

    Doughty, Christine; Chin-Fu Tsang

    2009-03-01

    This study has been undertaken to obtain a better understanding of the processes underlying retention of radionuclides in fractured rock by using different model conceptualisations when interpreting SWIW tests. In particular the aim is to infer the diffusion and sorption parameters from the SWIW test data by matching tracer breakthrough curves (BTC) with a complex fracture model. The model employs two populations for diffusion and sorption. One population represents the semi-infinite rock matrix and the other represents finite blocks that can become saturated, thereafter accepting no further diffusion or sorption. For the non-sorbing tracer uranine, both the finite and the semi-infinite populations play a distinct role in controlling BTC. For the sorbing tracers Cs and Rb the finite population does not saturate, but acts essentially semi-infinite, thus the BTC behaviour is comparable to that obtained for a model containing only a semi-infinite rock matrix. The ability to match BTC for both sorbing and non-sorbing tracers for these three different SWIW data sets demonstrates that the two-population complex fracture model may be useful to analyze SWIW tracer test data in general. One of the two populations should be the semi-infinite rock matrix and the other finite blocks that can saturate. The latter can represent either rock blocks within the fracture, a fracture skin zone or stagnation zones. Three representative SWIW tracer tests recently conducted by SKB have been analyzed with a complex fracture model employing two populations for diffusion and sorption, one population being the semi-infinite rock matrix and the other, finite blocks. The results show that by adjusting diffusion and sorption parameters of the model, a good match with field data is obtained for BTC of both conservative and non-conservative tracers simultaneously.

  2. Novel image analysis methods for quantification of in situ 3-D tendon cell and matrix strain.

    Science.gov (United States)

    Fung, Ashley K; Paredes, J J; Andarawis-Puri, Nelly

    2018-01-23

    Macroscopic tendon loads modulate the cellular microenvironment leading to biological outcomes such as degeneration or repair. Previous studies have shown that damage accumulation and the phases of tendon healing are marked by significant changes in the extracellular matrix, but it remains unknown how mechanical forces of the extracellular matrix are translated to mechanotransduction pathways that ultimately drive the biological response. Our overarching hypothesis is that the unique relationship between extracellular matrix strain and cell deformation will dictate biological outcomes, prompting the need for quantitative methods to characterize the local strain environment. While 2-D methods have successfully calculated matrix strain and cell deformation, 3-D methods are necessary to capture the increased complexity that can arise due to high levels of anisotropy and out-of-plane motion, particularly in the disorganized, highly cellular, injured state. In this study, we validated the use of digital volume correlation methods to quantify 3-D matrix strain using images of naïve tendon cells, the collagen fiber matrix, and injured tendon cells. Additionally, naïve tendon cell images were used to develop novel methods for 3-D cell deformation and 3-D cell-matrix strain, which is defined as a quantitative measure of the relationship between matrix strain and cell deformation. The results support that these methods can be used to detect strains with high accuracy and can be further extended to an in vivo setting for observing temporal changes in cell and matrix mechanics during degeneration and healing. Copyright © 2017. Published by Elsevier Ltd.

  3. Predicting death from kala-azar: construction, development, and validation of a score set and accompanying software.

    Science.gov (United States)

    Costa, Dorcas Lamounier; Rocha, Regina Lunardi; Chaves, Eldo de Brito Ferreira; Batista, Vivianny Gonçalves de Vasconcelos; Costa, Henrique Lamounier; Costa, Carlos Henrique Nery

    2016-01-01

    Early identification of patients at higher risk of progressing to severe disease and death is crucial for implementing therapeutic and preventive measures; this could reduce the morbidity and mortality from kala-azar. We describe a score set composed of four scales in addition to software for quick assessment of the probability of death from kala-azar at the point of care. Data from 883 patients diagnosed between September 2005 and August 2008 were used to derive the score set, and data from 1,031 patients diagnosed between September 2008 and November 2013 were used to validate the models. Stepwise logistic regression analyses were used to derive the optimal multivariate prediction models. Model performance was assessed by its discriminatory accuracy. A computational specialist system (Kala-Cal®) was developed to speed up the calculation of the probability of death based on clinical scores. The clinical prediction score showed high discrimination (area under the curve [AUC] 0.90) for distinguishing death from survival for children ≤2 years old. Performance improved after adding laboratory variables (AUC 0.93). The clinical score showed equivalent discrimination (AUC 0.89) for older children and adults, which also improved after including laboratory data (AUC 0.92). The score set also showed a high, although lower, discrimination when applied to the validation cohort. This score set and Kala-Cal® software may help identify individuals with the greatest probability of death. The associated software may speed up the calculation of the probability of death based on clinical scores and assist physicians in decision-making.

  4. Development of a tool to measure person-centered maternity care in developing settings: validation in a rural and urban Kenyan population.

    Science.gov (United States)

    Afulani, Patience A; Diamond-Smith, Nadia; Golub, Ginger; Sudhinaraset, May

    2017-09-22

    Person-centered reproductive health care is recognized as critical to improving reproductive health outcomes. Yet, little research exists on how to operationalize it. We extend the literature in this area by developing and validating a tool to measure person-centered maternity care. We describe the process of developing the tool and present the results of psychometric analyses to assess its validity and reliability in a rural and urban setting in Kenya. We followed standard procedures for scale development. First, we reviewed the literature to define our construct and identify domains, and developed items to measure each domain. Next, we conducted expert reviews to assess content validity; and cognitive interviews with potential respondents to assess clarity, appropriateness, and relevance of the questions. The questions were then refined and administered in surveys; and the survey results were used to assess construct and criterion validity and reliability. The exploratory factor analysis yielded one dominant factor in both the rural and urban settings. Three factors with eigenvalues greater than one were identified for the rural sample and four factors for the urban sample. Thirty of the 38 items administered in the survey were retained based on the factor loadings and the correlations between items. Twenty-five items load very well onto a single factor in both the rural and urban sample, with five items loading well in either the rural or urban sample, but not in both samples. These 30 items also load on three sub-scales that we created to measure dignified and respectful care, communication and autonomy, and supportive care. Cronbach's alpha for the main scale is greater than 0.8 in both samples, and the alphas for the sub-scales are between 0.6 and 0.8. The main scale and sub-scales are correlated with global measures of satisfaction with maternity services, suggesting criterion validity. We present a 30-item scale with three sub-scales to measure person-centered maternity care.
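The internal-consistency statistic reported in scale-validation studies of this kind is typically Cronbach's alpha, computed from the item variances and the variance of the total score. A minimal sketch of the standard formula, with toy data assumed purely for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Perfectly correlated items give the maximum alpha of 1.0.
x = np.arange(1.0, 6.0)
data = np.column_stack([x, x, x])
print(round(cronbach_alpha(data), 3))  # → 1.0
```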

  5. The improved Apriori algorithm based on matrix pruning and weight analysis

    Science.gov (United States)

    Lang, Zhenhong

    2018-04-01

    This paper draws on matrix compression and weight analysis algorithms to propose an improved Apriori algorithm based on matrix pruning and weight analysis. After the transactional database is scanned only once, the algorithm constructs a boolean transaction matrix. By counting the ones in the rows and columns of the matrix, infrequent item sets are pruned and a new candidate item set is formed. Then the item weights, the transaction weights and the weighted support for items are calculated, yielding the frequent item sets. The experimental results show that the improved Apriori algorithm not only reduces the number of repeated scans of the database, but also improves the efficiency of data correlation mining.
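The matrix-pruning step described above can be sketched in a few lines: build the boolean transaction matrix, drop columns whose sums fall below the support threshold, and obtain pair supports with column-wise ANDs instead of rescanning the database. The transactions and threshold below are illustrative assumptions, and the paper's weighting step is omitted:

```python
import numpy as np
from itertools import combinations

# Toy transaction database; items are strings. Illustrative only.
transactions = [
    {"a", "b", "c"},
    {"a", "c"},
    {"a", "d"},
    {"b", "c"},
    {"a", "b", "c"},
]
min_support = 2  # absolute support threshold

items = sorted(set().union(*transactions))
# Boolean transaction matrix: rows = transactions, columns = items.
M = np.array([[item in t for item in items] for t in transactions], dtype=bool)

# Prune: column sums give each item's support; infrequent columns are
# removed so they never enter candidate generation.
support = M.sum(axis=0)
keep = support >= min_support
M, items = M[:, keep], [it for it, k in zip(items, keep) if k]

# Candidate 2-itemsets: the support of {i, j} is the count of rows where
# both columns are True (a column-wise AND instead of a database rescan).
frequent_pairs = {
    (items[i], items[j]): int((M[:, i] & M[:, j]).sum())
    for i, j in combinations(range(len(items)), 2)
    if (M[:, i] & M[:, j]).sum() >= min_support
}
print(frequent_pairs)  # → {('a', 'b'): 2, ('a', 'c'): 3, ('b', 'c'): 3}
```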

  6. GoM Diet Matrix

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set was taken from CRD 08-18 at the NEFSC. Specifically, the Gulf of Maine diet matrix was developed for the EMAX exercise described in that center...

  7. A strategy for developing representative germplasm sets for systematic QTL validation, demonstrated for apple, peach, and sweet cherry

    NARCIS (Netherlands)

    Peace, C.P.; Luby, J.; Weg, van de W.E.; Bink, M.C.A.M.; Iezzoni, A.F.

    2014-01-01

    Horticultural crop improvement would benefit from a standardized, systematic, and statistically robust procedure for validating quantitative trait loci (QTLs) in germplasm relevant to breeding programs. Here, we describe and demonstrate a strategy for developing reference germplasm sets of

  8. BESST (Bochum Emotional Stimulus Set)--a pilot validation study of a stimulus set containing emotional bodies and faces from frontal and averted views.

    Science.gov (United States)

    Thoma, Patrizia; Soria Bauser, Denise; Suchan, Boris

    2013-08-30

    This article introduces the freely available Bochum Emotional Stimulus Set (BESST), which contains pictures of bodies and faces depicting either a neutral expression or one of the six basic emotions (happiness, sadness, fear, anger, disgust, and surprise), presented from two different perspectives (0° frontal view vs. camera averted by 45° to the left). The set comprises 565 frontal view and 564 averted view pictures of real-life bodies with masked facial expressions and 560 frontal and 560 averted view faces which were synthetically created using the FaceGen 3.5 Modeller. All stimuli were validated in terms of categorization accuracy and the perceived naturalness of the expression. Additionally, each facial stimulus was morphed into three age versions (20/40/60 years). The results show high recognition of the intended facial expressions, even under speeded forced-choice conditions, as corresponds to common experimental settings. The average naturalness ratings for the stimuli range between medium and high. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  9. Betatron coupling: Merging Hamiltonian and matrix approaches

    Directory of Open Access Journals (Sweden)

    R. Calaga

    2005-03-01

    Betatron coupling is usually analyzed using either matrix formalism or Hamiltonian perturbation theory. The latter is less exact but provides a better physical insight. In this paper direct relations are derived between the two formalisms. This makes it possible to interpret the matrix approach in terms of resonances, as well as use results of both formalisms indistinctly. An approach to measure the complete coupling matrix and its determinant from turn-by-turn data is presented. Simulations using Methodical Accelerator Design (MAD-X), an accelerator design and tracking program, were performed to validate the relations and understand the scope of their application to real accelerators such as the Relativistic Heavy Ion Collider.

  10. The impact of episodic nonequilibrium fracture-matrix flow on geological repository performance

    International Nuclear Information System (INIS)

    Buscheck, T.A.; Nitao, J.J.; Chestnut, D.A.

    1991-01-01

    Adequate representation of fracture-matrix interaction during episodic infiltration events is crucial in making valid hydrological predictions of repository performance at Yucca Mountain. Various approximations have been applied to represent fracture-matrix flow interaction, including the Equivalent Continuum Model (ECM), which assumes capillary equilibrium between fractures and matrix, and the Fracture-Matrix Model (FMM), which accounts for nonequilibrium fracture-matrix flow. We analyze the relative impact of matrix imbibition on episodic nonequilibrium fracture-matrix flow for the eight major hydrostratigraphic units in the unsaturated zone at Yucca Mountain. Comparisons are made between ECM and FMM predictions to determine the applicability of the ECM. The implications of nonequilibrium fracture-matrix flow on radionuclide transport are also discussed

  11. Identification and Validation of a New Set of Five Genes for Prediction of Risk in Early Breast Cancer

    Directory of Open Access Journals (Sweden)

    Giorgio Mustacchi

    2013-05-01

    Molecular tests predicting the outcome of breast cancer patients based on gene expression levels can be used to assist in making treatment decisions after consideration of conventional markers. In this study we identified a subset of 20 mRNAs differentially regulated in breast cancer by analyzing several publicly available array gene expression data sets using the R/Bioconductor package. Using RT-qPCR we evaluated 261 consecutive invasive breast cancer cases, not selected for age, adjuvant treatment, nodal and estrogen receptor status, from paraffin-embedded sections. The biological samples dataset was split into a training set (137 cases) and a validation set (124 cases). The gene signature was developed on the training set, and a multivariate stepwise Cox analysis selected five genes independently associated with DFS: FGF18 (HR = 1.13, p = 0.05), BCL2 (HR = 0.57, p = 0.001), PRC1 (HR = 1.51, p = 0.001), MMP9 (HR = 1.11, p = 0.08), SERF1A (HR = 0.83, p = 0.007). These five genes were combined into a linear score (signature), weighted according to the coefficients of the Cox model, as: 0.125·FGF18 − 0.560·BCL2 + 0.409·PRC1 + 0.104·MMP9 − 0.188·SERF1A (HR = 2.7, 95% CI = 1.9–4.0, p < 0.001). The signature was then evaluated on the validation set, assessing the discrimination ability by a Kaplan-Meier analysis using the same cut-offs classifying patients at low, intermediate or high risk of disease relapse as defined on the training set (p < 0.001). Our signature, after further clinical validation, could be proposed as a prognostic signature for disease-free survival in breast cancer patients where the indication for adjuvant chemotherapy added to endocrine treatment is uncertain.
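Given the published coefficients, the linear signature reduces to a weighted sum of the five expression values. A sketch with hypothetical expression values; the gene weights are those quoted in the abstract, everything else is assumed for illustration:

```python
# Hypothetical normalized expression values for one patient (illustrative only).
expr = {"FGF18": 1.2, "BCL2": 0.8, "PRC1": 1.5, "MMP9": 0.9, "SERF1A": 1.1}

# Cox-model coefficients reported in the abstract.
weights = {"FGF18": 0.125, "BCL2": -0.560, "PRC1": 0.409,
           "MMP9": 0.104, "SERF1A": -0.188}

# The signature is the weighted sum; patients are then binned into
# low/intermediate/high risk using cut-offs defined on the training set.
risk_score = sum(weights[g] * expr[g] for g in weights)
print(round(risk_score, 4))  # → 0.2023
```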

  12. The Lehmer Matrix and Its Recursive Analogue

    Science.gov (United States)

    2010-01-01

    LU factorization of the matrix A gives det A = det U = ∏_{i=1}^{n} (2i−1)/i². The nth Catalan number is given in terms of binomial coefficients by Cn...
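The quoted determinant identity is easy to check numerically for the Lehmer matrix, whose entries are min(i, j)/max(i, j); a short sketch in which the matrix size n is an arbitrary choice:

```python
import numpy as np

n = 6
# Lehmer matrix: A[i][j] = min(i, j) / max(i, j), with 1-based indices.
A = np.array([[min(i, j) / max(i, j) for j in range(1, n + 1)]
              for i in range(1, n + 1)])

det_numeric = np.linalg.det(A)

# Closed form quoted in the abstract: det A = prod_{i=1}^{n} (2i - 1) / i^2.
det_formula = 1.0
for i in range(1, n + 1):
    det_formula *= (2 * i - 1) / (i * i)

print(abs(det_numeric - det_formula) < 1e-12)  # → True
```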

  13. Development and construct validation of the Client-Centredness of Goal Setting (C-COGS) scale.

    Science.gov (United States)

    Doig, Emmah; Prescott, Sarah; Fleming, Jennifer; Cornwell, Petrea; Kuipers, Pim

    2015-07-01

    Client-centred philosophy is integral to occupational therapy practice and client-centred goal planning is considered fundamental to rehabilitation. Evaluation of whether goal-planning practices are client-centred requires an understanding of the client's perspective about goal-planning processes and practices. The Client-Centredness of Goal Setting (C-COGS) was developed for use by practitioners who seek to be more client-centred and who require a scale to guide and evaluate individually orientated practice, especially with adults with cognitive impairment related to acquired brain injury. To describe development of the C-COGS scale and examine its construct validity. The C-COGS was administered to 42 participants with acquired brain injury after multidisciplinary goal planning. C-COGS scores were correlated with the Canadian Occupational Performance Measure (COPM) importance scores, and measures of therapeutic alliance, motivation, and global functioning to establish construct validity. The C-COGS scale has three subscales evaluating goal alignment, goal planning participation, and client-centredness of goals. The C-COGS subscale items demonstrated moderately significant correlations with scales measuring similar constructs. Findings provide preliminary evidence to support the construct validity of the C-COGS scale, which is intended to be used to evaluate and reflect on client-centred goal planning in clinical practice, and to highlight factors contributing to best practice rehabilitation.

  14. Sandia Generated Matrix Tool (SGMT) v. 1.0

    Energy Technology Data Exchange (ETDEWEB)

    2010-03-24

    Provides a tool with which to create and characterize a very large set of matrix-based visual analogy problems that have properties similar to Raven's Progressive Matrices (RPMs)™. The software uses the same underlying patterns found in RPMs to generate large numbers of unique matrix problems using parameters chosen by the researcher. Specifically, the software is designed so that researchers can choose the type, direction, and number of relations in a problem and then create any number of unique matrices that share the same underlying structure (e.g. changes in numerosity in a diagonal pattern) but have different surface features (e.g. shapes, colors). Raven's Progressive Matrices (RPMs)™ are a widely-used test for assessing intelligence and reasoning ability. Since the test is non-verbal, it can be applied to many different populations and has been used all over the world. However, there are relatively few matrices in the sets developed by Raven, which limits their use in experiments requiring large numbers of stimuli. This tool creates a matrix set in a systematic way that allows researchers to have a great deal of control over the underlying structure, surface features, and difficulty of the matrix problems while providing a large set of novel matrices with which to conduct experiments.
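The generation idea — a fixed underlying relational structure with randomized surface features — can be illustrated with a toy generator. This is a hedged sketch of the concept only, not the SGMT API; the rule, shape names, and function signature are all assumptions:

```python
import random

def generate_matrix_problem(seed=None):
    """Generate a 3x3 RPM-style problem in which the number of shapes in
    each cell follows the rule count = 1 + row + col, while surface
    features (shape, color) are randomized, so generated problems share
    structure but look different. A toy illustration, not the SGMT tool.
    """
    rng = random.Random(seed)
    shape = rng.choice(["circle", "square", "triangle"])
    color = rng.choice(["red", "blue", "green"])
    grid = [[{"shape": shape, "color": color, "count": 1 + r + c}
             for c in range(3)] for r in range(3)]
    answer = grid[2][2]   # bottom-right cell is the item to be inferred
    grid[2][2] = None     # removed from the presented matrix
    return grid, answer

grid, answer = generate_matrix_problem(seed=42)
print(answer["count"])  # → 5 (the rule forces the missing cell to 1 + 2 + 2)
```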

  15. R-matrix analysis code (RAC)

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Qi Huiquan

    1990-01-01

    A comprehensive R-matrix analysis code has been developed. It is based on the multichannel and multilevel R-matrix theory, runs on a VAX computer and is written in FORTRAN-77. With this code many kinds of experimental data for one nuclear system can be fitted simultaneously. Comparisons between the code RAC and the code EDA of LANL are made. The data show that both codes produced the same calculation results when one set of R-matrix parameters was used. The differential cross section of 10B(n,α)7Li for En = 0.4 MeV and the polarization of 16O(n,n)16O for En = 2.56 MeV are presented

  16. Time delay correlations in chaotic scattering and random matrix approach

    International Nuclear Information System (INIS)

    Lehmann, N.; Savin, D.V.; Sokolov, V.V.; Sommers, H.J.

    1994-01-01

    We study the correlations of the time delay in a model of chaotic resonance scattering based on the random matrix approach. Analytical formulae, valid for an arbitrary number of open channels and arbitrary coupling strength between resonances and channels, are obtained by the supersymmetry method. The time delay correlation function, though not a Lorentzian, is characterized, similarly to that of the scattering matrix, by the gap between the cloud of complex poles of the S-matrix and the real energy axis. 28 refs.; 4 figs

  17. Quasinormal-Mode Expansion of the Scattering Matrix

    Directory of Open Access Journals (Sweden)

    Filippo Alpeggiani

    2017-06-01

    It is well known that the quasinormal modes (or resonant states) of photonic structures can be associated with the poles of the scattering matrix of the system in the complex-frequency plane. In this work, the inverse problem, i.e., the reconstruction of the scattering matrix from the knowledge of the quasinormal modes, is addressed. We develop a general and scalable quasinormal-mode expansion of the scattering matrix, requiring only the complex eigenfrequencies and the far-field properties of the eigenmodes. The theory is validated by applying it to illustrative nanophotonic systems with multiple overlapping electromagnetic modes. The examples demonstrate that our theory provides an accurate first-principles prediction of the scattering properties, without the need for postulating ad hoc nonresonant channels.

  18. The Svalbard study 1988-89: a unique setting for validation of self-reported alcohol consumption.

    Science.gov (United States)

    Høyer, G; Nilssen, O; Brenn, T; Schirmer, H

    1995-04-01

    The Norwegian island of Spitzbergen, Svalbard offers a unique setting for validation studies on self-reported alcohol consumption. No counterfeit production or illegal import exists, thus making complete registration of all sources of alcohol possible. In this study we recorded sales from all agencies selling alcohol on Svalbard over a 2-month period in 1988. During the same period all adults living permanently on Svalbard were invited to take part in a health screening. As part of the screening a self-administered questionnaire on alcohol consumption was introduced to the participants. We found that the self-reported volume accounted for approximately 40 percent of the sales volume. Because of the unique situation applying to Svalbard, the estimate made in this study is believed to be more reliable compared to other studies using sales volume to validate self-reports.

  19. Recognition of Risk Information - Adaptation of J. Bertin's Orderable Matrix for social communication

    Science.gov (United States)

    Ishida, Keiichi

    2018-05-01

    This paper aims to show the capability of Jacques Bertin's Orderable Matrix, a visualization method for analyzing and recognizing data. The matrix displays data by replacing numbers with visual elements. As an example, using a set of data on natural hazard rankings for selected metropolitan cities around the world, this paper describes how the Orderable Matrix handles the data set and reveals its characteristic factors. Beyond providing a simple risk ranking of cities, the Orderable Matrix shows how the cities differ from one another in their exposure to danger. Furthermore, we will see that the data visualized with the Orderable Matrix allows us to grasp the characteristics of the data set comprehensively and at a glance.

  20. The Geriatric ICF Core Set reflecting health-related problems in community-living older adults aged 75 years and older without dementia: development and validation.

    Science.gov (United States)

    Spoorenberg, Sophie L W; Reijneveld, Sijmen A; Middel, Berrie; Uittenbroek, Ronald J; Kremer, Hubertus P H; Wynia, Klaske

    2015-01-01

    The aim of the present study was to develop a valid Geriatric ICF Core Set reflecting relevant health-related problems of community-living older adults without dementia. A Delphi study was performed in order to reach consensus (≥70% agreement) on second-level categories from the International Classification of Functioning, Disability and Health (ICF). The Delphi panel comprised 41 older adults, medical and non-medical experts. Content validity of the set was tested in a cross-sectional study including 267 older adults identified as frail or having complex care needs. Consensus was reached for 30 ICF categories in the Delphi study (fourteen Body functions, ten Activities and Participation and six Environmental Factors categories). Content validity of the set was high: the prevalence of all the problems was >10%, except for d530 Toileting. The most frequently reported problems were b710 Mobility of joint functions (70%), b152 Emotional functions (65%) and b455 Exercise tolerance functions (62%). No categories had missing values. The final Geriatric ICF Core Set is a comprehensive and valid set of 29 ICF categories, reflecting the most relevant health-related problems among community-living older adults without dementia. This Core Set may contribute to optimal care provision and support of the older population. Implications for Rehabilitation The Geriatric ICF Core Set may provide a practical tool for gaining an understanding of the relevant health-related problems of community-living older adults without dementia. The Geriatric ICF Core Set may be used in primary care practice as an assessment tool in order to tailor care and support to the needs of older adults. The Geriatric ICF Core Set may be suitable for use in multidisciplinary teams in integrated care settings, since it is based on a broad range of problems in functioning. Professionals should pay special attention to health problems related to mobility and emotional functioning since these are the most

  1. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    Science.gov (United States)

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

    In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix factorization with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek the graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.
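A fixed-graph version of this kind of objective (graph-regularized NMF in the style of MMF, without the adaptive graph-learning step that distinguishes LMFAGR) can be sketched with standard multiplicative updates. The matrix shapes, test data, and parameter values below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gnmf(X, W, k=2, lam=0.01, n_iter=300, seed=0):
    """Graph-regularized NMF: X ≈ U @ V.T with a Tr(V.T @ L @ V) smoothness
    penalty on the coefficients, where L = D - W is the graph Laplacian.
    A minimal fixed-graph sketch; the paper's method also learns W."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    D = np.diag(W.sum(axis=1))
    U = rng.random((n, k)) + 0.1
    V = rng.random((m, k)) + 0.1
    eps = 1e-9
    for _ in range(n_iter):
        # Standard multiplicative updates for the graph-regularized objective.
        U *= (X @ V) / (U @ V.T @ V + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ U.T @ U + lam * D @ V + eps)
    return U, V

X = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 5.0))  # rank-1 test matrix
W = np.zeros((4, 4))
W[range(3), range(1, 4)] = 1
W += W.T                                                # chain-graph affinity
U, V = gnmf(X, W)
rel_err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
```

On this rank-1 test matrix the relative reconstruction error should be small after a few hundred iterations.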

  2. Quality data validation: Comprehensive approach to environmental data validation

    International Nuclear Information System (INIS)

    Matejka, L.A. Jr.

    1993-01-01

Environmental data validation consists of an assessment of three major areas: analytical method validation; field procedures and documentation review; and evaluation of the level of achievement of data quality objectives, based in part on analysis of PARCC parameters and the expected applications of the data. A program using a matrix to associate required levels of validation effort and analytical levels with the applications of these environmental data was developed in conjunction with DOE-ID guidance documents to implement actions under the Federal Facilities Agreement and Consent Order in effect at the Idaho National Engineering Laboratory. This was an effort to bring consistent quality to the INEL-wide Environmental Restoration Program and database in an efficient and cost-effective manner. This program, documenting all phases of the review process, is described here.

  3. Validating hierarchical verbal autopsy expert algorithms in a large data set with known causes of death.

    Science.gov (United States)

    Kalter, Henry D; Perin, Jamie; Black, Robert E

    2016-06-01

    Physician assessment historically has been the most common method of analyzing verbal autopsy (VA) data. Recently, the World Health Organization endorsed two automated methods, Tariff 2.0 and InterVA-4, which promise greater objectivity and lower cost. A disadvantage of the Tariff method is that it requires a training data set from a prior validation study, while InterVA relies on clinically specified conditional probabilities. We undertook to validate the hierarchical expert algorithm analysis of VA data, an automated, intuitive, deterministic method that does not require a training data set. Using Population Health Metrics Research Consortium study hospital source data, we compared the primary causes of 1629 neonatal and 1456 1-59 month-old child deaths from VA expert algorithms arranged in a hierarchy to their reference standard causes. The expert algorithms were held constant, while five prior and one new "compromise" neonatal hierarchy, and three former child hierarchies were tested. For each comparison, the reference standard data were resampled 1000 times within the range of cause-specific mortality fractions (CSMF) for one of three approximated community scenarios in the 2013 WHO global causes of death, plus one random mortality cause proportions scenario. We utilized CSMF accuracy to assess overall population-level validity, and the absolute difference between VA and reference standard CSMFs to examine particular causes. Chance-corrected concordance (CCC) and Cohen's kappa were used to evaluate individual-level cause assignment. Overall CSMF accuracy for the best-performing expert algorithm hierarchy was 0.80 (range 0.57-0.96) for neonatal deaths and 0.76 (0.50-0.97) for child deaths. Performance for particular causes of death varied, with fairly flat estimated CSMF over a range of reference values for several causes. Performance at the individual diagnosis level was also less favorable than that for overall CSMF (neonatal: best CCC = 0.23, range 0
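The population-level metric used above, CSMF accuracy, has a simple closed form: one minus the summed absolute errors between true and predicted cause-specific mortality fractions, normalized by 2(1 − min true CSMF). A minimal sketch of that computation; the fractions below are invented for illustration, not the study's data:

```python
# Illustrative CSMF accuracy computation (Murray et al.'s definition).
# The cause fractions here are hypothetical, not from the validation study.

def csmf_accuracy(true_csmf, pred_csmf):
    """CSMF accuracy = 1 - sum|true - pred| / (2 * (1 - min(true)))."""
    abs_err = sum(abs(t - p) for t, p in zip(true_csmf, pred_csmf))
    return 1.0 - abs_err / (2.0 * (1.0 - min(true_csmf)))

# Hypothetical cause-specific mortality fractions over four causes.
true_csmf = [0.40, 0.30, 0.20, 0.10]
pred_csmf = [0.35, 0.30, 0.25, 0.10]

acc = csmf_accuracy(true_csmf, pred_csmf)  # a perfect prediction gives 1.0
```

The normalization by the worst possible error makes 1.0 a perfect population-level match and 0.0 the worst case, which is why values such as the 0.80 and 0.76 reported above are directly comparable across scenarios.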

  4. Teaching the Extracellular Matrix and Introducing Online Databases within a Multidisciplinary Course with i-Cell-MATRIX: A Student-Centered Approach

    Science.gov (United States)

    Sousa, Joao Carlos; Costa, Manuel Joao; Palha, Joana Almeida

    2010-01-01

The biochemistry and molecular biology of the extracellular matrix (ECM) is difficult to convey to students in a classroom setting in ways that capture their interest. Students' understanding of the matrix's roles in physiological and pathological conditions will presumably be hampered by insufficient knowledge of its molecular structure.…

  5. European validation of The Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis from the perspective of patients with osteoarthritis of the knee or hip.

    Science.gov (United States)

    Weigl, Martin; Wild, Heike

    2017-09-15

To validate the International Classification of Functioning, Disability and Health Comprehensive Core Set for Osteoarthritis from the patient perspective in Europe. This multicenter cross-sectional study involved 375 patients with knee or hip osteoarthritis. Trained health professionals completed the Comprehensive Core Set, and patients completed the Short-Form 36 questionnaire. Content validity was evaluated by calculating the prevalences of impairments in body functions and structures, limitations in activities and participation, and environmental factors that were either barriers or facilitators. Convergent construct validity was evaluated by correlating the International Classification of Functioning, Disability and Health categories with the Short-Form 36 Physical Component Score and the SF-36 Mental Component Score in a subgroup of 259 patients. The prevalences of all body function, body structure and activities and participation categories were >40%, >32% and >20%, respectively, and all environmental factors were relevant for >16% of patients. Few categories showed relevant differences between knee and hip osteoarthritis. All body function categories and all but two activities and participation categories showed significant correlations with the Physical Component Score. Body functions from the ICF chapter Mental Functions showed higher correlations with the Mental Component Score than with the Physical Component Score. This study supports the validity of the International Classification of Functioning, Disability and Health Comprehensive Core Set for Osteoarthritis. Implications for Rehabilitation Comprehensive International Classification of Functioning, Disability and Health Core Sets were developed as practical tools for application in multidisciplinary assessments. The validity of the Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis in this study supports its application in European patients with

  6. The ToMenovela – A photograph-based stimulus set for the study of social cognition with high ecological validity

    Directory of Open Access Journals (Sweden)

    Maike C. Herbort

    2016-12-01

Full Text Available We present the ToMenovela, a stimulus set that has been developed to provide normatively rated socio-emotional stimuli showing varying numbers of characters in emotionally laden interactions, for experimental investigations of (i) cognitive and (ii) affective ToM, (iii) emotional reactivity, and (iv) complex emotion judgment with respect to Ekman's basic emotions (happiness, sadness, anger, fear, surprise and disgust; Ekman & Friesen, 1975). Stimuli were generated with a focus on ecological validity and consist of 190 scenes depicting daily-life situations. Two or more of eight main characters with distinct biographies and personalities are depicted in each scene picture. To obtain an initial evaluation of the stimulus set and to pave the way for future studies in clinical populations, normative data on each stimulus of the set were obtained from a sample of 61 neurologically and psychiatrically healthy participants (31 female, 30 male; mean age 26.74 ± 5.84 years), including a visual analog scale rating of Ekman's basic emotions (happiness, sadness, anger, fear, surprise and disgust) and free-text descriptions of the content. The ToMenovela is being developed to provide standardized material of social scenes that is available to researchers in the study of social cognition. It should facilitate experimental control while keeping ecological validity high.

  7. A generic validation methodology and its application to a set of multi-axial creep damage constitutive equations

    International Nuclear Information System (INIS)

    Xu Qiang

    2005-01-01

A generic validation methodology for a set of multi-axial creep damage constitutive equations is proposed, and its use is illustrated with 0.5Cr0.5Mo0.25V ferritic steel, which exhibits brittle, intergranular rupture. The objective of this research is to develop a methodology for systematically assessing the quality of a set of multi-axial creep damage constitutive equations in order to ensure its general applicability. This work adopted a total quality assurance approach, expanded into a four-stage procedure (Theories and Fundamentals, Parameter Identification, Proportional Load, and Non-proportional Load). Its use is illustrated with 0.5Cr0.5Mo0.25V ferritic steel; this material was chosen for its industrial importance, the popular use of KRH-type constitutive equations, and the available qualitative experimental data, including the damage distribution from notched-bar tests. The validation exercise clearly revealed the deficiencies in the KRH formulation (in terms of the mathematics and physics of damage mechanics) and its inability to predict creep deformation accurately. Consequently, its use should be treated with caution, which is particularly important given its wide use as indicated in the literature. This work contributes to understanding the rationale for the formulation and to the quality assurance of sets of constitutive equations in creep damage mechanics, as well as in general damage mechanics. (authors)

  8. Efficiency criterion for teleportation via channel matrix, measurement matrix and collapsed matrix

    Directory of Open Access Journals (Sweden)

    Xin-Wei Zha

Full Text Available In this paper, three kinds of coefficient matrices (channel matrix, measurement matrix, collapsed matrix) associated with the pure state for teleportation are presented, and the general relation among channel matrix, measurement matrix and collapsed matrix is obtained. In addition, a criterion for judging whether a state can be teleported successfully is given, depending on the relation between the number of parameters of an unknown state and the rank of the collapsed matrix. Keywords: Channel matrix, Measurement matrix, Collapsed matrix, Teleportation

  9. Spanish translation and cross-language validation of a sleep habits questionnaire for use in clinical and research settings.

    Science.gov (United States)

    Baldwin, Carol M; Choi, Myunghan; McClain, Darya Bonds; Celaya, Alma; Quan, Stuart F

    2012-04-15

To translate, back-translate and cross-language validate (English/Spanish) the Sleep Heart Health Study Sleep Habits Questionnaire for use with Spanish speakers in clinical and research settings. Following rigorous translation and back-translation, this cross-sectional cross-language validation study recruited bilingual participants from academic, clinic, and community-based settings (N = 50; 52% women; mean age 38.8 ± 12 years; 90% of Mexican heritage). Participants completed English and Spanish versions of the Sleep Habits Questionnaire, the Epworth Sleepiness Scale, and the Acculturation Rating Scale for Mexican Americans II one week apart in randomized order. Psychometric properties were assessed, including internal consistency, convergent validity, scale equivalence, language version intercorrelations, and exploratory factor analysis using PASW (Version 18) software. Grade-level readability of the sleep measure was evaluated. All sleep categories (duration, snoring, apnea, insomnia symptoms, other sleep symptoms, sleep disruptors, restless legs syndrome) showed Cronbach's α, Spearman-Brown coefficients and intercorrelations ≥ 0.700, suggesting robust internal consistency, correlation, and agreement between language versions. The Epworth correlated significantly with snoring, apnea, sleep symptoms, restless legs, and sleep disruptors on both versions, supporting convergent validity. Items loaded on four factors that accounted for 68% and 67% of the variance on the English and Spanish versions, respectively. The Spanish-language Sleep Habits Questionnaire demonstrates conceptual and content equivalency. It has appropriate measurement properties and should be useful for assessing sleep health in community-based clinics and intervention studies among Spanish-speaking Mexican Americans. Both language versions showed readability at the fifth-grade level. Further testing is needed with larger samples.
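The internal-consistency statistic reported above, Cronbach's α, is computed from item and total-score variances: α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch; the Likert scores below are invented, not the questionnaire's data:

```python
# Minimal Cronbach's alpha sketch. The item scores are hypothetical
# Likert responses for illustration, not data from the validation study.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(items):
    """items: one inner list of scores per questionnaire item."""
    k = len(items)
    total_scores = [sum(col) for col in zip(*items)]  # per-respondent totals
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(total_scores))

# Three items answered by five respondents (hypothetical scores).
items = [
    [3, 4, 5, 2, 4],
    [3, 5, 5, 2, 3],
    [4, 4, 5, 1, 4],
]
alpha = cronbach_alpha(items)
```

Values at or above the 0.700 threshold cited in the abstract are conventionally read as acceptable internal consistency.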

  10. The Metadistrict as the Territorial Strategy: From Set Theory and a Matrix Organization Model Hypothesis

    Directory of Open Access Journals (Sweden)

    Francesco Contò

    2012-06-01

Full Text Available The purpose of this proposal is to explore a new concept of 'Metadistrict' to be applied in a region of Southern Italy, Apulia, in order to analyze the impact that the activation of a special network between different sector chains and several integrated projects may have in revitalizing the local economy; an important role is assigned to the network of relationships and thus to social capital. The Metadistrict model stems from the Local Action Groups and the Integrated Projects of Food Chain frameworks. It may represent a crucial driver of the rural economy through the realization of sector circuits connected to the concept of multi-functionality in agriculture, that is, a Network of Territorial Multi-functionality. It was formalized by making use of set theory and a Matrix Organization Model. The adoption of the Metadistrict perspective as the territorial strategy may play a key role in revitalizing the primary sector, through the increase of economic and productive opportunities due to the implementation of a common and shared strategy and organization.

  11. Using digital photography in a clinical setting: a valid, accurate, and applicable method to assess food intake.

    Science.gov (United States)

    Winzer, Eva; Luger, Maria; Schindler, Karin

    2018-06-01

Regular monitoring of food intake is hardly integrated into clinical routine. Therefore, the aim was to examine the validity, accuracy, and applicability of an appropriate, quick and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method with a picture after the meal and the pre-postMeal method with a picture before and after the meal, and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, and additionally food waste, macronutrient composition, and energy content were examined. Tests of validity revealed stronger correlations for the photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) compared to the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and smaller overestimation and underestimation, and it accurately and precisely estimated portion sizes in all food items. Furthermore, the total food waste was 22% for lunch over the study period; the highest food waste was observed in salads and the lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable for monitoring food intake in a clinical setting, enabling quantitative and qualitative dietary assessment. Thus, nutritional care might be initiated earlier. This method might also be advantageous for the quantitative and qualitative evaluation of food waste, with a resulting reduction in costs.

  12. Whitby Mudstone, flow from matrix to fractures

    Science.gov (United States)

    Houben, Maartje; Hardebol, Nico; Barnhoorn, Auke; Boersma, Quinten; Peach, Colin; Bertotti, Giovanni; Drury, Martyn

    2016-04-01

Fluid flow from matrix to well in shales would be faster if we account for the duality of the permeable medium, considering a highly permeable fracture network together with a tight matrix. To investigate how long and how far a gas molecule would have to travel through the matrix until it reaches an open connected fracture, we investigated the permeability of the Whitby Mudstone (UK) matrix in combination with mapping the fracture network present in the current outcrops of the Whitby Mudstone at the Yorkshire coast. Matrix permeability was measured perpendicular to the bedding using a pressure step decay method on core samples, and permeability values are in the microdarcy range. The natural fracture network present in the pavement shows a connected network with dominant NS and EW strikes, where the NS fractures are the main fracture set with an orthogonal fracture set EW. Fracture spacing relations in the pavements show that the average distance to the nearest fracture varies between 7 cm (EW) and 14 cm (NS), and 90% of the matrix is within 30 cm of the nearest fracture. By making some assumptions, namely that the fracture network at depth is similar to what is exposed in the current pavements and open to flow, that the fracture network is at hydrostatic pressure at 3 km depth, that the overpressure between matrix and fractures is 10%, and that the matrix permeability perpendicular to the bedding is 0.1 microdarcy, we have calculated the time it takes a gas molecule to travel to the nearest fracture. These input values give travel times of up to 8 days for a distance of 14 cm. If the permeability is changed to 1 nanodarcy or 10 microdarcy, the travel times change to 2.2 years or 2 hours, respectively.
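The quoted travel times scale inversely with permeability (8 days at 0.1 µD becomes roughly 2 hours at 10 µD and 2.2 years at 1 nD), which is the behavior of a simple linear Darcy estimate t ≈ φμL²/(kΔP). A back-of-the-envelope sketch; the abstract does not give porosity or gas viscosity, so those values are assumptions and only the inverse-in-k scaling should be compared against the abstract:

```python
# Back-of-the-envelope Darcy estimate of matrix-to-fracture travel time.
# The abstract supplies k = 0.1 microdarcy, L = 14 cm, and 10% overpressure
# at 3 km (hydrostatic). Porosity (phi) and gas viscosity (mu) are NOT
# given there; the defaults below are assumed for illustration, so only
# the 1/k scaling of the result is meaningful.

MICRODARCY = 9.869e-19   # m^2 per microdarcy

def travel_time_s(L, k, phi=0.1, mu=2e-5, dP=0.1 * 1000 * 9.81 * 3000):
    """t ~ phi * mu * L^2 / (k * dP): linear Darcy front estimate."""
    return phi * mu * L**2 / (k * dP)

t_ref = travel_time_s(0.14, 0.1 * MICRODARCY)              # day-scale time
ratio_low = travel_time_s(0.14, 1e-3 * MICRODARCY) / t_ref  # 1 nanodarcy
ratio_high = t_ref / travel_time_s(0.14, 10 * MICRODARCY)   # 10 microdarcy
```

Both ratios are 100, matching the roughly hundredfold spread between the abstract's 2-hour, 8-day, and 2.2-year figures.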

  13. Fatigue and frictional heating in ceramic matrix composites

    DEFF Research Database (Denmark)

    Jacobsen, T.K.; Sørensen, B.F.; Brøndsted, P.

    1997-01-01

This paper describes an experimental technique for monitoring the damage evolution in ceramic matrix composites during cyclic testing. The damage is related to heat dissipation, which may be measured as radiated heat from the surface of the test specimen. In the present experimental set-up an iso… with a high spatial and temperature resolution, and changes in the heat dissipation can be measured almost instantaneously. The technique has been tested on uni-directional ceramic matrix composites. Experimental results are shown and the possibilities and the limitations of the technique are discussed.

  14. Rigidity percolation in dispersions with a structured viscoelastic matrix

    NARCIS (Netherlands)

    Wilbrink, M.W.L.; Michels, M.A.J.; Vellinga, W.P.; Meijer, H.E.H.

    2005-01-01

    This paper deals with rigidity percolation in composite materials consisting of a dispersion of mineral particles in a microstructured viscoelastic matrix. The viscoelastic matrix in this specific case is a hydrocarbon refinery residue. In a set of model random composites the mean interparticle

  15. Establishing the Reliability and Validity of a Computerized Assessment of Children's Working Memory for Use in Group Settings

    Science.gov (United States)

    St Clair-Thompson, Helen

    2014-01-01

The aim of the present study was to investigate the reliability and validity of a brief standardized assessment of children's working memory: "Lucid Recall." Although there are many established assessments of working memory, "Lucid Recall" is fully automated and can therefore be administered in a group setting. It is therefore…

  16. [Penile augmentation using acellular dermal matrix].

    Science.gov (United States)

    Zhang, Jin-ming; Cui, Yong-yan; Pan, Shu-juan; Liang, Wei-qiang; Chen, Xiao-xuan

    2004-11-01

    Penile enhancement was performed using acellular dermal matrix. Multiple layers of acellular dermal matrix were placed underneath the penile skin to enlarge its girth. Since March 2002, penile augmentation has been performed on 12 cases using acellular dermal matrix. Postoperatively all the patients had a 1.3-3.1 cm (2.6 cm in average) increase in penile girth in a flaccid state. The penis had normal appearance and feeling without contour deformities. All patients gained sexual ability 3 months after the operation. One had a delayed wound healing due to tight dressing, which was repaired with a scrotal skin flap. Penile enlargement by implantation of multiple layers of acellular dermal matrix was a safe and effective operation. This method can be performed in an outpatient ambulatory setting. The advantages of the acellular dermal matrix over the autogenous dermal fat grafts are elimination of donor site injury and scar and significant shortening of operation time.

  17. Reliability and Validity of Survey Instruments to Measure Work-Related Fatigue in the Emergency Medical Services Setting: A Systematic Review

    Science.gov (United States)

    2018-01-11

    Background: This study sought to systematically search the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. Methods: A systematic review study design wa...

  18. Three Interpretations of the Matrix Equation Ax = b

    Science.gov (United States)

    Larson, Christine; Zandieh, Michelle

    2013-01-01

    Many of the central ideas in an introductory undergraduate linear algebra course are closely tied to a set of interpretations of the matrix equation Ax = b (A is a matrix, x and b are vectors): linear combination interpretations, systems interpretations, and transformation interpretations. We consider graphic and symbolic representations for each,…
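The three interpretations named in the abstract can be made concrete with a tiny 2×2 system; the worked example below is ours, not the paper's:

```python
# The three interpretations of A x = b for a small 2x2 example
# (pure Python; an illustrative sketch, not taken from the paper).

A = [[2.0, 1.0],
     [1.0, 3.0]]
x = [1.0, 2.0]

# 1. Transformation view: the matrix A maps the vector x to b.
b = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

# 2. Linear-combination view: b is x1 * (column 1) + x2 * (column 2).
b_combo = [x[0] * A[i][0] + x[1] * A[i][1] for i in range(2)]

# 3. Systems view: solving 2x + y = 4, x + 3y = 7 recovers x (Cramer's rule).
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x_solved = [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]
```

All three views describe the same equation; which one a student reaches for depends on whether A is seen as a map, its columns as building blocks, or its rows as constraints.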

  19. Convex nonnegative matrix factorization with manifold regularization.

    Science.gov (United States)

    Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong

    2015-03-01

    Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
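For context, the classical multiplicative-update NMF of Lee and Seung is the baseline that CNMF and the proposed GCNMF extend; the sketch below shows that baseline only (not the paper's GCNMF updates or its graph term), with a random matrix standing in for real data:

```python
import numpy as np

# Classical multiplicative-update NMF (Lee & Seung): V ~ W @ H with
# W, H >= 0. A baseline sketch for context, not the paper's GCNMF.
rng = np.random.default_rng(0)
V = rng.random((20, 12))            # nonnegative data matrix (synthetic)
r, eps = 3, 1e-9                    # factorization rank, division safeguard

W = rng.random((20, r))
H = rng.random((r, 12))

err_before = np.linalg.norm(V - W @ H)
for _ in range(200):
    # Multiplicative updates keep W and H nonnegative by construction.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err_after = np.linalg.norm(V - W @ H)
```

GCNMF modifies this picture by restricting the basis to convex combinations of data points (the CNMF idea) and adding a graph regularization term so that nearby points keep nearby low-dimensional representations.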

  20. KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators

    KAUST Repository

    Abdelfattah, Ahmad

    2016-05-11

KBLAS is an open-source, high-performance library that provides optimized kernels for a subset of Level 2 BLAS functionalities on CUDA-enabled GPUs. Since performance of dense matrix-vector multiplication is hindered by the overhead of memory accesses, a double-buffering optimization technique is employed to overlap data motion with computation. After identifying a proper set of tuning parameters, KBLAS efficiently runs on various GPU architectures while avoiding code rewriting and retaining compliance with the standard BLAS API. Another optimization technique ensures coalesced memory access when dealing with submatrices, especially for high-level dense linear algebra algorithms. All KBLAS kernels have been extended to a multi-GPU environment, which requires the introduction of new APIs. Considering general matrices, KBLAS is very competitive with existing state-of-the-art kernels and provides a smoother performance across a wide range of matrix dimensions. Considering symmetric and Hermitian matrices, KBLAS outperforms existing state-of-the-art implementations on all matrix sizes and achieves asymptotically up to 50% and 60% speedup against the best competitor on single-GPU and multi-GPU systems, respectively. Performance results also validate our performance model. A subset of KBLAS high-performance kernels has been integrated into NVIDIA's standard BLAS implementation (cuBLAS) for larger dissemination, starting from version 6.0. © 2016 ACM.

  1. Joint-2D-SL0 Algorithm for Joint Sparse Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2017-01-01

Full Text Available Sparse matrix reconstruction has wide applications such as DOA estimation and STAP. However, its performance is usually restricted by the grid mismatch problem. In this paper, we revise the sparse matrix reconstruction model and propose a joint sparse matrix reconstruction model based on a first-order Taylor expansion, which can overcome the grid mismatch problem. We then put forward the Joint-2D-SL0 algorithm, which solves the joint sparse matrix reconstruction problem efficiently. Compared with the Kronecker compressive sensing method, our proposed method has a higher computational efficiency and acceptable reconstruction accuracy. Finally, simulation results validate the superiority of the proposed method.

  2. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
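The classical Gauss-Seidel iteration that MSM generalizes is easy to state: each sweep solves for one unknown at a time, immediately reusing the freshest values. A pure-Python sketch of that classical special case on a small diagonally dominant system (not the paper's MSM itself):

```python
# Classical Gauss-Seidel iteration for A x = b, the linear-system method
# that the Matrix Splitting Method generalizes (illustrative sketch only).

def gauss_seidel(A, b, iters=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Use the freshest values of x within the same sweep.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant system, so Gauss-Seidel converges; solution (2, 1, 3).
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 5.0]]
b = [9.0, 8.0, 16.0]
x = gauss_seidel(A, b)
```

In splitting terms, each sweep inverts the lower-triangular part of A against the remainder; MSM replaces that fixed triangular split with a more general splitting suited to composite objectives.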

  3. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.

  4. All-at-once Optimization for Coupled Matrix and Tensor Factorizations

    DEFF Research Database (Denmark)

    Evrim, Acar Ataman; Kolda, Tamara G.; Dunlavy, Daniel M.

    2011-01-01

…e.g., the person by person social network matrix or the restaurant by category matrix, and higher-order tensors, e.g., the "ratings" tensor of the form restaurant by meal by person. In this paper, we are particularly interested in fusing data sets with the goal of capturing their underlying latent structures. We formulate this problem as a coupled matrix and tensor factorization (CMTF) problem, where heterogeneous data sets are modeled by fitting outer-product models to higher-order tensors and matrices in a coupled manner. Unlike traditional approaches solving this problem using alternating algorithms, we propose an all-at-once optimization approach called CMTF-OPT (CMTF-OPTimization), which is a gradient-based optimization approach for joint analysis of matrices and higher-order tensors. We also extend the algorithm to handle coupled incomplete data sets. Using numerical experiments, we demonstrate…

  5. An Uncertainty Structure Matrix for Models and Simulations

    Science.gov (United States)

    Green, Lawrence L.; Blattnig, Steve R.; Hemsch, Michael J.; Luckring, James M.; Tripathi, Ram K.

    2008-01-01

    Software that is used for aerospace flight control and to display information to pilots and crew is expected to be correct and credible at all times. This type of software is typically developed under strict management processes, which are intended to reduce defects in the software product. However, modeling and simulation (M&S) software may exhibit varying degrees of correctness and credibility, depending on a large and complex set of factors. These factors include its intended use, the known physics and numerical approximations within the M&S, and the referent data set against which the M&S correctness is compared. The correctness and credibility of an M&S effort is closely correlated to the uncertainty management (UM) practices that are applied to the M&S effort. This paper describes an uncertainty structure matrix for M&S, which provides a set of objective descriptions for the possible states of UM practices within a given M&S effort. The columns in the uncertainty structure matrix contain UM elements or practices that are common across most M&S efforts, and the rows describe the potential levels of achievement in each of the elements. A practitioner can quickly look at the matrix to determine where an M&S effort falls based on a common set of UM practices that are described in absolute terms that can be applied to virtually any M&S effort. The matrix can also be used to plan those steps and resources that would be needed to improve the UM practices for a given M&S effort.

  6. Improved diagnostic accuracy of Alzheimer's disease by combining regional cortical thickness and default mode network functional connectivity: Validated in the Alzheimer's disease neuroimaging initiative set

    International Nuclear Information System (INIS)

    Park, Ji Eun; Park, Bum Woo; Kim, Sang Joon; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Jung; Oh, Joo Young; Shim, Woo Hyun; Lee, Jae Hong; Roh, Jee Hoon

    2017-01-01

To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity, and to validate this model's diagnostic accuracy in a validation set. Data from 98 subjects were retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network were extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions was more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Combining functional information with the CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease

  7. Chern-Simons couplings for dielectric F-strings in matrix string theory

    International Nuclear Information System (INIS)

    Brecher, Dominic; Janssen, Bert; Lozano, Yolanda

    2002-01-01

    We compute the non-abelian couplings in the Chern-Simons action for a set of coinciding fundamental strings in both the type IIA and type IIB Matrix string theories. Starting from Matrix theory in a weakly curved background, we construct the linear couplings of closed string fields to type IIA Matrix strings. Further dualities give a type IIB Matrix string theory and a type IIA theory of Matrix strings with winding. (Abstract Copyright[2002], Wiley Periodicals, Inc.)

  8. Interval-valued intuitionistic fuzzy matrix games based on Archimedean t-conorm and t-norm

    Science.gov (United States)

    Xia, Meimei

    2018-04-01

Fuzzy game theory has been applied in many decision-making problems. The matrix game with interval-valued intuitionistic fuzzy numbers (IVIFNs) is investigated based on Archimedean t-conorm and t-norm. The existing matrix games with IVIFNs are all based on Algebraic t-conorm and t-norm, which are special cases of Archimedean t-conorm and t-norm. In this paper, the intuitionistic fuzzy aggregation operators based on Archimedean t-conorm and t-norm are employed to aggregate the payoffs of players. To derive the solution of the matrix game with IVIFNs, several mathematical programming models are developed based on Archimedean t-conorm and t-norm. The proposed models can be transformed into a pair of primal-dual linear programming models, based on which the solution of the matrix game with IVIFNs is obtained. It is proved that the theorems valid in the existing matrix game with IVIFNs remain true when the general aggregation operator is used in the proposed matrix game with IVIFNs. The proposed method is an extension of the existing ones and can provide more choices for players. An example is given to illustrate the validity and the applicability of the proposed method.
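The primal-dual linear programming reduction underlying such methods is easiest to see in the crisp special case: a zero-sum matrix game with ordinary real payoffs. A minimal sketch (this is the classic LP formulation, not the IVIFN extension of the paper):

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Optimal mixed strategy and game value for the row player of a
    zero-sum matrix game, via the standard LP reduction."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    shift = min(0.0, A.min()) - 1.0   # shift payoffs so the value is > 0
    B = A - shift
    # With u = x / v: minimize sum(u) subject to B^T u >= 1, u >= 0,
    # then v = 1 / sum(u) and x = u * v.
    res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m)
    u = res.x
    value = 1.0 / u.sum()
    return u * value, value + shift

# Matching pennies: value 0, uniform optimal strategy
x, v = solve_matrix_game([[1, -1], [-1, 1]])
```

The dual of this LP yields the column player's strategy, which is the pairing the abstract refers to as primal-dual models.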

  9. The Matrix exponential, Dynamic Systems and Control

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

The matrix exponential can be found in various connections in analysis and control of dynamic systems. In this short note we are going to list a few examples. The matrix exponential usually pops up in connection to the sampling process, whether in a deterministic or a stochastic setting...... or it is a tool for determining a Gramian matrix. This note is intended to be used in connection with the teaching of the course in Stochastic Adaptive Control (02421) given at Informatics and Mathematical Modelling (IMM), The Technical University of Denmark. This work is a result of a study of the literature....
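One of the connections mentioned above, the sampling process, can be sketched concretely: zero-order-hold discretization of a continuous-time linear system uses the matrix exponential of an augmented matrix. The double-integrator system below is an illustrative example, not taken from the note itself:

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time system xdot = A x + B u, sampled with period T under
# zero-order hold. Augmented-matrix trick:
#   expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]]
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
T = 0.1

n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B
E = expm(M * T)
Ad, Bd = E[:n, :n], E[:n, n:]   # discrete-time system matrices
```

For the double integrator this reproduces the familiar closed form Ad = [[1, T], [0, 1]] and Bd = [[T²/2], [T]].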

  10. Generalized canonical analysis based on optimizing matrix correlations and a relation with IDIOSCAL

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Cléroux, R.; Ten Berge, Jos M.F.

    1994-01-01

    Carroll's method for generalized canonical analysis of two or more sets of variables is shown to optimize the sum of squared inner-product matrix correlations between a consensus matrix and matrices with canonical variates for each set of variables. In addition, the method that analogously optimizes

  11. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
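The contrast drawn above, eigendecomposition of the covariance matrix versus eigendecomposition of a pairwise dissimilarity matrix, can be sketched on synthetic two-class data (the data, dimensions and the use of a squared-Euclidean dissimilarity with double centring are illustrative assumptions, not the paper's TSFS data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two synthetic classes of "spectra": rows = samples, columns = channels
X = np.vstack([rng.normal(0.0, 1.0, (10, 50)),
               rng.normal(0.5, 1.0, (10, 50))])

# Conventional route: eigendecomposition of the covariance matrix (PCA)
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
cov_vals = np.linalg.eigvalsh(cov)

# Alternative route: eigendecomposition of a double-centred pairwise
# squared-Euclidean dissimilarity matrix (classical-MDS-style)
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
J = np.eye(len(X)) - np.ones((len(X), len(X))) / len(X)
Bmat = -0.5 * J @ D2 @ J
dis_vals, dis_vecs = np.linalg.eigh(Bmat)
scores = dis_vecs[:, -1] * np.sqrt(dis_vals[-1])  # leading coordinate
```

The dissimilarity route operates on sample-pair relations rather than global variance, which is why it can separate intrinsically different groups that pooled-covariance PCA blurs together.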

  12. A Lexicographic Method for Matrix Games with Payoffs of Triangular Intuitionistic Fuzzy Numbers

    Directory of Open Access Journals (Sweden)

    Jiang-Xia Nan

    2010-09-01

Full Text Available The intuitionistic fuzzy set (IF-set) has not been applied to matrix game problems yet since it was introduced by K. T. Atanassov. The aim of this paper is to develop a methodology for solving matrix games with payoffs of triangular intuitionistic fuzzy numbers (TIFNs). Firstly the concept of TIFNs and their arithmetic operations and cut sets are introduced as well as the ranking order relations. Secondly the concept of solutions for matrix games with payoffs of TIFNs is defined. A lexicographic methodology is developed to determine the solutions of matrix games with payoffs of TIFNs for both players through solving a pair of bi-objective linear programming models derived from two new auxiliary intuitionistic fuzzy programming models. The proposed method is illustrated with a numerical example.

  13. Open-Switch Fault Diagnosis and Fault Tolerant for Matrix Converter with Finite Control Set-Model Predictive Control

    DEFF Research Database (Denmark)

    Peng, Tao; Dan, Hanbing; Yang, Jian

    2016-01-01

    To improve the reliability of the matrix converter (MC), a fault diagnosis method to identify single open-switch fault is proposed in this paper. The introduced fault diagnosis method is based on finite control set-model predictive control (FCS-MPC), which employs a time-discrete model of the MC...... topology and a cost function to select the best switching state for the next sampling period. The proposed fault diagnosis method is realized by monitoring the load currents and judging the switching state to locate the faulty switch. Compared to the conventional modulation strategies such as carrier......-based modulation method, indirect space vector modulation and optimum Alesina-Venturini, the FCS-MPC has known and unchanged switching state in a sampling period. It is simpler to diagnose the exact location of the open switch in MC with FCS-MPC. To achieve better quality of the output current under single open...

  14. P-matrix in the quark compound bag model

    International Nuclear Information System (INIS)

    Kalashnikova, Yu.S.; Narodetskij, I.M.; Veselov, A.I.

    1983-01-01

Meaning of the P-matrix analysis is discussed within the quark compound bag (QCB) model. The most general version of this model is considered, including an arbitrary coupling between quark and hadronic channels and an arbitrary smearing of the surface interaction region. The behaviour of P-matrix poles as functions of the matching radius r0 is discussed. In conclusion, the parameters of an illustrative set of NN potentials obtained from a P-matrix fit to experimental data are presented

  15. The nuclear higher-order structure defined by the set of topological relationships between DNA and the nuclear matrix is species-specific in hepatocytes.

    Science.gov (United States)

    Silva-Santiago, Evangelina; Pardo, Juan Pablo; Hernández-Muñoz, Rolando; Aranda-Anzaldo, Armando

    2017-01-15

    During the interphase the nuclear DNA of metazoan cells is organized in supercoiled loops anchored to constituents of a nuclear substructure or compartment known as the nuclear matrix. The stable interactions between DNA and the nuclear matrix (NM) correspond to a set of topological relationships that define a nuclear higher-order structure (NHOS). Current evidence suggests that the NHOS is cell-type-specific. Biophysical evidence and theoretical models suggest that thermodynamic and structural constraints drive the actualization of DNA-NM interactions. However, if the topological relationships between DNA and the NM were the subject of any biological constraint with functional significance then they must be adaptive and thus be positively selected by natural selection and they should be reasonably conserved, at least within closely related species. We carried out a coarse-grained, comparative evaluation of the DNA-NM topological relationships in primary hepatocytes from two closely related mammals: rat and mouse, by determining the relative position to the NM of a limited set of target sequences corresponding to highly-conserved genomic regions that also represent a sample of distinct chromosome territories within the interphase nucleus. Our results indicate that the pattern of topological relationships between DNA and the NM is not conserved between the hepatocytes of the two closely related species, suggesting that the NHOS, like the karyotype, is species-specific. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Discriminant validity of well-being measures.

    Science.gov (United States)

    Lucas, R E; Diener, E; Suh, E

    1996-09-01

    The convergent and discriminant validities of well-being concepts were examined using multitrait-multimethod matrix analyses (D. T. Campbell & D. W. Fiske, 1959) on 3 sets of data. In Study 1, participants completed measures of life satisfaction, positive affect, negative affect, self-esteem, and optimism on 2 occasions 4 weeks apart and also obtained 3 informant ratings. In Study 2, participants completed each of the 5 measures on 2 occasions 2 years apart and collected informant reports at Time 2. In Study 3, participants completed 2 different scales for each of the 5 constructs. Analyses showed that (a) life satisfaction is discriminable from positive and negative affect, (b) positive affect is discriminable from negative affect, (c) life satisfaction is discriminable from optimism and self-esteem, and (d) optimism is separable from trait measures of negative affect.

  17. Update of Standard Practices for New Method Validation in Forensic Toxicology.

    Science.gov (United States)

    Wille, Sarah M R; Coucke, Wim; De Baere, Thierry; Peters, Frank T

    2017-01-01

International agreement concerning validation guidelines is important to obtain quality forensic bioanalytical research and routine applications, as it all starts with the reporting of reliable analytical data. Standards for fundamental validation parameters are provided in guidelines such as those from the US Food and Drug Administration (FDA), the European Medicines Agency (EMA), the German-speaking Gesellschaft für Toxikologie und Forensische Chemie (GTFCh) and the Scientific Working Group of Forensic Toxicology (SWGTOX). These validation parameters include selectivity, matrix effects, method limits, calibration, accuracy and stability, as well as other parameters such as carryover, dilution integrity and incurred sample reanalysis. It is, however, not easy for laboratories to implement these guidelines into practice, as these international guidelines remain nonbinding protocols that depend on the applied analytical technique and need to be updated according to the analyst's method requirements and the application type. In this manuscript, a review of the current guidelines and literature concerning bioanalytical validation parameters in a forensic context is given and discussed. In addition, suggestions for the experimental set-up, the pros and cons of statistical approaches and adequate acceptance criteria for the validation of bioanalytical applications are given. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  18. Examination of the MMPI-2 restructured form (MMPI-2-RF) validity scales in civil forensic settings: findings from simulation and known group samples.

    Science.gov (United States)

    Wygant, Dustin B; Ben-Porath, Yossef S; Arbisi, Paul A; Berry, David T R; Freeman, David B; Heilbronner, Robert L

    2009-11-01

    The current study examined the effectiveness of the MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath and Tellegen, 2008) over-reporting indicators in civil forensic settings. The MMPI-2-RF includes three revised MMPI-2 over-reporting validity scales and a new scale to detect over-reported somatic complaints. Participants dissimulated medical and neuropsychological complaints in two simulation samples, and a known-groups sample used symptom validity tests as a response bias criterion. Results indicated large effect sizes for the MMPI-2-RF validity scales, including a Cohen's d of .90 for Fs in a head injury simulation sample, 2.31 for FBS-r, 2.01 for F-r, and 1.97 for Fs in a medical simulation sample, and 1.45 for FBS-r and 1.30 for F-r in identifying poor effort on SVTs. Classification results indicated good sensitivity and specificity for the scales across the samples. This study indicates that the MMPI-2-RF over-reporting validity scales are effective at detecting symptom over-reporting in civil forensic settings.
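The effect sizes reported above are Cohen's d values. For reference, a pooled-standard-deviation Cohen's d can be computed as follows (the sample data are illustrative, not the study's MMPI-2-RF scores):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d between two independent samples, using the
    pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Illustrative: two small samples whose means differ by 3 pooled SDs
d = cohens_d([4, 5, 6], [1, 2, 3])
```

Values around 0.8 and above are conventionally read as large effects, which is why the d values of 0.90 to 2.31 quoted in the abstract indicate strong separation between honest and over-reporting groups.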

  19. Time-oriented experimental design method to optimize hydrophilic matrix formulations with gelation kinetics and drug release profiles.

    Science.gov (United States)

    Shin, Sangmun; Choi, Du Hyung; Truong, Nguyen Khoa Viet; Kim, Nam Ah; Chu, Kyung Rok; Jeong, Seong Hoon

    2011-04-04

    A new experimental design methodology was developed by integrating the response surface methodology and the time series modeling. The major purposes were to identify significant factors in determining swelling and release rate from matrix tablets and their relative factor levels for optimizing the experimental responses. Properties of tablet swelling and drug release were assessed with ten factors and two default factors, a hydrophilic model drug (terazosin) and magnesium stearate, and compared with target values. The selected input control factors were arranged in a mixture simplex lattice design with 21 experimental runs. The obtained optimal settings for gelation were PEO, LH-11, Syloid, and Pharmacoat with weight ratios of 215.33 (88.50%), 5.68 (2.33%), 19.27 (7.92%), and 3.04 (1.25%), respectively. The optimal settings for drug release were PEO and citric acid with weight ratios of 191.99 (78.91%) and 51.32 (21.09%), respectively. Based on the results of matrix swelling and drug release, the optimal solutions, target values, and validation experiment results over time were similar and showed consistent patterns with very small biases. The experimental design methodology could be a very promising experimental design method to obtain maximum information with limited time and resources. It could also be very useful in formulation studies by providing a systematic and reliable screening method to characterize significant factors in the sustained release matrix tablet. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Diffraction measurements of residual stress in titanium matrix composites

    International Nuclear Information System (INIS)

    James, M.R.; Bourke, M.A.; Goldstone, J.A.; Lawson, A.C.

    1993-01-01

    Metal matrix composites develop residual strains after consolidation due to the thermal expansion mismatch between the reinforcement fiber and the matrix. X-ray and neutron diffraction measured values for the longitudinal residual stress in the matrix of four titanium MMCs are reported. For thick composites (> 6 plies) the surface stress measured by x-ray diffraction matches that determined by neutron diffraction and therefore represents the stress in the bulk region consisting of the fibers and matrix. For thin sheet composites, the surface values are lower than in the interior and increase as the outer rows of fibers are approached. While a rationale for the behavior in the thin sheet has yet to be developed, accounting for composite thickness is important when using x-ray measured values to validate analytic and finite element calculations of the residual stress state

  1. An iterative method to invert the LTSn matrix

    Energy Technology Data Exchange (ETDEWEB)

    Cardona, A.V.; Vilhena, M.T. de [UFRGS, Porto Alegre (Brazil)

    1996-12-31

Recently Vilhena and Barichello proposed the LTSn method to solve, analytically, the discrete ordinates problem (Sn problem) in transport theory. The main feature of this method consists in applying the Laplace transform to the set of Sn equations and solving the resulting algebraic system for the transport flux. Barichello solved the linear system containing the parameter s by applying the definition of matrix inversion, exploiting the structure of the LTSn matrix. In this work, a new scheme to invert the LTSn matrix is proposed, decomposing it into blocks and recursively inverting these blocks.
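Block-wise inversion of the kind described above rests on the Schur complement identity for a 2x2 block partition. A minimal sketch (a generic dense implementation for illustration; the actual LTSn scheme exploits the specific structure of that matrix):

```python
import numpy as np

def block_inverse(M, k):
    """Invert M via a 2x2 block partition at index k, using the
    Schur complement S = D - C A^{-1} B of the upper-left block A."""
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B              # Schur complement of A in M
    Sinv = np.linalg.inv(S)
    TL = Ainv + Ainv @ B @ Sinv @ C @ Ainv
    TR = -Ainv @ B @ Sinv
    BL = -Sinv @ C @ Ainv
    return np.block([[TL, TR], [BL, Sinv]])

# Illustrative well-conditioned test matrix
M = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 5.0, 1.0],
              [0.0, 0.0, 1.0, 4.0]])
Minv = block_inverse(M, 2)
```

Applied recursively (inverting A and S themselves by further partitioning), this gives the kind of recursive block-inversion scheme the abstract proposes.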

  2. Ab initio nuclear structure - the large sparse matrix eigenvalue problem

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P; Maris, Pieter [Department of Physics, Iowa State University, Ames, IA, 50011 (United States); Ng, Esmond; Yang, Chao [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Sosonkina, Masha, E-mail: jvary@iastate.ed [Scalable Computing Laboratory, Ames Laboratory, Iowa State University, Ames, IA, 50011 (United States)

    2009-07-01

The structure and reactions of light nuclei represent fundamental and formidable challenges for microscopic theory based on realistic strong interaction potentials. Several ab initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab initio no core shell model (NCSM) and the no core full configuration (NCFC) method frame this quantum many-particle problem as a large sparse matrix eigenvalue problem where one evaluates the Hamiltonian matrix in a basis space consisting of many-fermion Slater determinants and then solves for a set of the lowest eigenvalues and their associated eigenvectors. The resulting eigenvectors are employed to evaluate a set of experimental quantities to test the underlying potential. For fundamental problems of interest, the matrix dimension often exceeds 10^10 and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. We survey recent results and advances in solving this large sparse matrix eigenvalue problem. We also outline the challenges that lie ahead for achieving further breakthroughs in fundamental nuclear theory using these ab initio approaches.
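Solving for the lowest eigenpairs of a large sparse symmetric matrix is typically done with Lanczos-type iterative methods. A small-scale sketch using a sparse tridiagonal stand-in for a Hamiltonian (the tight-binding chain here is illustrative; production NCSM/NCFC codes work at vastly larger dimension with parallel Lanczos variants):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Sparse symmetric stand-in "Hamiltonian": 1D chain with -1 hopping
n = 1000
off = -np.ones(n - 1)
H = sp.diags([off, np.zeros(n), off], [-1, 0, 1], format="csr")

# Lowest few eigenpairs via implicitly restarted Lanczos
# (which="SA" = smallest algebraic eigenvalues)
vals, vecs = eigsh(H, k=4, which="SA")
```

Only matrix-vector products with H are needed, which is what makes the approach viable when the matrix itself barely fits in (or exceeds) memory.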

  3. Ab initio nuclear structure - the large sparse matrix eigenvalue problem

    International Nuclear Information System (INIS)

    Vary, James P; Maris, Pieter; Ng, Esmond; Yang, Chao; Sosonkina, Masha

    2009-01-01

The structure and reactions of light nuclei represent fundamental and formidable challenges for microscopic theory based on realistic strong interaction potentials. Several ab initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab initio no core shell model (NCSM) and the no core full configuration (NCFC) method frame this quantum many-particle problem as a large sparse matrix eigenvalue problem where one evaluates the Hamiltonian matrix in a basis space consisting of many-fermion Slater determinants and then solves for a set of the lowest eigenvalues and their associated eigenvectors. The resulting eigenvectors are employed to evaluate a set of experimental quantities to test the underlying potential. For fundamental problems of interest, the matrix dimension often exceeds 10^10 and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. We survey recent results and advances in solving this large sparse matrix eigenvalue problem. We also outline the challenges that lie ahead for achieving further breakthroughs in fundamental nuclear theory using these ab initio approaches.

  4. Correction of failure in antenna array using matrix pencil technique

    International Nuclear Information System (INIS)

    Khan, SU; Rahim, MKA

    2017-01-01

In this paper a non-iterative technique is developed for the correction of a faulty antenna array based on the matrix pencil technique (MPT). The failure of a sensor in an antenna array can damage the radiation power pattern in terms of sidelobe levels and nulls. In the developed technique, the radiation pattern of the array is sampled to form a discrete power pattern information set. This information set is arranged in the form of a Hankel matrix (HM), on which the singular value decomposition (SVD) is executed. By removing non-principal values, an optimum lower-rank estimate of the HM is obtained. This lower-rank matrix corresponds to the corrected pattern. The proposed technique is then employed to recover the weight excitations and position allocations from the estimated matrix. Numerical simulations confirm the efficiency of the proposed technique, which is compared with the available techniques in terms of sidelobe levels and nulls. (paper)
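The core Hankel-plus-SVD step above can be sketched on a generic sampled sequence: arrange the samples in a Hankel matrix, truncate its SVD to the principal components, and read a cleaned sequence back off the anti-diagonals (the two-tone test signal and rank choice here are illustrative assumptions, not the paper's antenna data):

```python
import numpy as np

# Noisy samples of a pattern built from two real cosines
N = 64
t = np.arange(N)
signal = np.cos(0.3 * t) + 0.5 * np.cos(0.8 * t)
rng = np.random.default_rng(0)
samples = signal + 0.05 * rng.standard_normal(N)

# Arrange the samples in a Hankel matrix: row i holds samples[i : i+L]
L = N // 2
H = np.array([samples[i:i + L] for i in range(N - L + 1)])

# SVD; keep only the principal components (rank 4: two real cosines)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = 4
H_clean = U[:, :r] * s[:r] @ Vt[:r, :]

# Recover a denoised sequence by averaging the Hankel anti-diagonals
denoised = np.array([np.mean(H_clean[::-1, :].diagonal(k))
                     for k in range(-(H.shape[0] - 1), H.shape[1])])
```

Discarding the non-principal singular values is exactly the low-rank estimation step the abstract describes; the matrix pencil machinery then extracts frequencies (or, in the array setting, excitations and positions) from the retained subspace.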

  5. Development and validation of an in vitro–in vivo correlation (IVIVC) model for propranolol hydrochloride extended-release matrix formulations

    Directory of Open Access Journals (Sweden)

    Chinhwa Cheng

    2014-06-01

Full Text Available The objective of this study was to develop an in vitro–in vivo correlation (IVIVC) model for hydrophilic matrix extended-release (ER) propranolol dosage formulations. The in vitro release characteristics of the drug were determined using USP apparatus I at 100 rpm, in a medium of varying pH (from pH 1.2 to pH 6.8). In vivo plasma concentrations and pharmacokinetic parameters in male beagle dogs were obtained after administering oral ER formulations and immediate-release (IR) commercial products. The similarity factor f2 was used to compare the dissolution data. The IVIVC model was developed using pooled fraction dissolved and fraction absorbed of propranolol ER formulations, ER-F and ER-S, with different release rates. An additional formulation ER-V, with a different release rate of propranolol, was prepared for evaluating the external predictability. The results showed that the percentage prediction error (%PE) values of Cmax and AUC0–∞ were 0.86% and 5.95%, respectively, for the external validation study. The observed low prediction errors for Cmax and AUC0–∞ demonstrated that the propranolol IVIVC model was valid.
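The similarity factor f2 used above to compare dissolution profiles has a simple closed form, f2 = 50 · log10(100 / sqrt(1 + mean squared difference)). A minimal sketch (the profile values are illustrative, not the study's dissolution data):

```python
import numpy as np

def f2_similarity(reference, test):
    """Similarity factor f2 for two dissolution profiles given as
    % dissolved at matched time points. f2 >= 50 is conventionally
    read as 'similar'; identical profiles give the maximum, 100."""
    R, T = np.asarray(reference, float), np.asarray(test, float)
    msd = np.mean((R - T) ** 2)          # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Identical profiles give the maximum f2 = 100
f2_identical = f2_similarity([20, 45, 70, 90], [20, 45, 70, 90])
# A uniformly faster profile gives a lower f2
f2_shifted = f2_similarity([20, 45, 70, 90], [30, 55, 80, 95])
```

An average point-to-point difference of about 10% dissolved puts f2 near the conventional similarity cut-off of 50.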

  6. Validity of verbal autopsy method to determine causes of death among adults in the urban setting of Ethiopia

    Directory of Open Access Journals (Sweden)

    Misganaw Awoke

    2012-08-01

Full Text Available Abstract Background Verbal autopsy has been widely used to estimate causes of death in settings with inadequate vital registries, but little is known about its validity. This analysis was part of the Addis Ababa Mortality Surveillance Program to examine the validity of verbal autopsy for determining causes of death compared with hospital medical records among adults in the urban setting of Ethiopia. Methods This validation study consisted of comparison of verbal autopsy final diagnosis with hospital diagnosis taken as a “gold standard”. In public and private hospitals of Addis Ababa, 20,152 adult deaths (15 years and above) were recorded between 2007 and 2010. Within the same period, a verbal autopsy was conducted for 4,776 adult deaths of which, 1,356 were deceased in any of Addis Ababa hospitals. Then, verbal autopsy and hospital data sets were merged using the variables; full name of the deceased, sex, address, age, place and date of death. We calculated sensitivity, specificity and positive predictive values with 95% confidence interval. Results After merging, a total of 335 adult deaths were captured. For communicable diseases, the values of sensitivity, specificity and positive predictive values of verbal autopsy diagnosis were 79%, 78% and 68% respectively. For non-communicable diseases, sensitivity of the verbal autopsy diagnoses was 69%, specificity 78% and positive predictive value 79%. Regarding injury, sensitivity of the verbal autopsy diagnoses was 70%, specificity 98% and positive predictive value 83%. Higher sensitivity was achieved for HIV/AIDS and tuberculosis, but lower specificity with relatively more false positives. Conclusion These findings may indicate the potential of verbal autopsy to provide cost-effective information to guide policy on communicable and non communicable diseases double burden among adults in Ethiopia. Thus, a well structured verbal autopsy method, followed by qualified physician reviews could be capable of
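The sensitivity, specificity and positive predictive values reported above all derive from a 2x2 comparison of the verbal autopsy diagnosis against the hospital "gold standard". A minimal sketch of those definitions (the counts below are illustrative, chosen only to echo percentages of the same order as the abstract, not the study's raw table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value (PPV)
    from the cells of a 2x2 table against a gold standard:
      tp = true positives, fp = false positives,
      fn = false negatives, tn = true negatives."""
    sensitivity = tp / (tp + fn)   # gold-standard cases detected
    specificity = tn / (tn + fp)   # gold-standard non-cases cleared
    ppv = tp / (tp + fp)           # positive calls that are correct
    return sensitivity, specificity, ppv

# Hypothetical counts giving 79% across the board
sens, spec, ppv = diagnostic_metrics(tp=79, fp=21, fn=21, tn=79)
```

Note that PPV, unlike sensitivity and specificity, depends on how common the cause of death is in the merged sample, which is why it can diverge from the other two figures.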

  7. Validity of verbal autopsy method to determine causes of death among adults in the urban setting of Ethiopia

    Science.gov (United States)

    2012-01-01

Background Verbal autopsy has been widely used to estimate causes of death in settings with inadequate vital registries, but little is known about its validity. This analysis was part of the Addis Ababa Mortality Surveillance Program to examine the validity of verbal autopsy for determining causes of death compared with hospital medical records among adults in the urban setting of Ethiopia. Methods This validation study consisted of comparison of verbal autopsy final diagnosis with hospital diagnosis taken as a “gold standard”. In public and private hospitals of Addis Ababa, 20,152 adult deaths (15 years and above) were recorded between 2007 and 2010. Within the same period, a verbal autopsy was conducted for 4,776 adult deaths of which, 1,356 were deceased in any of Addis Ababa hospitals. Then, verbal autopsy and hospital data sets were merged using the variables; full name of the deceased, sex, address, age, place and date of death. We calculated sensitivity, specificity and positive predictive values with 95% confidence interval. Results After merging, a total of 335 adult deaths were captured. For communicable diseases, the values of sensitivity, specificity and positive predictive values of verbal autopsy diagnosis were 79%, 78% and 68% respectively. For non-communicable diseases, sensitivity of the verbal autopsy diagnoses was 69%, specificity 78% and positive predictive value 79%. Regarding injury, sensitivity of the verbal autopsy diagnoses was 70%, specificity 98% and positive predictive value 83%. Higher sensitivity was achieved for HIV/AIDS and tuberculosis, but lower specificity with relatively more false positives. Conclusion These findings may indicate the potential of verbal autopsy to provide cost-effective information to guide policy on communicable and non communicable diseases double burden among adults in Ethiopia. Thus, a well structured verbal autopsy method, followed by qualified physician reviews could be capable of providing reasonable cause

  8. Universal shocks in the Wishart random-matrix ensemble.

    Science.gov (United States)

    Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr

    2013-05-01

    We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large N limit, this equation generalizes the simple inviscid Burgers equation that has been obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (so-called hard-edge singularities) familiar in random-matrix theory.

  9. Validation of the k0 standardization method in neutron activation analysis

    International Nuclear Information System (INIS)

    Kubesova, Marie

    2009-01-01

The goal of this work was to validate the k0 standardization method in neutron activation analysis for use by the Nuclear Physics Institute's NAA Laboratory. The precision and accuracy of the method were examined by using two types of reference materials: the one type comprised a set of synthetic materials and served to check the implementation of k0 standardization, the other type consisted of matrix NIST SRMs comprising various different matrices. In general, a good agreement was obtained between the results of this work and the certified values, giving evidence of the accuracy of our results. In addition, the limits were evaluated for 61 elements

  10. Validation of a simple evaporation-transpiration scheme (SETS) to estimate evaporation using micro-lysimeter measurements

    Science.gov (United States)

    Ghazanfari, Sadegh; Pande, Saket; Savenije, Hubert

    2014-05-01

Several methods exist to estimate E and T. The Penman-Monteith or Priestley-Taylor methods, along with the Jarvis scheme for estimating vegetation resistance, are commonly used to estimate these fluxes as a function of land cover, atmospheric forcing and soil moisture content. In this study, a simple evaporation transpiration method is developed based on the MOSAIC Land Surface Model that explicitly accounts for soil moisture. Soil evaporation and transpiration estimated by SETS is validated on a single column of soil profile with measured evaporation data from three micro-lysimeters located at Ferdowsi University of Mashhad synoptic station, Iran, for the year 2005. SETS is run using both implicit and explicit computational schemes. Results show that the implicit scheme estimates the vapor flux close to that by the explicit scheme. The mean difference between the implicit and explicit scheme is -0.03 mm/day. The paired t-test of the mean difference (p-value = 0.042 and t-value = 2.04) shows that there is no significant difference between the two methods. The sum of soil evaporation and transpiration from SETS is also compared with the P-M equation and micro-lysimeter measurements. SETS predicts the actual evaporation with a lower bias (1.24 mm/day) than P-M (1.82 mm/day) and with an R2 value of 0.82.
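The scheme comparison above rests on a paired t-test of day-by-day differences between the two computational schemes. A minimal sketch of that test (the daily flux values below are invented for illustration; they are not the study's measurements):

```python
import numpy as np
from scipy import stats

# Hypothetical paired daily vapour-flux estimates (mm/day) from the
# implicit and explicit schemes over the same six days
implicit = np.array([2.10, 1.80, 2.40, 2.00, 1.60, 2.20])
explicit = np.array([2.15, 1.78, 2.45, 2.02, 1.63, 2.18])

# Paired (related-samples) t-test on the per-day differences
t_stat, p_value = stats.ttest_rel(implicit, explicit)
mean_diff = np.mean(implicit - explicit)
```

Pairing by day removes the day-to-day weather variability that both schemes share, so the test is sensitive only to the systematic offset between them.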

  11. Cross-cultural validation of Lupus Impact Tracker in five European clinical practice settings.

    Science.gov (United States)

    Schneider, Matthias; Mosca, Marta; Pego-Reigosa, José-Maria; Gunnarsson, Iva; Maurel, Frédérique; Garofano, Anna; Perna, Alessandra; Porcasi, Rolando; Devilliers, Hervé

    2017-05-01

The aim was to evaluate the cross-cultural validity of the Lupus Impact Tracker (LIT) in five European countries and to assess its acceptability and feasibility from the patient and physician perspectives. A prospective, observational, cross-sectional and multicentre validation study was conducted in clinical settings. Before the visit, patients completed LIT, Short Form 36 (SF-36) and care satisfaction questionnaires. During the visit, physicians assessed disease activity [Safety of Estrogens in Lupus Erythematosus National Assessment (SELENA)-SLEDAI], organ damage [SLICC/ACR damage index (SDI)] and flare occurrence. Cross-cultural validity was assessed using the Differential Item Functioning method. Five hundred and sixty-nine SLE patients were included by 25 specialists; 91.7% were outpatients and 89.9% female, with mean age 43.5 (13.0) years. Disease profile was as follows: 18.3% experienced flares; mean SELENA-SLEDAI score 3.4 (4.5); mean SDI score 0.8 (1.4); and SF-36 mean physical and mental component summary scores: physical component summary 42.8 (10.8) and mental component summary 43.0 (12.3). Mean LIT score was 34.2 (22.3) (median: 32.5), indicating that lupus moderately impacted patients' daily life. A cultural Differential Item Functioning of negligible magnitude was detected across countries (pseudo-R2 difference of 0.01-0.04). Differences were observed between LIT scores and Physician Global Assessment, SELENA-SLEDAI, SDI scores = 0 (P cultural invariability across countries. They suggest that LIT can be used in routine clinical practice to evaluate and follow patient-reported outcomes in order to improve patient-physician interaction. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  12. A J matrix engine for density functional theory calculations

    International Nuclear Information System (INIS)

    White, C.A.; Head-Gordon, M.

    1996-01-01

We introduce a new method for the formation of the J matrix (Coulomb interaction matrix) within a basis of Cartesian Gaussian functions, as needed in density functional theory and Hartree-Fock calculations. By summing the density matrix into the underlying Gaussian integral formulas, we have developed a J matrix "engine" which forms the exact J matrix without explicitly forming the full set of two-electron integral intermediates. Several precomputable quantities have been identified, substantially reducing the number of floating point operations and memory accesses needed in a J matrix calculation. Initial timings indicate a speedup of greater than four times for the (pp‖pp) class of integrals, with speedups increasing to over ten times for (ff‖ff) integrals. Copyright 1996 American Institute of Physics
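The contraction the abstract describes, forming J from the density matrix and the two-electron integrals, can be sketched in NumPy. The integral tensor below is a random array symmetrized to mimic the real permutational symmetry; the paper's engine precisely avoids materializing this tensor, so this is only a definition of what it computes.

```python
import numpy as np

# Illustrative sketch (not the paper's optimized engine): the J (Coulomb)
# matrix is the contraction of the density matrix D with the two-electron
# integrals (mu nu | lam sig). Here the integrals are a random array given
# the 8-fold permutational symmetry of real Gaussian integrals.
rng = np.random.default_rng(0)
n = 6  # number of basis functions (arbitrary for the demo)
eri = rng.random((n, n, n, n))
eri = eri + eri.transpose(1, 0, 2, 3)   # (mu nu| = (nu mu|
eri = eri + eri.transpose(0, 1, 3, 2)   # |lam sig) = |sig lam)
eri = eri + eri.transpose(2, 3, 0, 1)   # (mu nu|lam sig) = (lam sig|mu nu)

D = rng.random((n, n))
D = 0.5 * (D + D.T)                     # density matrix is symmetric

J = np.einsum('mnls,ls->mn', eri, D)    # J_mn = sum_ls (mn|ls) D_ls
print(np.allclose(J, J.T))              # J inherits the symmetry
```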

  13. Strategy BMT Al-Ittihad Using Matrix IE, Matrix SWOT 8K, Matrix SPACE and Matrix TWOS

    Directory of Open Access Journals (Sweden)

    Nofrizal Nofrizal

    2018-03-01

Full Text Available This research aims to formulate and select the strategy of BMT Al-Ittihad Rumbai to face the changing business environment, both internal (organizational resources, finance, members) and external (competitors, the economy, politics and others). The research method used analysis of EFAS, IFAS, the IE Matrix, the SWOT-8K Matrix, the SPACE Matrix and the TWOS Matrix. We hope this research can assist BMT Al-Ittihad in formulating and selecting strategies for its sustainability in the future. The sample in this research was selected using a purposive sampling technique: the managers and leaders of BMT Al-Ittihad Rumbai Pekanbaru. The results show that the position of BMT Al-Ittihad according to the IE Matrix, SWOT-8K Matrix and SPACE Matrix is growth, stabilization and aggressive, respectively. The strategies chosen after applying the TWOS Matrix are market penetration, market development, vertical integration, horizontal integration, and stabilization (careful).

  14. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach.

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-05

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.
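The two decompositions being compared can be sketched side by side: eigendecomposition of the covariance matrix (classical PCA) versus eigendecomposition of a double-centred pairwise dissimilarity matrix (classical MDS). Synthetic two-class data stand in for the TSFS spectra; all names and parameters are illustrative.

```python
import numpy as np

# (a) covariance route vs (b) dissimilarity route, on data with two
# intrinsic groups (a stand-in for cumin vs non-cumin preparations).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (20, 50)),
               rng.normal(3.0, 1.0, (20, 50))])

# (a) eigendecomposition of the covariance matrix (PCA)
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
w, V = np.linalg.eigh(cov)
scores_pca = Xc @ V[:, ::-1][:, :2]          # top-2 principal components

# (b) eigendecomposition of the pairwise dissimilarity matrix (MDS)
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J                        # Gower double-centring
lam, U = np.linalg.eigh(B)
scores_mds = U[:, ::-1][:, :2] * np.sqrt(np.maximum(lam[::-1][:2], 0))

print(scores_pca.shape, scores_mds.shape)    # (40, 2) (40, 2)
```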

  15. Total and Local Quadratic Indices of the Molecular Pseudograph's Atom Adjacency Matrix: Applications to the Prediction of Physical Properties of Organic Compounds

    Directory of Open Access Journals (Sweden)

    Yovani Marrero Ponce

    2003-08-01

Full Text Available A novel topological approach for obtaining a family of new molecular descriptors is proposed. In this connection, a vector space E (molecular vector space), whose elements are organic molecules, is defined as a "direct sum" of different ℜ^i spaces. In this way we can represent molecules having a total of i atoms as elements (vectors) of the vector spaces ℜ^i (i = 1, 2, 3,..., n), where n is the number of atoms in the molecule. In these spaces the components of the vectors are atomic properties that characterize each kind of atom in particular. The total quadratic indices are based on the calculation of mathematical quadratic forms. These forms are functions of the k-th power of the molecular pseudograph's atom adjacency matrix (M). For simplicity, canonical bases are selected as the quadratic forms' bases. These indices were generalized to "higher analogues" as number sequences. In addition, this paper also introduces a local approach (local invariant) for molecular quadratic indices. This approach is based mainly on the use of a local matrix [M^k(G, F_R)]. This local matrix is obtained from the k-th power M^k(G) of the atom adjacency matrix M. M^k(G, F_R) includes the elements of the fragment of interest and those that are connected with it through paths of length k. Finally, total (and local) quadratic indices have been used in QSPR studies of four series of organic compounds. The quantitative models found are significant from a statistical point of view and permit a clear interpretation of the studied properties in terms of the structural features of molecules. External prediction series and cross-validation procedures (leave-one-out and leave-group-out) assessed model predictability. The reported method has shown similar results compared with other topological approaches. The results obtained were the following: a) Seven physical properties of 74 normal and branched alkanes (boiling points
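The descriptor family rests on a quadratic form in the powers of the adjacency matrix: the total quadratic index of order k is x^T M^k x, with M the atom adjacency matrix of the molecular (pseudo)graph and x a vector of atomic properties. A minimal sketch, with an illustrative 4-atom chain and property values:

```python
import numpy as np

# Total quadratic indices q_k = x^T M^k x for a toy molecular graph.
# The graph and the property vector are illustrative, not from the paper.
M = np.array([[0, 1, 0, 0],        # a 4-atom chain (e.g. a butane skeleton)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
x = np.array([2.55, 2.55, 2.55, 2.55])   # e.g. carbon electronegativities

q = [x @ np.linalg.matrix_power(M, k) @ x for k in range(4)]
print(q)   # a number sequence of "higher analogues" q_0, q_1, q_2, q_3
```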

  16. Validating internet research: a test of the psychometric equivalence of internet and in-person samples.

    Science.gov (United States)

    Meyerson, Paul; Tryon, Warren W

    2003-11-01

    This study evaluated the psychometric equivalency of Web-based research. The Sexual Boredom Scale was presented via the World-Wide Web along with five additional scales used to validate it. A subset of 533 participants that matched a previously published sample (Watt & Ewing, 1996) on age, gender, and race was identified. An 8 x 8 correlation matrix from the matched Internet sample was compared via structural equation modeling with a similar 8 x 8 correlation matrix from the previously published study. The Internet and previously published samples were psychometrically equivalent. Coefficient alpha values calculated on the matched Internet sample yielded reliability coefficients almost identical to those for the previously published sample. Factors such as computer administration and uncontrollable administration settings did not appear to affect the results. Demographic data indicated an overrepresentation of males by about 6% and Caucasians by about 13% relative to the U.S. Census (2000). A total of 2,230 participants were obtained in about 8 months without remuneration. These results suggest that data collection on the Web is (1) reliable, (2) valid, (3) reasonably representative, (4) cost effective, and (5) efficient.

  17. Definition of a matrix of the generalized parameters asymmetrical multiphase transmission lines

    Directory of Open Access Journals (Sweden)

    Suslov V.M.

    2005-12-01

Full Text Available A simple algorithm, which does not require the introduction of wave characteristics, is proposed for determining the matrix of generalized parameters of asymmetrical multiphase transmission lines. The determination is based on the matrix of primary per-unit-length line parameters and a simple iterative procedure. The number of iterations is governed by a prescribed error tolerance on the matrix relations between the separate blocks of the determined matrix; this tolerance is closely related to the overall error of the determined matrix.

  18. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    Science.gov (United States)

    Sattarivand, Mike; Kusano, Maggie; Poon, Ian; Caldwell, Curtis

    2012-11-01

Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or errors in PSF estimates were applied.
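The GTM idea the sGTM method builds on can be sketched in a few lines: observed regional means are a mixing of the true activities through a matrix of RSF overlap fractions, so the corrected activities solve a small linear system. The numbers below are invented for illustration; the real work lies in computing the RSF overlaps from the PSF.

```python
import numpy as np

# GTM-style partial volume correction: solve W @ true = observed, where
# w_ij is the fraction of region j's regional spread function (RSF) that
# spills into region i. All values here are illustrative.
W = np.array([[0.85, 0.10, 0.05],
              [0.12, 0.80, 0.08],
              [0.05, 0.07, 0.88]])
observed = np.array([3.1, 1.9, 0.9])   # PSF-blurred regional means

true_activity = np.linalg.solve(W, observed)
print(np.allclose(W @ true_activity, observed))
```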

  19. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    International Nuclear Information System (INIS)

    Sattarivand, Mike; Caldwell, Curtis; Kusano, Maggie; Poon, Ian

    2012-01-01

Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or errors in PSF estimates were applied.

  20. Study of the validity of a job-exposure matrix for the job strain model factors: an update and a study of changes over time.

    Science.gov (United States)

    Niedhammer, Isabelle; Milner, Allison; LaMontagne, Anthony D; Chastang, Jean-François

    2018-03-08

    The objectives of the study were to construct a job-exposure matrix (JEM) for psychosocial work factors of the job strain model, to evaluate its validity, and to compare the results over time. The study was based on national representative data of the French working population with samples of 46,962 employees (2010 SUMER survey) and 24,486 employees (2003 SUMER survey). Psychosocial work factors included the job strain model factors (Job Content Questionnaire): psychological demands, decision latitude, social support, job strain and iso-strain. Job title was defined by three variables: occupation and economic activity coded using standard classifications, and company size. A JEM was constructed using a segmentation method (Classification and Regression Tree-CART) and cross-validation. The best quality JEM was found using occupation and company size for social support. For decision latitude and psychological demands, there was not much difference using occupation and company size with or without economic activity. The validity of the JEM estimates was higher for decision latitude, job strain and iso-strain, and lower for social support and psychological demands. Differential changes over time were observed for psychosocial work factors according to occupation, economic activity and company size. This study demonstrated that company size in addition to occupation may improve the validity of JEMs for psychosocial work factors. These matrices may be time-dependent and may need to be updated over time. More research is needed to assess the validity of JEMs given that these matrices may be able to provide exposure assessments to study a range of health outcomes.
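The JEM principle (though not the CART segmentation itself) can be sketched with simulated data: each job-title cell, here occupation by company size, is assigned the mean self-reported score of its members, and validity can be probed by correlating individual scores with their cell means. Everything below is simulated and illustrative.

```python
import numpy as np

# A toy job-exposure matrix: group-mean exposure per (occupation,
# company-size) cell, then check how well cell means track individuals.
rng = np.random.default_rng(4)
n = 495
occ = np.arange(n) % 5                  # occupation code (5 groups)
size = (np.arange(n) // 5) % 3          # company-size band (3 bands)
cell = occ * 3 + size                   # 15 job-title cells, all non-empty
cell_effect = rng.normal(50, 10, 15)    # true cell-level exposure
score = cell_effect[cell] + rng.normal(0, 5, n)  # e.g. decision latitude

jem = np.array([score[cell == c].mean() for c in range(15)])
r = np.corrcoef(score, jem[cell])[0, 1]
print(r > 0.5)   # cell means capture much of the individual variation
```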

  1. De-MetaST-BLAST: a tool for the validation of degenerate primer sets and data mining of publicly available metagenomes.

    Directory of Open Access Journals (Sweden)

    Christopher A Gulvik

Full Text Available Development and use of primer sets to amplify nucleic acid sequences of interest is fundamental to studies spanning many life science disciplines. As such, the validation of primer sets is essential. Several computer programs have been created to aid in the initial selection of primer sequences that may or may not require multiple nucleotide combinations (i.e., degeneracies). Conversely, validation of primer specificity has remained largely unchanged for several decades, and there are currently few available programs that allow for an evaluation of primers containing degenerate nucleotide bases. To alleviate this gap, we developed the program De-MetaST, which performs an in silico amplification using user-defined nucleotide sequence dataset(s) and primer sequences that may contain degenerate bases. The program returns an output file that contains the in silico amplicons. When De-MetaST is paired with NCBI's BLAST (De-MetaST-BLAST), the program also returns the top 10 nr NCBI database hits for each recovered in silico amplicon. While the original motivation for development of this search tool was degenerate primer validation using the wealth of nucleotide sequences available in environmental metagenome and metatranscriptome databases, this search tool has potential utility in many data mining applications.
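The in silico amplification idea can be sketched by expanding IUPAC degenerate bases into regex character classes and scanning a target sequence for a forward/reverse primer pair. This toy matcher is illustrative only, not De-MetaST; the sequences and primers are invented.

```python
import re

# Map each IUPAC degenerate base to a regex character class.
IUPAC = {'A': 'A', 'C': 'C', 'G': 'G', 'T': 'T', 'R': '[AG]', 'Y': '[CT]',
         'S': '[CG]', 'W': '[AT]', 'K': '[GT]', 'M': '[AC]', 'B': '[CGT]',
         'D': '[AGT]', 'H': '[ACT]', 'V': '[ACG]', 'N': '[ACGT]'}

def primer_regex(p):
    return ''.join(IUPAC[b] for b in p)

def revcomp(s):
    # Reverse complement, including degenerate codes.
    return s.translate(str.maketrans('ACGTRYSWKMBDHVN',
                                     'TGCAYRSWMKVHDBN'))[::-1]

target = 'TTGACCGTAGGCATTTACGGATCCAAGTCAATG'
fwd, rev = 'GACCR', 'CATTG'   # rev is given 5'->3' on the opposite strand
pattern = primer_regex(fwd) + '[ACGT]*?' + primer_regex(revcomp(rev))
m = re.search(pattern, target)
print(m.group(0) if m else None)   # the recovered in silico amplicon
```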

  2. Direct calculation of resonance energies and widths using an R-matrix approach

    International Nuclear Information System (INIS)

    Schneider, B.I.

    1981-01-01

A modified R-matrix technique is presented which determines the eigenvalues and widths of resonant states by the direct diagonalization of a complex, non-Hermitian matrix. The method utilizes only real basis sets and requires a minimum of complex arithmetic. The method is applied to two problems, a set of coupled square wells and the Π_g resonance of N₂ in the static-exchange approximation. The results of the calculation are in good agreement with other methods and converge very quickly with basis-set size
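The core numerical step can be sketched directly: resonance positions and widths come from diagonalizing a complex, non-Hermitian matrix, where an eigenvalue E = E_r - iΓ/2 gives the position E_r and width Γ. The toy matrix below is illustrative, not the paper's R-matrix construction.

```python
import numpy as np

# A small complex symmetric (hence non-Hermitian) matrix standing in for
# the R-matrix-derived operator; its complex eigenvalues encode resonances.
H = np.array([[1.0 + 0.0j, 0.3],
              [0.3, 2.0 - 0.2j]])

evals, evecs = np.linalg.eig(H)
for E in evals:
    print(f"E_r = {E.real:.4f}, Gamma = {-2 * E.imag:.4f}")
```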

  3. Identification of generalized state transfer matrix using neural networks

    International Nuclear Information System (INIS)

    Zhu Changchun

    2001-01-01

Research is presented on the identification of the generalized state transfer matrix of a linear time-invariant (LTI) system using neural networks based on the Levenberg-Marquardt (LM) algorithm. Firstly, the generalized state transfer matrix is defined. The relationship between the identification of the state transfer matrix of structural dynamics and the identification of the weight matrix of a neural network is established in theory. A single-layer neural network is adopted to obtain the structural parameters, as a powerful tool with parallel distributed processing ability and the property of adaptation or learning. The constraint condition on the weight matrix of the neural network is deduced so that the learning and training of the designed network can be more effective. The identified neural network can be used to simulate the structural response excited by any other signals. To assess its further application to practical problems, some noise (5% and 10%) is assumed to be present in the response measurements. Results from computer simulation studies show that this method is valid and feasible
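The underlying idea can be sketched simply: a single linear layer trained on state pairs (x_k, x_{k+1}) recovers the state transfer matrix, since the layer's weight matrix plays exactly that role. Plain gradient descent stands in here for the Levenberg-Marquardt training used in the paper, and the system is invented for the demo.

```python
import numpy as np

# Learn W such that x_{k+1} = W x_k from noiseless state pairs.
rng = np.random.default_rng(5)
A_true = np.array([[0.9, 0.1], [-0.1, 0.8]])
X = rng.normal(size=(2, 200))        # states x_k as columns
Y = A_true @ X                       # successor states x_{k+1}

W = np.zeros((2, 2))                 # the network's weight matrix
for _ in range(500):
    grad = (W @ X - Y) @ X.T / X.shape[1]   # least-squares gradient
    W -= 0.1 * grad

print(np.allclose(W, A_true, atol=1e-3))
```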

  4. Matrix Effect Evaluation and Method Validation of Azoxystrobin and Difenoconazole Residues in Red Flesh Dragon Fruit (Hylocereus polyrhizus) Matrices Using QuEChERS Sample Preparation Methods Followed by LC-MS/MS Determination.

    Science.gov (United States)

    Noegrohati, Sri; Hernadi, Elan; Asviastuti, Syanti

    2018-03-30

Production of red flesh dragon fruit (Hylocereus polyrhizus) is hampered by Colletotrichum sp. Pre-harvest application of an azoxystrobin and difenoconazole mixture is recommended; therefore, a selective and sensitive multi-residue analytical method is required for monitoring and evaluating the commodity's safety. LC-MS/MS is a well-established analytical technique for qualitative and quantitative determination in complex matrices. However, this method is hindered by interference from co-eluted coextractives. This work evaluated the pH effect of acetate-buffered and citrate-buffered QuEChERS sample preparation on their effectiveness in reducing matrix effects. Citrate-buffered QuEChERS proved to produce a clean final extract with a relative matrix effect of 0.4%-0.7%. Method validation of the selected sample preparation followed by LC-MS/MS for whole dragon fruit, flesh and peel matrices fortified at 0.005, 0.01, 0.1 and 1 µg/g showed recoveries of 75%-119% and intermediate repeatability of 2%-14%. The expanded uncertainties were 7%-48%. Based on the international acceptance criteria, this method is valid.
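A common way to express the matrix effect the abstract reports is from calibration slopes: compare a matrix-matched calibration with a solvent calibration, ME% = (slope_matrix / slope_solvent - 1) x 100. The numbers below are invented for illustration, not the paper's data.

```python
import numpy as np

# Matrix effect from the ratio of calibration slopes (illustrative data).
conc = np.array([0.005, 0.01, 0.1, 1.0])          # fortification levels
area_solvent = 2.00e5 * conc + 120.0              # solvent calibration
area_matrix = 1.99e5 * conc + 135.0               # matrix-matched

slope_solv = np.polyfit(conc, area_solvent, 1)[0]
slope_mat = np.polyfit(conc, area_matrix, 1)[0]
me_percent = (slope_mat / slope_solv - 1.0) * 100.0
print(round(me_percent, 1))                        # -> -0.5 (mild suppression)
```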

  5. Mathematical model of water transport in Bacon and alkaline matrix-type hydrogen-oxygen fuel cells

    Science.gov (United States)

    Prokopius, P. R.; Easter, R. W.

    1972-01-01

    Based on general mass continuity and diffusive transport equations, a mathematical model was developed that simulates the transport of water in Bacon and alkaline-matrix fuel cells. The derived model was validated by using it to analytically reproduce various Bacon and matrix-cell experimental water transport transients.
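A model built from mass continuity plus diffusive transport reduces, in its simplest 1-D form, to the diffusion equation dc/dt = D d²c/dx² across the matrix. An explicit finite-difference transient illustrates the kind of water-transport transient such a model reproduces; all parameter values are made up for the demo.

```python
import numpy as np

# Explicit finite-difference solution of 1-D diffusion with fixed
# water activity on one face and a dry opposite face (illustrative).
D = 1e-9          # diffusivity, m^2/s (invented)
L = 1e-3          # matrix thickness, m (invented)
nx, nt = 51, 2000
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D          # stable explicit step (<= 0.5 dx^2 / D)

c = np.zeros(nx)
c[0] = 1.0                      # wet face
for _ in range(nt):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0      # Dirichlet boundaries

print(bool(c[0] >= c[nx // 2] >= c[-1]))  # profile falls toward the dry face
```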

  6. Universal composition-structure-property maps for natural and biomimetic platelet-matrix composites and stacked heterostructures.

    Science.gov (United States)

    Sakhavand, Navid; Shahsavari, Rouzbeh

    2015-03-16

Many natural and biomimetic platelet-matrix composites, such as nacre, silk, and clay-polymer composites, exhibit a remarkable balance of strength, toughness and/or stiffness, which calls for a universal measure to quantify this outstanding feature given the structure and material characteristics of the constituents. Analogously, there is an urgent need to quantify the mechanics of emerging electronic and photonic systems such as stacked heterostructures. Here we report the development of a unified framework to construct universal composition-structure-property diagrams that decode the interplay between various geometries and inherent material features in both platelet-matrix composites and stacked heterostructures. We study the effects of elastic and elastic-perfectly plastic matrices, overlap offset ratio and the competing mechanisms of platelet versus matrix failures. Validated by several 3D-printed specimens and a wide range of natural and synthetic materials across scales, the proposed universally valid diagrams have important implications for science-based engineering of numerous platelet-matrix composites and stacked heterostructures.

  7. Extending multivariate distance matrix regression with an effect size measure and the asymptotic null distribution of the test statistic.

    Science.gov (United States)

    McArtor, Daniel B; Lubke, Gitta H; Bergeman, C S

    2017-12-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains.
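The MDMR test statistic can be sketched directly from its ingredients: Gower-centre the squared distance matrix of the response profiles, project with the hat matrix of the predictors, and form a pseudo-F ratio of traces. The permutation/asymptotic p-value machinery discussed in the paper is omitted, and all data are simulated.

```python
import numpy as np

# Pseudo-F statistic of multivariate distance matrix regression (sketch).
rng = np.random.default_rng(2)
n, p, q = 60, 3, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # predictors
Y = rng.normal(size=(n, q))                                 # outcomes

D = np.sqrt(((Y[:, None] - Y[None, :]) ** 2).sum(-1))  # distance matrix
J = np.eye(n) - np.ones((n, n)) / n
G = -0.5 * J @ (D ** 2) @ J                  # Gower-centred inner products

H = X @ np.linalg.pinv(X.T @ X) @ X.T        # hat matrix of the predictors
num = np.trace(H @ G @ H) / p
den = np.trace((np.eye(n) - H) @ G @ (np.eye(n) - H)) / (n - p - 1)
pseudo_F = num / den
print(pseudo_F > 0)
```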

  8. Validity of the Perceived Health Competence Scale in a UK primary care setting.

    Science.gov (United States)

    Dempster, Martin; Donnelly, Michael

    2008-01-01

    The Perceived Health Competence Scale (PHCS) is a measure of self-efficacy regarding general health-related behaviour. This brief paper examines the psychometric properties of the PHCS in a UK context. Questionnaires containing the PHCS, the SF-36 and questions about perceived health needs were posted to 486 patients randomly selected from a GP practice list. Complete questionnaires were returned by 320 patients. Analyses of these responses provide strong evidence for the validity of the PHCS in this setting. Consequently, we conclude that the PHCS is a useful addition to measures of global self-efficacy and measures of self-efficacy regarding specific behaviours in the toolkit of health psychologists. This range of self-efficacy assessment tools will ensure that psychologists can match the level of specificity of the measure of expectancy beliefs to the level of specificity of the outcome of interest.

  9. Matrix metalloproteinase-2 plays a critical role in overload induced skeletal muscle hypertrophy.

    Science.gov (United States)

    Zhang, Qia; Joshi, Sunil K; Lovett, David H; Zhang, Bryon; Bodine, Sue; Kim, Hubert T; Liu, Xuhui

    2014-01-01

Extracellular matrix (ECM) components are instrumental in maintaining homeostasis and muscle fiber functional integrity. Skeletal muscle hypertrophy is associated with ECM remodeling. Specifically, recent studies have reported the involvement of matrix metalloproteinases (MMPs) in muscle ECM remodeling. However, the functional role of MMPs in muscle hypertrophy remains largely unknown. In this study, we examined the role of MMP-2 in skeletal muscle hypertrophy using a previously validated method where the plantaris muscles of mice were subjected to mechanical overload due to the surgical removal of synergist muscles (gastrocnemius and soleus). Following two weeks of overload, we observed a significant increase in MMP-2 activity and up-regulation of ECM components and remodeling enzymes in the plantaris muscles of wild-type mice. However, MMP-2 knockout mice developed significantly less hypertrophy and ECM remodeling in response to overload compared to their wild-type littermates. Investigation of protein synthesis rate and Akt/mTOR signaling revealed no difference between wild-type and MMP-2 knockout mice, suggesting that the difference in hypertrophy was independent of protein synthesis. Taken together, our results suggest that MMP-2 is a key mediator of ECM remodeling in the setting of skeletal muscle hypertrophy.

  10. Application of Neutrosophic Set Theory in Generalized Assignment Problem

    Directory of Open Access Journals (Sweden)

    Supriya Kar

    2015-09-01

Full Text Available This paper presents the application of Neutrosophic Set Theory (NST) in solving the Generalized Assignment Problem (GAP). GAP has been solved earlier under a fuzzy environment. NST is a generalization of the concepts of the classical set, fuzzy set, interval-valued fuzzy set and intuitionistic fuzzy set. Elements of a neutrosophic set are characterized by a truth-membership function, a falsity-membership function and also an indeterminacy-membership function, which is a more realistic way of expressing the parameters in real life problems. Here the elements of the cost matrix for the GAP are considered as neutrosophic elements, which have not been considered earlier by any other author. The problem has been solved by evaluating a score function matrix and then solving it by the Extremum Difference Method (EDM) [1] to get the optimal assignment. The method has been demonstrated by a suitable numerical example.
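The scoring-then-assignment idea can be sketched as follows. The paper's exact score function and its Extremum Difference Method are not reproduced here; a simple score s = (2 + T - I - F)/3 and a brute-force optimal assignment stand in for them, so everything in this sketch is illustrative.

```python
import numpy as np
from itertools import permutations

# Neutrosophic triples (truth T, indeterminacy I, falsity F) per
# agent-task pair, collapsed to crisp scores, then assigned optimally.
T = np.array([[0.7, 0.4, 0.5], [0.3, 0.8, 0.6], [0.6, 0.5, 0.9]])
I = np.array([[0.2, 0.5, 0.3], [0.4, 0.1, 0.3], [0.3, 0.4, 0.1]])
F = np.array([[0.1, 0.6, 0.4], [0.6, 0.2, 0.3], [0.4, 0.3, 0.1]])

score = (2.0 + T - I - F) / 3.0        # crisp score; higher = preferred
best = max(permutations(range(3)),
           key=lambda p: sum(score[i, p[i]] for i in range(3)))
print(best)                            # -> (0, 1, 2): task j for agent i
```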

  11. Reliability and validity of a novel tool to comprehensively assess food and beverage marketing in recreational sport settings.

    Science.gov (United States)

    Prowse, Rachel J L; Naylor, Patti-Jean; Olstad, Dana Lee; Carson, Valerie; Mâsse, Louise C; Storey, Kate; Kirk, Sara F L; Raine, Kim D

    2018-05-31

    Current methods for evaluating food marketing to children often study a single marketing channel or approach. As the World Health Organization urges the removal of unhealthy food marketing in children's settings, methods that comprehensively explore the exposure and power of food marketing within a setting from multiple marketing channels and approaches are needed. The purpose of this study was to test the inter-rater reliability and the validity of a novel settings-based food marketing audit tool. The Food and beverage Marketing Assessment Tool for Settings (FoodMATS) was developed and its psychometric properties evaluated in five public recreation and sport facilities (sites) and subsequently used in 51 sites across Canada for a cross-sectional analysis of food marketing. Raters recorded the count of food marketing occasions, presence of child-targeted and sports-related marketing techniques, and the physical size of marketing occasions. Marketing occasions were classified by healthfulness. Inter-rater reliability was tested using Cohen's kappa (κ) and intra-class correlations (ICC). FoodMATS scores for each site were calculated using an algorithm that represented the theoretical impact of the marketing environment on food preferences, purchases, and consumption. Higher FoodMATS scores represented sites with higher exposure to, and more powerful (unhealthy, child-targeted, sports-related, large) food marketing. Validity of the scoring algorithm was tested through (1) Pearson's correlations between FoodMATS scores and facility sponsorship dollars, and (2) sequential multiple regression for predicting "Least Healthy" food sales from FoodMATS scores. Inter-rater reliability was very good to excellent (κ = 0.88-1.00, p marketing in recreation facilities, the FoodMATS provides a novel means to comprehensively track changes in food marketing environments that can assist in developing and monitoring the impact of policies and interventions.
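The inter-rater statistic used above can be sketched in a few lines: Cohen's kappa for two raters' categorical codes is kappa = (p_o - p_e) / (1 - p_e), with p_o the observed agreement and p_e the agreement expected from the marginals. The toy ratings are invented.

```python
import numpy as np

# Cohen's kappa for two raters over the same items (illustrative data).
def cohens_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    p_o = np.mean(a == b)                                  # observed
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance
    return (p_o - p_e) / (1.0 - p_e)

rater1 = [1, 1, 0, 2, 2, 1, 0, 0, 2, 1]
rater2 = [1, 1, 0, 2, 1, 1, 0, 0, 2, 1]
print(round(cohens_kappa(rater1, rater2), 3))   # -> 0.846
```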

  12. Content validation of the international classification of functioning, disability and health core set for stroke from gender perspective using a qualitative approach.

    Science.gov (United States)

    Glässel, A; Coenen, M; Kollerits, B; Cieza, A

    2014-06-01

The extended ICF Core Set for stroke is an application of the International Classification of Functioning, Disability and Health (ICF) of the World Health Organisation (WHO) with the purpose to represent the typical spectrum of functioning of persons with stroke. The objective of the study is to add evidence to the content validity of the extended ICF Core Set for stroke from persons after stroke taking into account gender perspective. A qualitative study design was conducted by using individual interviews with women and men after stroke in an in- and outpatient rehabilitation setting. The sampling followed the maximum variation strategy. Sample size was determined by saturation. Concepts from qualitative data analysis were linked to ICF categories and compared to the extended ICF Core Set for stroke. Twelve women and 12 men participated in 24 individual interviews. In total, 143 out of 166 ICF categories included in the extended ICF Core Set for stroke were confirmed (women: N.=13; men: N.=17; both genders: N.=113). Thirty-eight additional categories that are not yet included in the extended ICF Core Set for stroke were raised by women and men. This study confirms that the experience of functioning and disability after stroke shows commonalities and differences for women and men. The validity of the extended ICF Core Set for stroke could be mostly confirmed, since it does not only include those areas of functioning and disability relevant to both genders but also those exclusively relevant to either women or men. Further research is needed on ICF categories not yet included in the extended ICF Core Set for stroke.

  13. Transfer matrix representation for periodic planar media

    Science.gov (United States)

    Parrinello, A.; Ghiringhelli, G. L.

    2016-06-01

Sound transmission through infinite planar media characterized by in-plane periodicity is addressed by exploiting free wave propagation on the related unit cells. An appropriate through-thickness transfer matrix, relating a proper set of variables describing the acoustic field at the two external surfaces of the medium, is derived by manipulating the dynamic stiffness matrix related to a finite element model of the unit cell. The adoption of finite element models avoids analytical modeling or simplifications of geometry or materials. The obtained matrix is then used in a transfer matrix method context, making it possible to combine the periodic medium with layers of a different nature and to treat both hard-wall and semi-infinite fluid termination conditions. A finite sequence of identical sub-layers through the thickness of the medium can be handled within the transfer matrix method, significantly decreasing the computational burden. Transfer matrices obtained by means of the proposed method are compared with analytical or equivalent models, in terms of sound transmission through barriers of different nature.
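The transfer matrix method step the abstract relies on can be sketched with the simplest case: each layer contributes a 2x2 matrix relating (pressure, normal velocity) on its two faces, and a stack is just the matrix product. Fluid layers at normal incidence are used for illustration; the paper derives such matrices from finite element models of periodic unit cells instead.

```python
import numpy as np

# Transfer matrix of a fluid layer (normal incidence), then a stack.
def fluid_layer(rho, c, d, omega):
    k = omega / c                      # wavenumber
    Z = rho * c                        # characteristic impedance
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

omega = 2 * np.pi * 1000.0             # 1 kHz
layers = [fluid_layer(1.21, 343.0, 0.05, omega),     # 5 cm air gap
          fluid_layer(1000.0, 1480.0, 0.01, omega)]  # 1 cm water film

T = np.eye(2, dtype=complex)
for L in layers:
    T = T @ L                          # stack layers by multiplication

Z0 = 1.21 * 343.0                      # semi-infinite air on both sides
tau = 2.0 / abs(T[0, 0] + T[0, 1] / Z0 + Z0 * T[1, 0] + T[1, 1])
TL = -20 * np.log10(tau)               # transmission loss in dB
print(TL > 0)                          # the mismatched stack attenuates
```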

  14. Matrix matters: differences of grand skink metapopulation parameters in native tussock grasslands and exotic pasture grasslands.

    Directory of Open Access Journals (Sweden)

    Konstanze Gebauer

    Full Text Available Modelling metapopulation dynamics is a potentially very powerful tool for conservation biologists. In recent years, scientists have broadened the range of variables incorporated into metapopulation modelling from using almost exclusively habitat patch size and isolation, to the inclusion of attributes of the matrix and habitat patch quality. We investigated the influence of habitat patch and matrix characteristics on the metapopulation parameters of a highly endangered lizard species, the New Zealand endemic grand skink (Oligosoma grande taking into account incomplete detectability. The predictive ability of the developed zxmetapopulation model was assessed through cross-validation of the data and with an independent data-set. Grand skinks occur on scattered rock-outcrops surrounded by indigenous tussock (bunch and pasture grasslands therefore implying a metapopulation structure. We found that the type of matrix surrounding the habitat patch was equally as important as the size of habitat patch for estimating occupancy, colonisation and extinction probabilities. Additionally, the type of matrix was more important than the physical distance between habitat patches for colonisation probabilities. Detection probability differed between habitat patches in the two matrix types and between habitat patches with different attributes such as habitat patch composition and abundance of vegetation on the outcrop. The developed metapopulation models can now be used for management decisions on area protection, monitoring, and the selection of translocation sites for the grand skink. Our study showed that it is important to incorporate not only habitat patch size and distance between habitat patches, but also those matrix type and habitat patch attributes which are vital in the ecology of the target species.

  15. Neutrino mass matrix: Inverted hierarchy and CP violation

    International Nuclear Information System (INIS)

    Frigerio, Michele; Smirnov, Alexei Yu.

    2003-01-01

    We reconstruct the neutrino mass matrix in the flavor basis, in the case of an inverted mass hierarchy (ordering), using all available experimental data on neutrino masses and oscillations. We analyze the dependence of the matrix elements m_αβ on the CP-violating Dirac phase δ and Majorana phases ρ and σ, for different values of the absolute mass scale. We find that the present data admit various structures of the mass matrix: (i) hierarchical structures with a set of small (zero) elements; (ii) structures with equalities among various groups of elements: e-row and/or μτ-block elements, diagonal and/or off-diagonal elements; (iii) a 'democratic' structure. We find the values of the phases for which these structures are realized. The mass matrix elements can anticorrelate with flavor: inverted partial or complete flavor alignment is possible. For various structures of the mass matrix we identify the possible underlying symmetry. We find that the mass matrix can be reconstructed completely only in particular cases, provided that the absolute scale of the mass is measured. Generally, the freedom related to the Majorana phase σ will not be removed, thus admitting various types of mass matrix.
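As context for the reconstruction described above, the flavor-basis Majorana mass matrix is commonly written in terms of the PMNS mixing matrix U (which carries the Dirac phase δ) and the Majorana phases. The convention sketched below is one common choice and may differ in detail from the paper's:

```latex
m_{\alpha\beta} \;=\; \sum_{j=1}^{3} U_{\alpha j}\, U_{\beta j}\, m_j \,,
\qquad m_j = |m_j|\, e^{2i\phi_j},\quad \phi_j \in \{0,\,\rho,\,\sigma\},
```

so that scanning δ, ρ and σ at a fixed absolute mass scale traces out the admissible structures of m_αβ.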

  16. Sparse and smooth canonical correlation analysis through rank-1 matrix approximation

    Science.gov (United States)

    Aïssa-El-Bey, Abdeldjalil; Seghouane, Abd-Krim

    2017-12-01

    Canonical correlation analysis (CCA) is a well-known technique used to characterize the relationship between two sets of multidimensional variables by finding linear combinations of variables with maximal correlation. Sparse CCA and smooth or regularized CCA are two widely used variants of CCA, because of the improved interpretability of the former and the better performance of the latter. So far, the cross-matrix product of the two sets of multidimensional variables has been widely used for the derivation of these variants. In this paper, two new algorithms for sparse CCA and smooth CCA are proposed. These algorithms differ from the existing ones in their derivation, which is based on penalized rank-1 matrix approximation and on the orthogonal projectors onto the spaces spanned by the two sets of multidimensional variables instead of the simple cross-matrix product. The performance and effectiveness of the proposed algorithms are tested on simulated experiments. The results show that they outperform state-of-the-art sparse CCA algorithms.
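A minimal sketch of the core building block, penalized rank-1 matrix approximation, can be written as alternating power iterations with l1 soft-thresholding. This illustrates the general idea only, not the authors' algorithm; the function names and penalty weights are invented for the example:

```python
import numpy as np

def soft_threshold(x, lam):
    # Elementwise soft-thresholding, the proximal operator of the l1 penalty.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_rank1(M, lam_u=0.1, lam_v=0.1, n_iter=200):
    """Penalized rank-1 approximation of M by alternating power iterations,
    soft-thresholding each factor to encourage sparsity."""
    v = np.ones(M.shape[1]) / np.sqrt(M.shape[1])   # deterministic start
    u = np.zeros(M.shape[0])
    for _ in range(n_iter):
        u = soft_threshold(M @ v, lam_u)
        nu = np.linalg.norm(u)
        if nu > 0:
            u /= nu
        v = soft_threshold(M.T @ u, lam_v)
        nv = np.linalg.norm(v)
        if nv > 0:
            v /= nv
    return u, v, float(u @ M @ v)
```

Applied to a cross-matrix or projector product in CCA, the two thresholded factors play the role of sparse canonical weight vectors.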

  17. Microlocal study of S-matrix singularity structure

    International Nuclear Information System (INIS)

    Kawai, Takahiro; Kyoto Univ.; Stapp, H.P.

    1975-01-01

    Support is adduced for two related conjectures of simplicity of the analytic structure of the S-matrix and related functions; namely, Sato's conjecture that the S-matrix is a solution of a maximally over-determined system of pseudo-differential equations, and our conjecture that the singularity spectrum of any bubble diagram function has the conormal structure with respect to a canonical decomposition of the solutions of the relevant Landau equations. This latter conjecture eliminates the open sets of allowed singularities that existing procedures permit. (orig.) [de

  18. Coulomb matrix elements in multi-orbital Hubbard models.

    Science.gov (United States)

    Bünemann, Jörg; Gebhard, Florian

    2017-04-26

    Coulomb matrix elements are needed in all studies in solid-state theory that are based on Hubbard-type multi-orbital models. Due to symmetries, the matrix elements are not independent. We determine a set of independent Coulomb parameters for a d-shell and an f-shell and all point groups with up to 16 elements (O_h, O, T_d, T_h, D_6h, and D_4h). Furthermore, we express all other matrix elements as functions of the independent Coulomb parameters. Apart from the solution of the general point-group problem, we investigate in detail the spherical approximation and first-order corrections to the spherical approximation.

  19. Nonnegative Matrix Factorizations Performing Object Detection and Localization

    Directory of Open Access Journals (Sweden)

    G. Casalino

    2012-01-01

    Full Text Available We study the problem of detecting and localizing objects in still, gray-scale images making use of the part-based representation provided by nonnegative matrix factorizations. Nonnegative matrix factorization represents an emerging example of subspace methods, which is able to extract interpretable parts from a set of template image objects and then to additively use them for describing individual objects. In this paper, we present a prototype system based on some nonnegative factorization algorithms, which differ in the additional properties added to the nonnegative representation of data, in order to investigate if any additional constraint produces better results in general object detection via nonnegative matrix factorizations.
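The part-based decomposition underlying such a system can be sketched with the classic multiplicative-update rules for nonnegative matrix factorization. This is a generic NMF baseline, not one of the specific constrained variants compared in the paper, and the function name is illustrative:

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2.
    V (m x n) is nonnegative; returns nonnegative W (m x r) and H (r x n)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve nonnegativity of W and H.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

When V stacks vectorized template images column-wise, the columns of W act as additive "parts" and each column of H encodes one object as a nonnegative combination of those parts.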

  20. GB Diet matrix as informed by EMAX

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set was taken from CRD 08-18 at the NEFSC. Specifically, the Georges Bank diet matrix was developed for the EMAX exercise described in that center...

  1. Norming the odd: creation, norming, and validation of a stimulus set for the study of incongruities across music and language.

    Science.gov (United States)

    Featherstone, Cara R; Waterman, Mitch G; Morrison, Catriona M

    2012-03-01

    Research into similarities between music and language processing is currently experiencing a strong renewed interest. Recent methodological advances have led to neuroimaging studies presenting striking similarities between neural patterns associated with the processing of music and language--notably, in the study of participants' responses to elements that are incongruous with their musical or linguistic context. Responding to a call for greater systematicity by leading researchers in the field of music and language psychology, this article describes the creation, selection, and validation of a set of auditory stimuli in which both congruence and resolution were manipulated in equivalent ways across harmony, rhythm, semantics, and syntax. Three conditions were created by changing the contexts preceding and following musical and linguistic incongruities originally used for effect by authors and composers: Stimuli in the incongruous-resolved condition reproduced the original incongruity and resolution into the same context; stimuli in the incongruous-unresolved condition reproduced the incongruity but continued postincongruity with a new context dictated by the incongruity; and stimuli in the congruous condition presented the same element of interest, but the entire context was adapted to match it so that it was no longer incongruous. The manipulations described in this article rendered unrecognizable the original incongruities from which the stimuli were adapted, while maintaining ecological validity. The norming procedure and validation study resulted in a significant increase in perceived oddity from congruous to incongruous-resolved and from incongruous-resolved to incongruous-unresolved in all four components of music and language, making this set of stimuli a theoretically grounded and empirically validated resource for this growing area of research.

  2. Matrix completion by deep matrix factorization.

    Science.gov (United States)

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data with nonlinear structures. Recently a few researchers have attempted to incorporate nonlinear techniques into matrix completion, but considerable limitations remain. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods, which are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting, and collaborative filtering. The experimental results verify that DMF provides higher matrix completion accuracy than existing methods and is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
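For contrast with DMF's nonlinear model, the linear latent variable baseline it generalizes can be sketched as gradient descent on a masked low-rank factorization; DMF effectively replaces the linear map below with a multilayer network whose latent inputs are also optimized. All names and hyperparameters here are illustrative:

```python
import numpy as np

def complete_lowrank(X, mask, rank=2, lr=0.02, n_iter=5000, seed=0):
    """Fill in missing entries of X (mask==1 where observed) by fitting a
    low-rank model U @ V.T to the observed entries only via gradient descent.
    DMF replaces the linear map U -> U @ V.T with a multilayer network."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(n_iter):
        R = mask * (U @ V.T - X)        # residual on observed entries only
        U -= lr * (R @ V)               # gradient of 0.5 * ||R||_F^2 w.r.t. U
        V -= lr * (R.T @ U)             # gradient w.r.t. V
    return U @ V.T                      # missing entries read off the model
```

The returned matrix carries model predictions at the masked-out positions, which is exactly the completion step; in DMF the analogous step propagates the optimized latent variables through the trained network.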

  3. Validity and Interrater Reliability of the Visual Quarter-Waste Method for Assessing Food Waste in Middle School and High School Cafeteria Settings.

    Science.gov (United States)

    Getts, Katherine M; Quinn, Emilee L; Johnson, Donna B; Otten, Jennifer J

    2017-11-01

    Measuring food waste (ie, plate waste) in school cafeterias is an important tool to evaluate the effectiveness of school nutrition policies and interventions aimed at increasing consumption of healthier meals. Visual assessment methods are frequently applied in plate waste studies because they are more convenient than weighing. The visual quarter-waste method has become a common tool in studies of school meal waste and consumption, but previous studies of its validity and reliability have used correlation coefficients, which measure association but not necessarily agreement. The aims of this study were to determine, using a statistic measuring interrater agreement, whether the visual quarter-waste method is valid and reliable for assessing food waste in a school cafeteria setting when compared with the gold standard of weighed plate waste. To evaluate validity, researchers used the visual quarter-waste method and weighed food waste from 748 trays at four middle schools and five high schools in one school district in Washington State during May 2014. To assess interrater reliability, researcher pairs independently assessed 59 of the same trays using the visual quarter-waste method. Both validity and reliability were assessed using a weighted κ coefficient. For validity, as compared with the measured weight, 45% of foods assessed using the visual quarter-waste method were in almost perfect agreement, 42% of foods were in substantial agreement, 10% were in moderate agreement, and 3% were in slight agreement. For interrater reliability between pairs of visual assessors, 46% of foods were in perfect agreement, 31% were in almost perfect agreement, 15% were in substantial agreement, and 8% were in moderate agreement. These results suggest that the visual quarter-waste method is a valid and reliable tool for measuring plate waste in school cafeteria settings. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
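The weighted κ statistic used in this study to measure agreement (rather than mere association) can be sketched as follows. This is a generic implementation of weighted Cohen's kappa with linear or quadratic disagreement weights, not the study's actual analysis code:

```python
import numpy as np

def weighted_kappa(a, b, n_cat, weight="linear"):
    """Weighted Cohen's kappa for two raters' category assignments
    (integers 0 .. n_cat-1), with 'linear' or 'quadratic' weights."""
    a, b = np.asarray(a), np.asarray(b)
    O = np.zeros((n_cat, n_cat))          # observed joint frequencies
    for i, j in zip(a, b):
        O[i, j] += 1
    O /= O.sum()
    pa, pb = O.sum(axis=1), O.sum(axis=0)
    E = np.outer(pa, pb)                  # expected under independence
    d = np.abs(np.arange(n_cat)[:, None] - np.arange(n_cat)[None, :])
    W = d / (n_cat - 1) if weight == "linear" else (d / (n_cat - 1)) ** 2
    # kappa = 1 - (weighted observed disagreement) / (weighted expected)
    return 1 - (W * O).sum() / (W * E).sum()
```

With quarter-waste categories coded 0-4, near-miss ratings (e.g. half vs. three-quarters wasted) are penalized less than distant ones, which is why weighted κ captures agreement better than a plain correlation coefficient.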

  4. Not dead yet: the rise, fall and persistence of the BCG Matrix

    OpenAIRE

    Madsen, Dag Øivind

    2017-01-01

    The BCG Matrix was introduced almost 50 years ago, and is today considered one of the most iconic strategic planning techniques. Using management fashion theory as a theoretical lens, this paper examines the historical rise, fall and persistence of the BCG Matrix. The analysis highlights the role played by fashion-setting actors (e.g., consultants, business schools and business media) in the rise of the BCG Matrix. However, over time, portfolio planning models such as the BCG Matrix were atta...

  5. Inverter Matrix for the Clementine Mission

    Science.gov (United States)

    Buehler, M. G.; Blaes, B. R.; Tardio, G.; Soli, G. A.

    1994-01-01

    An inverter matrix test circuit was designed for the Clementine space mission and is built into the RRELAX (Radiation and Reliability Assurance Experiment). The objective is to develop a circuit that will allow the evaluation of the CMOS FETs using a lean data set in the noisy spacecraft environment.

  6. Validity of the Elite HRV Smartphone Application for Examining Heart Rate Variability in a Field-Based Setting.

    Science.gov (United States)

    Perrotta, Andrew S; Jeklin, Andrew T; Hives, Ben A; Meanwell, Leah E; Warburton, Darren E R

    2017-08-01

    Perrotta, AS, Jeklin, AT, Hives, BA, Meanwell, LE, and Warburton, DER. Validity of the Elite HRV smartphone application for examining heart rate variability in a field-based setting. J Strength Cond Res 31(8): 2296-2302, 2017. The introduction of smartphone applications has allowed athletes and practitioners to record and store R-R intervals on smartphones for immediate heart rate variability (HRV) analysis. This user-friendly option should be validated in order to give practitioners confidence when monitoring their athletes before implementing such equipment. The objective of this investigation was to examine the relationship and the level of agreement between a vagal-related HRV index, rMSSD, derived from a smartphone application accessible on most operating systems and from a frequently used computer software program, Kubios HRV 2.2. R-R intervals were recorded immediately upon awakening over 14 consecutive days using the Elite HRV smartphone application. R-R recordings were then exported into Kubios HRV 2.2 for analysis. The relationship and levels of agreement between rMSSDln values derived from Elite HRV and from Kubios HRV 2.2 were examined using a Pearson product-moment correlation and a Bland-Altman plot. An extremely large relationship was identified (r = 0.92), suggesting that the smartphone HRV application may offer a reliable platform when assessing parasympathetic modulation.
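The vagal-related index compared across the two platforms, rMSSD (and its log transform, rMSSDln), is straightforward to compute from a series of R-R intervals. A minimal sketch with illustrative function names:

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences of R-R intervals (ms),
    the vagal-related HRV index compared between Elite HRV and Kubios."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def rmssd_ln(rr_ms):
    # Natural-log transform commonly applied before day-to-day comparison.
    return math.log(rmssd(rr_ms))
```

For example, the morning recording workflow described above amounts to computing `rmssd_ln` on each day's exported R-R series and then correlating the two platforms' values.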

  7. The Child Behaviour Assessment Instrument: development and validation of a measure to screen for externalising child behavioural problems in community setting

    Directory of Open Access Journals (Sweden)

    Perera Hemamali

    2010-06-01

    Full Text Available Abstract Background In Sri Lanka, behavioural problems have grown to epidemic proportions, accounting for the second highest category of mental health problems among children. Early identification of behavioural problems in children is an important pre-requisite for the implementation of interventions to prevent long-term psychiatric outcomes. The objectives of the study were to develop and validate a screening instrument for use in the community setting to identify behavioural problems in children aged 4-6 years. Methods An initial 54-item questionnaire was developed following an extensive review of the literature. A three-round Delphi process involving a panel of experts from six relevant fields was then undertaken to refine the nature and number of items, creating the 15-item community screening instrument, the Child Behaviour Assessment Instrument (CBAI). The validation study was conducted in the Medical Officer of Health area Kaduwela, Sri Lanka, and a community sample of 332 children aged 4-6 years was recruited by a two-stage randomization process. The behaviour status of the participants was assessed concurrently by an interviewer using the CBAI and by a clinical psychologist following clinical assessment. Criterion validity was appraised by assessing the sensitivity, specificity and predictive values at the optimum screening cut-off value. Construct validity of the instrument was quantified by testing whether the data of the validation study fit a hypothetical model. Face and content validity of the CBAI were qualitatively assessed by a panel of experts. The reliability of the instrument was assessed by internal consistency analysis and test-retest methods in a 15% subset of the community sample.
    Results Using Receiver Operating Characteristic analysis, a CBAI score of >16 was identified as the cut-off point that optimally differentiated children having behavioural problems, with a sensitivity of 0.88 (95% CI = 0.80-0.96) and a specificity of 0.81 (95% CI = 0
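The ROC-based choice of an optimal screening cut-off, as used for the CBAI, is often made by maximizing Youden's J = sensitivity + specificity - 1. The sketch below uses Youden's index as the selection criterion, which is an assumption here since the abstract does not state the exact rule; all names are illustrative:

```python
def best_cutoff(scores, labels):
    """Pick the screening cut-off maximizing Youden's J = sens + spec - 1.
    scores: instrument totals; labels: 1 = case, 0 = non-case (reference)."""
    best = None
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s > c and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s <= c and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s <= c and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s > c and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return best  # (J, cutoff, sensitivity, specificity)
```

A score strictly above the returned cutoff flags a screen-positive child, mirroring the ">16" rule reported for the CBAI.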

  8. Matrix metalloproteinase activity assays: Importance of zymography.

    Science.gov (United States)

    Kupai, K; Szucs, G; Cseh, S; Hajdu, I; Csonka, C; Csont, T; Ferdinandy, P

    2010-01-01

    Matrix metalloproteinases (MMPs) are zinc-dependent endopeptidases capable of degrading extracellular matrix, including the basement membrane. MMPs are associated with various physiological processes such as morphogenesis, angiogenesis, and tissue repair. Moreover, due to the novel non-matrix related intra- and extracellular targets of MMPs, dysregulation of MMP activity has been implicated in a number of acute and chronic pathological processes, such as arthritis, acute myocardial infarction, chronic heart failure, chronic obstructive pulmonary disease, inflammation, and cancer metastasis. MMPs are considered as viable drug targets in the therapy of the above diseases. For the development of selective MMP inhibitor molecules, reliable methods are necessary for target validation and lead development. Here, we discuss the major methods used for MMP assays, focusing on substrate zymography. We highlight some problems frequently encountered during sample preparations, electrophoresis, and data analysis of zymograms. Zymography is a widely used technique to study extracellular matrix-degrading enzymes, such as MMPs, from tissue extracts, cell cultures, serum or urine. This simple and sensitive technique identifies MMPs by the degradation of their substrate and by their molecular weight and therefore helps to understand the widespread role of MMPs in different pathologies and cellular pathways. Copyright 2010 Elsevier Inc. All rights reserved.

  9. Solution of the scattering T matrix equation in discrete complex momentum space

    International Nuclear Information System (INIS)

    Rawitscher, G.H.; Delic, G.

    1984-01-01

    The scattering solution to the Lippmann-Schwinger equation is expanded into a set of spherical Bessel functions of complex wave numbers K_j, with j = 1, 2, ..., M. The value of each K_j is determined from the condition that the spherical Bessel function smoothly matches onto an asymptotically outgoing spherical Hankel (or Coulomb) function of the correct physical wave number at a matching point R. The spherical Bessel functions thus determined are Sturmian functions, and they form a complete set in the interval 0 to R. The coefficients of the expansion of the scattering function are determined by matrix inversion of a linear set of algebraic equations, which are equivalent to the solution of the T-matrix equation in complex momentum space. In view of the presence of a matching radius, no singularities are encountered for the Green's functions, and the inclusion of Coulomb potentials offers no computational difficulties. Three numerical examples are performed in order to illustrate the convergence of the elastic scattering matrix S with M. One of these consists of a set of coupled equations which describe the breakup of a deuteron as it scatters from the nucleus 58Ni. A value of M of 15 or less is found sufficient to reproduce the exact S-matrix element to an accuracy of four figures after the decimal point.

  10. Explicit treatment of N-body correlations within a density-matrix formalism

    International Nuclear Information System (INIS)

    Shun-Jin, W.; Cassing, W.

    1985-01-01

    The nuclear many-body problem is reformulated in the density-matrix approach such that n-body correlations are separated out from the reduced density matrix ρ_n. A set of equations for the time evolution of the n-body correlations c_n is derived which allows for physically transparent truncations with respect to the order of correlations. In the stationary limit (ċ_n = 0) a restriction to two-body correlations yields a generalized Bethe-Goldstone equation, and a restriction to three-body correlations yields generalized Faddeev equations in the density-matrix formulation. Furthermore it can be shown that any truncation of the set of equations (c_n = 0, n > m) is compatible with conservation laws, a quality which in general is not fulfilled if higher-order correlations are treated perturbatively

  11. Matrix metalloproteinases in acute coronary syndromes: current perspectives.

    Science.gov (United States)

    Kampoli, Anna-Maria; Tousoulis, Dimitris; Papageorgiou, Nikolaos; Antoniades, Charalambos; Androulakis, Emmanuel; Tsiamis, Eleftherios; Latsios, George; Stefanadis, Christodoulos

    2012-01-01

    Matrix metalloproteinases (MMPs) are a family of zinc metallo-endopeptidases secreted by cells and are responsible for much of the turnover of matrix components. Several studies have shown that MMPs are involved in all stages of the atherosclerotic process, from the initial lesion to plaque rupture. Recent evidence suggests that MMP activity may facilitate atherosclerosis, plaque destabilization, and platelet aggregation. In the heart, matrix metalloproteinases participate in vascular remodeling, plaque instability, and ventricular remodeling after cardiac injury. The aim of the present article is to review the structure, function, and regulation of MMPs, and to discuss their potential role in the pathogenesis of acute coronary syndromes, as well as their contribution and usefulness in the setting of the disease.

  12. From deep TLS validation to ensembles of atomic models built from elemental motions

    International Nuclear Information System (INIS)

    Urzhumtsev, Alexandre; Afonine, Pavel V.; Van Benschoten, Andrew H.; Fraser, James S.; Adams, Paul D.

    2015-01-01

    Procedures are described for extracting the vibration and libration parameters corresponding to a given set of TLS matrices and their simultaneous validation. Knowledge of these parameters allows the generation of structural ensembles corresponding to these matrices. The translation–libration–screw model first introduced by Cruickshank, Schomaker and Trueblood describes the concerted motions of atomic groups. Using TLS models can improve the agreement between calculated and experimental diffraction data. Because the T, L and S matrices describe a combination of atomic vibrations and librations, TLS models can also potentially shed light on molecular mechanisms involving correlated motions. However, this use of TLS models in mechanistic studies is hampered by the difficulties in translating the results of refinement into molecular movement or a structural ensemble. To convert the matrices into a constituent molecular movement, the matrix elements must satisfy several conditions. Refining the T, L and S matrix elements as independent parameters without taking these conditions into account may result in matrices that do not represent concerted molecular movements. Here, a mathematical framework and the computational tools to analyze TLS matrices, resulting in either explicit decomposition into descriptions of the underlying motions or a report of broken conditions, are described. The description of valid underlying motions can then be output as a structural ensemble. All methods are implemented as part of the PHENIX project

  13. From deep TLS validation to ensembles of atomic models built from elemental motions

    Energy Technology Data Exchange (ETDEWEB)

    Urzhumtsev, Alexandre, E-mail: sacha@igbmc.fr [Centre for Integrative Biology, Institut de Génétique et de Biologie Moléculaire et Cellulaire, CNRS–INSERM–UdS, 1 Rue Laurent Fries, BP 10142, 67404 Illkirch (France); Université de Lorraine, BP 239, 54506 Vandoeuvre-les-Nancy (France); Afonine, Pavel V. [Lawrence Berkeley National Laboratory, Berkeley, California (United States); Van Benschoten, Andrew H.; Fraser, James S. [University of California, San Francisco, San Francisco, CA 94158 (United States); Adams, Paul D. [Lawrence Berkeley National Laboratory, Berkeley, California (United States); University of California Berkeley, Berkeley, CA 94720 (United States); Centre for Integrative Biology, Institut de Génétique et de Biologie Moléculaire et Cellulaire, CNRS–INSERM–UdS, 1 Rue Laurent Fries, BP 10142, 67404 Illkirch (France)

    2015-07-28

    Procedures are described for extracting the vibration and libration parameters corresponding to a given set of TLS matrices and their simultaneous validation. Knowledge of these parameters allows the generation of structural ensembles corresponding to these matrices. The translation–libration–screw model first introduced by Cruickshank, Schomaker and Trueblood describes the concerted motions of atomic groups. Using TLS models can improve the agreement between calculated and experimental diffraction data. Because the T, L and S matrices describe a combination of atomic vibrations and librations, TLS models can also potentially shed light on molecular mechanisms involving correlated motions. However, this use of TLS models in mechanistic studies is hampered by the difficulties in translating the results of refinement into molecular movement or a structural ensemble. To convert the matrices into a constituent molecular movement, the matrix elements must satisfy several conditions. Refining the T, L and S matrix elements as independent parameters without taking these conditions into account may result in matrices that do not represent concerted molecular movements. Here, a mathematical framework and the computational tools to analyze TLS matrices, resulting in either explicit decomposition into descriptions of the underlying motions or a report of broken conditions, are described. The description of valid underlying motions can then be output as a structural ensemble. All methods are implemented as part of the PHENIX project.

  14. Prospective Validation of the Decalogue, a Set of Doctor-Patient Communication Recommendations to Improve Patient Illness Experience and Mood States within a Hospital Cardiologic Ambulatory Setting

    Directory of Open Access Journals (Sweden)

    Piercarlo Ballo

    2017-01-01

    Full Text Available Strategies to improve doctor-patient communication may have a beneficial impact on a patient's illness experience and mood, with potential favorable clinical effects. We prospectively tested the psychometric and clinical validity of the Decalogue, a tool utilizing 10 communication recommendations for patients and physicians. The Decalogue was administered to 100 consecutive patients referred for a cardiologic consultation, whereas 49 patients served as controls. The POMS-2 questionnaire was used to measure the total mood disturbance at the end of the consultation. Structural equation modeling showed high internal consistency (Cronbach's alpha 0.93), good test-retest reproducibility, and high validity of the psychometric construct (all > 0.80), suggesting a positive effect on patients' illness experience. The total mood disturbance was lower in the patients exposed to the Decalogue as compared to the controls (1.4±12.1 versus 14.8±27.6, p=0.0010). In an additional questionnaire, patients in the Decalogue group showed a trend towards a better understanding of their state of health (p=0.07). In a cardiologic ambulatory setting, the Decalogue shows good validity and reliability as a tool to improve patients' illness experience and could have a favorable impact on mood states. These effects might potentially improve patient engagement in care and adherence to therapy, as well as clinical outcome.

  15. QCD event generators with next-to-leading order matrix-elements and parton showers

    International Nuclear Information System (INIS)

    Kurihara, Y.; Fujimoto, J.; Ishikawa, T.; Kato, K.; Kawabata, S.; Munehisa, T.; Tanaka, H.

    2003-01-01

    A new method to construct event generators based on next-to-leading order QCD matrix elements and leading-logarithmic parton showers is proposed. Matrix elements of loop diagrams as well as those at tree level can be generated using an automatic system. Soft/collinear singularities are treated using a leading-log subtraction method. Higher-order resummation of the soft/collinear corrections by the parton shower method is combined with the NLO matrix element without any double counting in this method. An example of an event generator for the Drell-Yan process is given to demonstrate the validity of this method

  16. ANL Critical Assembly Covariance Matrix Generation - Addendum

    Energy Technology Data Exchange (ETDEWEB)

    McKnight, Richard D. [Argonne National Lab. (ANL), Argonne, IL (United States); Grimm, Karl N. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-13

    In March 2012, a report was issued on covariance matrices for Argonne National Laboratory (ANL) critical experiments. That report detailed the theory behind the calculation of covariance matrices and the methodology used to determine the matrices for a set of 33 ANL experimental set-ups. Since that time, three new experiments have been evaluated and approved. This report essentially updates the previous report by adding in these new experiments to the preceding covariance matrix structure.

  17. Adaption and validation of the Safety Attitudes Questionnaire for the Danish hospital setting

    Directory of Open Access Journals (Sweden)

    Kristensen S

    2015-02-01

    Full Text Available Solvejg Kristensen,1–3 Svend Sabroe,4 Paul Bartels,1,5 Jan Mainz,3,5 Karl Bang Christensen6 1The Danish Clinical Registries, Aarhus, Denmark; 2Department of Health Science and Technology, Aalborg University, Aalborg, Denmark; 3Aalborg University Hospital, Psychiatry, Aalborg, Denmark; 4Department of Public Health, Aarhus University, Aarhus, Denmark; 5Department of Clinical Medicine, Aalborg University, Aalborg, Denmark; 6Department of Biostatistics, University of Copenhagen, Copenhagen, Denmark Purpose: Measuring and developing a safe culture in health care is a focus point in creating highly reliable organizations that are successful in avoiding patient safety incidents where these could normally be expected. Questionnaires can be used to capture a snapshot of an employee's perceptions of patient safety culture. A commonly used instrument to measure safety climate is the Safety Attitudes Questionnaire (SAQ). The purpose of this study was to adapt the SAQ for use in Danish hospitals, assess its construct validity and reliability, and present benchmark data. Materials and methods: The SAQ was translated and adapted for the Danish setting (SAQ-DK). The SAQ-DK was distributed to 1,263 staff members from 31 in- and outpatient units (clinical areas) across five somatic and one psychiatric hospitals through meeting administration, hand delivery, and mailing. Construct validity and reliability were tested in a cross-sectional study. Goodness-of-fit indices from confirmatory factor analysis were reported along with inter-item correlations, Cronbach's alpha (α), and item and subscale scores. Results: Participation was 73.2% (N=925) of invited health care workers. Goodness-of-fit indices from the confirmatory factor analysis were: χ2=1496.76, P<0.001; CFI=0.901; RMSEA=0.053 (90% CI 0.050-0.056); probability RMSEA (p close)=0.057. Inter-scale correlations between the factors were moderate to high. The scale stress recognition had significant

  18. The QCD spin chain S matrix

    International Nuclear Information System (INIS)

    Ahn, Changrim; Nepomechie, Rafael I.; Suzuki, Junji

    2008-01-01

    Beisert et al. have identified an integrable SU(2,2) quantum spin chain which gives the one-loop anomalous dimensions of certain operators in large-N_c QCD. We derive a set of nonlinear integral equations (NLIEs) for this model, and compute the scattering matrix of the various (in particular, magnon) excitations

  19. Improved diagnostic accuracy of Alzheimer's disease by combining regional cortical thickness and default mode network functional connectivity: Validated in the Alzheimer's disease neuroimaging initiative set

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ji Eun; Park, Bum Woo; Kim, Sang Joon; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Jung; Oh, Joo Young; Shim, Woo Hyun [Dept. of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Lee, Jae Hong; Roh, Jee Hoon [University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of)

    2017-11-15

    To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity, and to validate this model's diagnostic accuracy in a validation set. Data from 98 subjects were retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network were extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions was more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Combining functional information with CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease.
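The combine-and-classify step can be sketched on synthetic feature vectors. The study used a support vector machine, but a nearest-centroid classifier is substituted here to keep the sketch dependency-free; all feature values and group means are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors: [CThk_superior_temporal, CThk_supramarginal, DMN_connectivity].
# Group means and spreads are illustrative only, not taken from the study.
n = 40
patients = rng.normal([2.2, 2.3, 0.30], 0.1, size=(n, 3))
controls = rng.normal([2.8, 2.9, 0.55], 0.1, size=(n, 3))

X = np.vstack([patients, controls])
y = np.array([1] * n + [0] * n)  # 1 = patient, 0 = healthy control

# Nearest-centroid classifier as a simple stand-in for the study's SVM.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(x):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

pred = np.array([predict(x) for x in X])
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

With well-separated synthetic groups the toy classifier separates them almost perfectly; the interesting question in the paper is how accuracy holds up on an independent validation set.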

  20. A test matrix sequencer for research test facility automation

    Science.gov (United States)

    Mccartney, Timothy P.; Emery, Edward F.

    1990-01-01

    The hardware and software configuration of a Test Matrix Sequencer, a general purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor-controlled system which is operated from a personal computer. The software program, which is the main element of the overall system, is interactive and menu-driven, with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.
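The sequencer's role, stepping through a table of set points and contact closures with a dwell time per matrix point, can be sketched as below; the channel names and values are invented for illustration:

```python
import time

# Minimal sketch of a test-matrix profile: each step holds a dwell time,
# analog set points for controllers, and digital contact closures for the
# data system. Channel names and values are hypothetical.
profile = [
    # (dwell_s, {analog channel: set point}, {relay: closed?})
    (0.01, {"fuel_valve": 10.0, "rig_speed": 500.0}, {"scan_start": True}),
    (0.01, {"fuel_valve": 12.5, "rig_speed": 750.0}, {"scan_start": False}),
    (0.01, {"fuel_valve": 15.0, "rig_speed": 900.0}, {"scan_start": True}),
]

log = []

def apply_step(analog, digital):
    """Stand-in for the microprocessor I/O: record what would be driven."""
    log.append((dict(analog), dict(digital)))

for dwell, analog, digital in profile:
    apply_step(analog, digital)
    time.sleep(dwell)  # hold the condition for the dwell period

print(f"executed {len(log)} matrix points")
```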

  1. Hardware matrix multiplier/accumulator for lattice gauge theory calculations

    International Nuclear Information System (INIS)

    Christ, N.H.; Terrano, A.E.

    1984-01-01

    The design and operating characteristics of a special-purpose matrix multiplier/accumulator are described. The device is connected through a standard interface to a host PDP11 computer. It provides a set of high-speed, matrix-oriented instructions which can be called from a program running on the host. The resulting operations accelerate the complex matrix arithmetic required for a class of Monte Carlo calculations currently of interest in high energy particle physics. A working version of the device is presently being used to carry out a pure SU(3) lattice gauge theory calculation using a PDP11/23 with a performance twice that obtainable on a VAX11/780. (orig.)

  2. The correlation matrix of Higgs rates at the LHC

    CERN Document Server

    Arbey, Alexandre; Mahmoudi, Farvah; Moreau, Grégory

    2016-11-17

    The imperfect knowledge of the Higgs boson LHC cross sections and decay rates constitutes a critical systematic uncertainty in the study of the Higgs boson properties. We show that the full covariance matrix between the Higgs rates can be determined from the most elementary sources of uncertainty by a direct application of probability theory. We evaluate the error magnitudes and full correlation matrix on the set of Higgs cross sections and partial decay widths at $\\sqrt{s}=7$, $8$, $13$ and $14$~TeV, which are provided in ancillary files. The impact of this correlation matrix on the global fits is illustrated with the latest $7$+$8$ TeV Higgs dataset.
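The construction described in the abstract, deriving the full covariance between rates from elementary sources of uncertainty, can be sketched with linear error propagation: rates that share nuisance sources become correlated. The sensitivity values below are invented; the actual Higgs computation involves many more sources and rates:

```python
import numpy as np

# Two hypothetical rates mu1, mu2 depending linearly on three elementary
# uncertainty sources theta ~ N(0, I) (e.g. PDF, alpha_s, scale).
# J[i, k] = d(mu_i)/d(theta_k); the entries are purely illustrative.
J = np.array([
    [0.05, 0.03, 0.02],   # rate 1 sensitivities
    [0.04, 0.03, -0.01],  # rate 2 sensitivities
])

cov = J @ J.T                     # propagated covariance between the rates
sig = np.sqrt(np.diag(cov))
corr = cov / np.outer(sig, sig)   # full correlation matrix

print(corr)
```

The shared positive sensitivities to the first two sources induce a strong positive correlation between the two rates, which is exactly the kind of structure a global fit must account for.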

  3. Biomineralization of a Self-assembled, Soft-Matrix Precursor: Enamel

    Science.gov (United States)

    Snead, Malcolm L.

    2015-04-01

    Enamel is the bioceramic covering of teeth, a composite tissue composed of hierarchically organized hydroxyapatite crystallites fabricated by cells under physiologic pH and temperature. Enamel material properties resist wear and fracture to serve a lifetime of chewing. Understanding the cellular and molecular mechanisms of enamel formation may allow a biology-inspired approach to material fabrication based on self-assembling proteins that control form and function. A genetic understanding of human disease draws insight from nature's errors by exposing critical fabrication events, which can be validated experimentally and duplicated in mice using genetic engineering to phenocopy the human disease so that it can be explored in detail. This approach led to an assessment of amelogenin protein self-assembly that, when altered, disrupts fabrication of the soft enamel protein matrix. A misassembled protein matrix precursor results in loss of cell-to-matrix contacts essential to fabrication and mineralization.

  4. ABCD Matrix Method a Case Study

    CERN Document Server

    Seidov, Zakir F; Yahalom, Asher

    2004-01-01

    In the Israeli Electrostatic Accelerator FEL, the distance between the accelerator's end and the wiggler's entrance is about 2.1 m, and the 1.4 MeV electron beam is transported through this space using four similar quadrupoles (FODO-channel). The transfer matrix method (ABCD matrix method) was used to simulate the beam transport; a set of programs was written in several programming languages (MATHEMATICA, MATLAB, MATHCAD, MAPLE) and reasonable agreement is demonstrated between experimental results and simulations. Comparison of the ABCD matrix method with direct "numerical experiments" using the EGUN, ELOP, and GPT programs, with and without taking into account space-charge effects, showed the agreement to be good enough as well. Also considered is the inverse problem of finding the emittance of the electron beam at the S1 screen position (before the FODO-channel), by using the spot image at the S2 screen position (after the FODO-channel) as a function of quad currents. Spot and beam at both screens are described as tilted eel...
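The ABCD transport described above amounts to multiplying the transfer matrices of the individual beamline elements (applied right to left). A minimal numpy sketch with a thin-lens FODO-like cell, using invented drift lengths and focal lengths rather than the actual EA-FEL values:

```python
import numpy as np

def drift(L):
    """ABCD matrix of a field-free drift of length L (metres)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (f > 0 focusing in this plane)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Illustrative FODO-like channel (NOT the EA-FEL parameters):
# drift, defocusing quad, drift, focusing quad, drift, applied right to left.
M = drift(0.4) @ thin_quad(-0.5) @ drift(0.5) @ thin_quad(0.5) @ drift(0.4)

# A ray (x, x') transforms as v_out = M @ v_in.
v_in = np.array([1e-3, 0.0])      # 1 mm offset, zero slope
v_out = M @ v_in

print("det M =", np.linalg.det(M))  # symplectic transport: det = 1
```

The unit determinant is a quick sanity check on any composed ABCD chain, since every drift and thin lens is unimodular.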

  5. Estimation of influential points in any data set from coefficient of determination and its leave-one-out cross-validated counterpart.

    Science.gov (United States)

    Tóth, Gergely; Bodai, Zsolt; Héberger, Károly

    2013-10-01

    Coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted by Q² or R²cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reversed aspect to determine uncommon points, i.e. influential points, in any data set. The term (1 − Q²)/(1 − R²) corresponds to the ratio of the predictive residual sum of squares and the residual sum of squares. The ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 − Q²)/(1 − R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns the model builders to verify the training set, to perform influence analysis or even to change to robust modeling.
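The (1 − Q²)/(1 − R²) term equals PRESS/RSS and can be computed from a single OLS fit via the leverage shortcut, without refitting n times. A minimal sketch on invented data with one deliberately influential (high-leverage) point appended:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear data plus one influential observation (values are illustrative).
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)
x = np.append(x, 30.0)   # high-leverage point, far from the bulk of the data
y = np.append(y, 20.0)   # ...and far off the trend line

X = np.column_stack([np.ones_like(x), x])

def press_and_rss(X, y):
    """Residual sum of squares and leave-one-out PRESS for an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rss = float(resid @ resid)
    # Leverage shortcut for LOO residuals: e_(i) = e_i / (1 - h_ii)
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    loo = resid / (1.0 - np.diag(H))
    return float(loo @ loo), rss

press, rss = press_and_rss(X, y)
tss = float(((y - y.mean()) ** 2).sum())
r2 = 1.0 - rss / tss
q2 = 1.0 - press / tss
ratio = (1.0 - q2) / (1.0 - r2)   # identical to PRESS / RSS
print(f"R2={r2:.3f}  Q2={q2:.3f}  (1-Q2)/(1-R2)={ratio:.2f}")
```

Because each leave-one-out residual is the ordinary residual inflated by 1/(1 − h_ii), the ratio is always at least 1, and it grows when influential points are present — which is the signal the proposed F test exploits.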

  6. Goal setting as an outcome measure: A systematic review.

    Science.gov (United States)

    Hurn, Jane; Kneebone, Ian; Cropley, Mark

    2006-09-01

    Goal achievement has been considered to be an important measure of outcome by clinicians working with patients in physical and neurological rehabilitation settings. This systematic review was undertaken to examine the reliability, validity and sensitivity of goal setting and goal attainment scaling approaches when used with working age and older people. To review the reliability, validity and sensitivity of both goal setting and goal attainment scaling when employed as an outcome measure within a physical and neurological working age and older person rehabilitation environment, by examining the research literature covering the 36 years since goal-setting theory was proposed. Data sources included a computer-aided literature search of published studies examining the reliability, validity and sensitivity of goal setting/goal attainment scaling, with further references sourced from articles obtained through this process. There is strong evidence for the reliability, validity and sensitivity of goal attainment scaling. Empirical support was found for the validity of goal setting but research demonstrating its reliability and sensitivity is limited. Goal attainment scaling appears to be a sound measure for use in physical rehabilitation settings with working age and older people. Further work needs to be carried out with goal setting to establish its reliability and sensitivity as a measurement tool.

  7. SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.

    Science.gov (United States)

    Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen

    2012-07-23

    We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.

  8. Evaluation of the validity of job exposure matrix for psychosocial factors at work.

    Directory of Open Access Journals (Sweden)

    Svetlana Solovieva

    Full Text Available To study the performance of a developed job exposure matrix (JEM) for the assessment of psychosocial factors at work in terms of accuracy, possible misclassification bias and predictive ability to detect known associations with depression and low back pain (LBP). We utilized two large population surveys (the Health 2000 Study and the Finnish Work and Health Surveys), one to construct the JEM and another to test matrix performance. In the first study, information on job demands, job control, monotonous work and social support at work was collected via face-to-face interviews. Job strain was operationalized based on job demands and job control using the quadrant approach. In the second study, the sensitivity and specificity were estimated applying a Bayesian approach. The magnitude of misclassification error was examined by calculating the biased odds ratios as a function of the sensitivity and specificity of the JEM and fixed true prevalence and odds ratios. Finally, we adjusted for misclassification error the observed associations between JEM measures and selected health outcomes. The matrix showed a good accuracy for job control and job strain, while its performance for other exposures was relatively low. Without correction for exposure misclassification, the JEM was able to detect the association between job strain and depression in men and between monotonous work and LBP in both genders. Our results suggest that the JEM more accurately identifies occupations with low control and high strain than those with high demands or low social support. Overall, the present JEM is a useful source of job-level psychosocial exposures in epidemiological studies lacking individual-level exposure information. Furthermore, we showed the applicability of a Bayesian approach in the evaluation of the performance of the JEM in a situation where, in practice, no gold standard of exposure assessment exists.

  9. Evaluation of the validity of job exposure matrix for psychosocial factors at work.

    Science.gov (United States)

    Solovieva, Svetlana; Pensola, Tiina; Kausto, Johanna; Shiri, Rahman; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira

    2014-01-01

    To study the performance of a developed job exposure matrix (JEM) for the assessment of psychosocial factors at work in terms of accuracy, possible misclassification bias and predictive ability to detect known associations with depression and low back pain (LBP). We utilized two large population surveys (the Health 2000 Study and the Finnish Work and Health Surveys), one to construct the JEM and another to test matrix performance. In the first study, information on job demands, job control, monotonous work and social support at work was collected via face-to-face interviews. Job strain was operationalized based on job demands and job control using quadrant approach. In the second study, the sensitivity and specificity were estimated applying a Bayesian approach. The magnitude of misclassification error was examined by calculating the biased odds ratios as a function of the sensitivity and specificity of the JEM and fixed true prevalence and odds ratios. Finally, we adjusted for misclassification error the observed associations between JEM measures and selected health outcomes. The matrix showed a good accuracy for job control and job strain, while its performance for other exposures was relatively low. Without correction for exposure misclassification, the JEM was able to detect the association between job strain and depression in men and between monotonous work and LBP in both genders. Our results suggest that JEM more accurately identifies occupations with low control and high strain than those with high demands or low social support. Overall, the present JEM is a useful source of job-level psychosocial exposures in epidemiological studies lacking individual-level exposure information. Furthermore, we showed the applicability of a Bayesian approach in the evaluation of the performance of the JEM in a situation where, in practice, no gold standard of exposure assessment exists.
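The calculation described above, the biased odds ratio as a function of JEM sensitivity (se) and specificity (sp) for fixed true prevalence and odds ratio, can be sketched for non-differential misclassification. All prevalences and the true OR below are illustrative, not the paper's values:

```python
# Sketch of exposure-misclassification bias: the odds ratio observed through
# an imperfect JEM, given its sensitivity (se) and specificity (sp).

def observed_prevalence(p_true, se, sp):
    """Apparent exposure prevalence after non-differential misclassification."""
    return se * p_true + (1.0 - sp) * (1.0 - p_true)

def odds(p):
    return p / (1.0 - p)

def biased_or(or_true, p_controls, se, sp):
    """Observed OR for a fixed true OR and true exposure prevalence in controls."""
    odds_cases = or_true * odds(p_controls)      # true exposure odds among cases
    p_cases = odds_cases / (1.0 + odds_cases)
    p_cases_obs = observed_prevalence(p_cases, se, sp)
    p_controls_obs = observed_prevalence(p_controls, se, sp)
    return odds(p_cases_obs) / odds(p_controls_obs)

or_obs = biased_or(or_true=2.0, p_controls=0.2, se=0.7, sp=0.8)
print(f"true OR = 2.0, observed OR = {or_obs:.2f}")  # attenuated toward 1
```

Non-differential misclassification of a binary exposure biases the odds ratio toward the null, which is why the paper re-inflates the observed associations once se and sp are estimated.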

  10. Recording and Validation of Audiovisual Expressions by Faces and Voices

    Directory of Open Access Journals (Sweden)

    Sachiko Takagi

    2011-10-01

    Full Text Available This study aims to further examine the cross-cultural differences in multisensory emotion perception between Western and East Asian people. In this study, we recorded audiovisual stimulus videos of Japanese and Dutch actors saying a neutral phrase with one of the basic emotions. Then we conducted a validation experiment of the stimuli. In the first part (facial expression), participants watched a silent video of the actors and judged what kind of emotion the actor was expressing by choosing among 6 options (i.e., happiness, anger, disgust, sadness, surprise, and fear). In the second part (vocal expression), they listened to the audio part of the same videos without video images while the task was the same. We analyzed their categorization responses based on accuracy and confusion matrix and created a controlled audiovisual stimulus set.

  11. Evaluation of matrix-assisted laser desorption/ionization time of flight mass spectrometry for the identification of ceratopogonid and culicid larvae.

    Science.gov (United States)

    Steinmann, I C; Pflüger, V; Schaffner, F; Mathis, A; Kaufmann, C

    2013-03-01

    Matrix-assisted laser desorption/ionization time of flight mass spectrometry (MALDI-TOF MS) was evaluated for the rapid identification of ceratopogonid larvae. Optimal sample preparation as evaluated with laboratory-reared biting midges Culicoides nubeculosus was the homogenization of gut-less larvae in 10% formic acid, and analysis of 0.2 mg/ml crude protein homogenate mixed with SA matrix at a ratio of 1:1.5. Using 5 larvae each of 4 ceratopogonid species (C. nubeculosus, C. obsoletus, C. decor, and Dasyhelea sp.) and of 2 culicid species (Aedes aegypti, Ae. japonicus), biomarker mass sets between 27 and 33 masses were determined. In a validation study, 67 larvae belonging to the target species were correctly identified by automated database-based identification (91%) or manual full comparison (9%). Four specimens of non-target species did not yield identification. As anticipated for holometabolous insects, the biomarker mass sets of adults cannot be used for the identification of larvae, and vice versa, because they share only very few similar masses as shown for C. nubeculosus, C. obsoletus, and Ae. japonicus. Thus, protein profiling by MALDI-TOF as a quick, inexpensive and accurate alternative tool is applicable to identify insect larvae of vector species collected in the field.
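The database-based identification step, matching an observed larval spectrum against per-species biomarker mass sets within a mass tolerance, can be sketched as below. The species names echo the study, but the masses, tolerance and scoring rule are invented for illustration:

```python
# Hypothetical biomarker mass sets (Da) per species -- invented values.
REFERENCE = {
    "C. nubeculosus": [4502.1, 5210.8, 6133.0, 7842.5],
    "Ae. aegypti":    [4380.2, 5099.7, 6470.3, 8011.9],
}

def identify(observed, tolerance=2.0):
    """Score each species by how many of its biomarker masses are matched
    by an observed peak within the tolerance; return the best species."""
    def score(masses):
        return sum(
            any(abs(o - m) <= tolerance for o in observed) for m in masses
        )
    scores = {species: score(masses) for species, masses in REFERENCE.items()}
    return max(scores, key=scores.get), scores

spectrum = [4502.9, 5211.1, 6132.2, 9000.0]  # simulated larval peak list
species, scores = identify(spectrum)
print(species, scores)
```

A production pipeline would add intensity weighting and a minimum-score threshold so that non-target species (like the four unidentified specimens in the study) are rejected rather than forced onto the nearest entry.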

  12. Investigating the incremental validity of cognitive variables in early mathematics screening.

    Science.gov (United States)

    Clarke, Ben; Shanley, Lina; Kosty, Derek; Baker, Scott K; Cary, Mari Strand; Fien, Hank; Smolkowski, Keith

    2018-03-26

    The purpose of this study was to investigate the incremental validity of a set of domain general cognitive measures added to a traditional screening battery of early numeracy measures. The sample consisted of 458 kindergarten students of whom 285 were designated as severely at-risk for mathematics difficulty. Hierarchical multiple regression results indicated that Wechsler Abbreviated Scales of Intelligence (WASI) Matrix Reasoning and Vocabulary subtests, and Digit Span Forward and Backward measures explained a small, but unique portion of the variance in kindergarten students' mathematics performance on the Test of Early Mathematics Ability-Third Edition (TEMA-3) when controlling for Early Numeracy Curriculum Based Measurement (EN-CBM) screening measures (R² change = .01). Furthermore, the incremental validity of the domain general cognitive measures was relatively stronger for the severely at-risk sample. We discuss results from the study in light of instructional decision-making and note the findings do not justify adding domain general cognitive assessments to mathematics screening batteries. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
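The incremental-validity logic, comparing the R² of the screening battery alone against the battery plus domain general measures, can be sketched as a two-step hierarchical regression on synthetic data (all variable names and effect sizes below are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Synthetic predictors: a screening score plus two "domain general" measures.
screen = rng.normal(size=n)
matrix_reasoning = rng.normal(size=n)
digit_span = rng.normal(size=n)
outcome = 1.0 * screen + 0.15 * matrix_reasoning + rng.normal(size=n)

def r_squared(X, y):
    """OLS R^2 with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Step 1: screening measures only; Step 2: add the cognitive measures.
r2_base = r_squared(np.column_stack([screen]), outcome)
r2_full = r_squared(np.column_stack([screen, matrix_reasoning, digit_span]), outcome)
delta_r2 = r2_full - r2_base
print(f"R2 base={r2_base:.3f}  full={r2_full:.3f}  R2 change={delta_r2:.3f}")
```

The R² change is non-negative by construction in OLS; the study's question is whether the observed change (.01) is large enough to justify the extra testing burden, and the authors conclude it is not.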

  13. Neutrosophic Soft Matrix and its application to Decision Making

    Directory of Open Access Journals (Sweden)

    Tuhin Bera

    2017-12-01

    Full Text Available The motivation of this paper is to extend the concept of neutrosophic soft matrix (NSM) theory. Some basic definitions of classical matrix theory in the parlance of neutrosophic soft set theory have been presented with proper examples. Then, a theoretical study of some traditional operations of NSM has been developed. Finally, a decision making theory has been proposed by developing an appropriate solution algorithm, namely, the score function algorithm, and it has been illustrated by suitable examples.

  14. A Note on the Eigensystem of the Covariance Matrix of Dichotomous Guttman Items.

    Science.gov (United States)

    Davis-Stober, Clintin P; Doignon, Jean-Paul; Suck, Reinhard

    2015-01-01

    We consider the covariance matrix for dichotomous Guttman items under a set of uniformity conditions, and obtain closed-form expressions for the eigenvalues and eigenvectors of the matrix. In particular, we describe the eigenvalues and eigenvectors of the matrix in terms of trigonometric functions of the number of items. Our results parallel those of Zwick (1987) for the correlation matrix under the same uniformity conditions. We provide an explanation for certain properties of principal components under Guttman scalability which were first reported by Guttman (1950).
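A common construction satisfying Guttman scalability, assumed here for illustration and not quoted from the paper, takes equally spaced item popularities p_i = i/(n+1); perfect scalability then makes the joint "pass" probability of two items min(p_i, p_j). The resulting covariance matrix can be eigendecomposed numerically:

```python
import numpy as np

n = 6                                  # number of items (illustrative)
p = np.arange(1, n + 1) / (n + 1)      # equally spaced item popularities

# For perfectly scalable (Guttman) dichotomous items the joint pass
# probability is min(p_i, p_j), giving Cov = min(p_i, p_j) - p_i * p_j.
# This construction is an assumption of the sketch, not a quote from the paper.
C = np.minimum.outer(p, p) - np.outer(p, p)

evals, evecs = np.linalg.eigh(C)       # ascending eigenvalues, orthonormal vectors
print("eigenvalues:", np.round(evals[::-1], 4))
```

The paper's contribution is a closed form for these eigenvalues and eigenvectors in terms of trigonometric functions of n; the numerical decomposition above is a convenient way to check any such formula for small n.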

  15. Data fusion in metabolomics using coupled matrix and tensor factorizations

    DEFF Research Database (Denmark)

    Evrim, Acar Ataman; Bro, Rasmus; Smilde, Age Klaas

    2015-01-01

    of heterogeneous (i.e., in the form of higher order tensors and matrices) data sets with shared/unshared factors. In order to jointly analyze such heterogeneous data sets, we formulate data fusion as a coupled matrix and tensor factorization (CMTF) problem, which has already proved useful in many data mining...

  16. Matrix form of Legendre polynomials for solving linear integro-differential equations of high order

    Science.gov (United States)

    Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.

    2017-04-01

    This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on Legendre polynomials are used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transform the matrix equation into a system of linear algebraic equations. The latter is solved by the Gauss elimination method. The accuracy and validity of this method are discussed by solving two numerical examples and by comparisons with wavelet methods.

  17. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    Science.gov (United States)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
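For the budget-only constraint, the Lagrange multiplier method yields the classical minimum-variance weights w* = C⁻¹1 / (1ᵀC⁻¹1). A sketch with an invented equicorrelated covariance matrix, using identical asset variances as in the paper's setting:

```python
import numpy as np

# Equicorrelated covariance with identical variances (illustrative numbers):
# diagonal 0.04, off-diagonal 0.01, for 4 assets.
var = 0.04
C = np.full((4, 4), 0.01) + np.eye(4) * (var - 0.01)

# Minimum investment risk subject to sum(w) = 1, via a Lagrange multiplier:
#   minimize w' C w - lam (1'w - 1)  =>  w* = C^{-1} 1 / (1' C^{-1} 1)
ones = np.ones(4)
Cinv_1 = np.linalg.solve(C, ones)
w = Cinv_1 / (ones @ Cinv_1)

risk = w @ C @ w                      # minimized investment risk
equal = np.full(4, 0.25)
print("weights:", w, " risk:", risk, " equal-weight risk:", equal @ C @ equal)
```

With identical variances and uniform correlations the optimum degenerates to equal weights, which is exactly the symmetric case the paper analyzes with replica and random matrix methods before perturbing it.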

  18. Atmospheric correction at AERONET locations: A new science and validation data set

    Science.gov (United States)

    Wang, Y.; Lyapustin, A.I.; Privette, J.L.; Morisette, J.T.; Holben, B.

    2009-01-01

    This paper describes an Aerosol Robotic Network (AERONET)-based Surface Reflectance Validation Network (ASRVN) and its data set of spectral surface bidirectional reflectance and albedo based on Moderate Resolution Imaging Spectroradiometer (MODIS) TERRA and AQUA data. The ASRVN is an operational data collection and processing system. It receives 50 × 50 km² subsets of MODIS level 1B (L1B) data from the MODIS adaptive processing system and AERONET aerosol and water-vapor information. Then, it performs an atmospheric correction (AC) for about 100 AERONET sites based on accurate radiative-transfer theory with complex quality control of the input data. The ASRVN processing software consists of an L1B data gridding algorithm, a new cloud-mask (CM) algorithm based on a time-series analysis, and an AC algorithm using ancillary AERONET aerosol and water-vapor data. The AC is achieved by fitting the MODIS top-of-atmosphere measurements, accumulated for a 16-day interval, with theoretical reflectance parameterized in terms of the coefficients of the Li Sparse-Ross Thick (LSRT) model of the bidirectional reflectance factor (BRF). The ASRVN takes several steps to ensure high quality of results: 1) the filtering of opaque clouds by a CM algorithm; 2) the development of an aerosol filter to filter residual semitransparent and subpixel clouds, as well as cases with high inhomogeneity of aerosols in the processing area; 3) imposing the requirement of the consistency of the new solution with previously retrieved BRF and albedo; 4) rapid adjustment of the 16-day retrieval to the surface changes using the last day of measurements; and 5) development of a seasonal backup spectral BRF database to increase data coverage. The ASRVN provides a gapless or near-gapless coverage for the processing area. The gaps, caused by clouds, are filled most naturally with the latest solution for a given pixel.
The ASRVN products include three parameters of the LSRT model (kL, kG, and kV), surface albedo

  19. Quantum correlations and Nash equilibria of a bi-matrix game

    International Nuclear Information System (INIS)

    Iqbal, Azhar

    2004-01-01

    Playing a symmetric bi-matrix game is usually physically implemented by sharing pairs of 'objects' between two players. A new setting is proposed that explicitly shows the effects of quantum correlations between the pairs on the structure of payoff relations and the 'solutions' of the game. The setting allows a re-expression of the game such that the players play the classical game when their moves are performed on pairs of objects having correlations that satisfy Bell's inequalities. If the players receive pairs having quantum correlations, the resulting game cannot be considered another classical symmetric bi-matrix game. Also, the Nash equilibria of the game are found to be decided by the nature of the correlations. (letter to the editor)

  20. Utility of the MMPI-2-RF (Restructured Form) Validity Scales in Detecting Malingering in a Criminal Forensic Setting: A Known-Groups Design

    Science.gov (United States)

    Sellbom, Martin; Toomey, Joseph A.; Wygant, Dustin B.; Kucharski, L. Thomas; Duncan, Scott

    2010-01-01

    The current study examined the utility of the recently released Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) validity scales to detect feigned psychopathology in a criminal forensic setting. We used a known-groups design with the Structured Interview of Reported Symptoms (SIRS;…

  1. The Performance Analysis Based on SAR Sample Covariance Matrix

    Directory of Open Access Journals (Sweden)

    Esra Erten

    2012-03-01

    Full Text Available Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in different applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix in terms of multi-channel SAR images is simplified for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
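The quantity under study, the maximum eigenvalue of a sample covariance matrix built from zero-mean circular complex Gaussian channel data, can be simulated directly. The channel covariance below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

p, n = 3, 64   # channels (e.g. polarimetric) and number of looks/samples

# Known channel covariance (illustrative values), used to generate
# correlated zero-mean circular complex Gaussian scattering vectors.
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.2],
                  [0.0, 0.2, 0.5]], dtype=complex)
L = np.linalg.cholesky(Sigma)
z = (rng.normal(size=(p, n)) + 1j * rng.normal(size=(p, n))) / np.sqrt(2)
k = L @ z                               # correlated multi-channel samples

# Sample covariance matrix (1/n) sum_i k_i k_i^H: complex-Wishart distributed.
C_hat = (k @ k.conj().T) / n

evals = np.linalg.eigvalsh(C_hat)       # real, since C_hat is Hermitian
lam_max = evals[-1]
print(f"max eigenvalue of sample covariance: {lam_max:.3f}")
```

Repeating this over many realizations gives the empirical distribution of the maximum eigenvalue against which the paper's simplified analytical expressions can be checked.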

  2. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.
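The paper derives its near-optimum regularizer from random matrix theory; as a much simpler stand-in, the sketch below shows the generic effect of a Tikhonov-type (ridge) regularizer on an ill-conditioned least-squares problem. The regularization weight here is arbitrary, not the paper's optimized choice:

```python
import numpy as np

rng = np.random.default_rng(4)

m, n_ = 50, 20
A = rng.normal(size=(m, n_))
A[:, -1] = A[:, 0] + 1e-6 * rng.normal(size=m)   # near-collinear columns -> ill-conditioned
x_true = rng.normal(size=n_)
y = A @ x_true + 0.1 * rng.normal(size=m)

def regularized_ls(A, y, gamma):
    """Tikhonov-regularized least squares: solve (A'A + gamma I) x = A'y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ y)

x_ls = regularized_ls(A, y, 0.0)     # plain LS via the normal equations
x_reg = regularized_ls(A, y, 1.0)    # regularized solution

print("||x_ls|| =", np.linalg.norm(x_ls), " ||x_reg|| =", np.linalg.norm(x_reg))
```

The plain LS solution blows up along the near-null direction of the model matrix, while the regularizer tames it; the paper's contribution is choosing the perturbation (here played by gamma) to minimize the estimator's mean-squared error rather than picking it by hand.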

  3. The correlation matrix of Higgs rates at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Arbey, Alexandre [Univ Lyon, Univ Lyon 1, ENS de Lyon, CNRS,Centre de Recherche Astrophysique de Lyon UMR5574,F-69230 Saint-Genis-Laval (France); Theoretical Physics Department, CERN,CH-1211 Geneva 23 (Switzerland); Fichet, Sylvain [ICTP-SAIFR & IFT-UNESP,Rua Dr. Bento Teobaldo Ferraz 271, Sao Paulo (Brazil); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, ENS de Lyon, CNRS,Centre de Recherche Astrophysique de Lyon UMR5574,F-69230 Saint-Genis-Laval (France); Theoretical Physics Department, CERN,CH-1211 Geneva 23 (Switzerland); Moreau, Grégory [Laboratoire de Physique Théorique, CNRS, Université Paris-Sud 11, Bât. 210, F-91405 Orsay Cedex (France)

    2016-11-17

    The imperfect knowledge of the Higgs boson decay rates and cross sections at the LHC constitutes a critical systematic uncertainty in the study of the Higgs boson properties. We show that the full covariance matrix between the Higgs rates can be determined from the most elementary sources of uncertainty by a direct application of probability theory. We evaluate the error magnitudes and full correlation matrix on the set of Higgs cross sections and branching ratios at √s=7, 8, 13 and 14 TeV, which are provided in ancillary files. The impact of this correlation matrix on the global fits is illustrated with the latest 7+8 TeV Higgs dataset.

  4. A Rank-Constrained Matrix Representation for Hypergraph-Based Subspace Clustering

    Directory of Open Access Journals (Sweden)

    Yubao Sun

    2015-01-01

    Full Text Available This paper presents a novel, rank-constrained matrix representation combined with hypergraph spectral analysis to enable the recovery of the original subspace structures of corrupted data. Real-world data are frequently corrupted with both sparse error and noise. Our matrix decomposition model separates the low-rank, sparse error, and noise components from the data in order to enhance robustness to the corruption. In order to obtain the desired rank representation of the data within a dictionary, our model directly utilizes rank constraints by restricting the upper bound of the rank range. An alternative projection algorithm is proposed to estimate the low-rank representation and separate the sparse error from the data matrix. To further capture the complex relationship between data distributed in multiple subspaces, we use hypergraph to represent the data by encapsulating multiple related samples into one hyperedge. The final clustering result is obtained by spectral decomposition of the hypergraph Laplacian matrix. Validation experiments on the Extended Yale Face Database B, AR, and Hopkins 155 datasets show that the proposed method is a promising tool for subspace clustering.

  5. A note on the eigensystem of the covariance matrix of dichotomous Guttman items

    Directory of Open Access Journals (Sweden)

    Clintin P Davis-Stober

    2015-12-01

    We consider the sample covariance matrix for dichotomous Guttman items under a set of uniformity conditions, and obtain closed-form expressions for the eigenvalues and eigenvectors of the matrix. In particular, we describe the eigenvalues and eigenvectors of the matrix in terms of trigonometric functions of the number of items. Our results parallel those of Zwick (1987) for the correlation matrix under the same uniformity conditions. We provide an explanation for certain properties of principal components under Guttman scalability first reported by Guttman (1950).
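
    The eigensystem described above can be explored numerically. The simulation below builds perfectly Guttman-scalable dichotomous items under a simple uniformity-style condition and eigendecomposes their sample covariance matrix; it is an illustrative sketch, not the paper's closed-form derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_subjects = 6, 10000

# Perfect Guttman scaling: a subject with latent trait t endorses item j
# iff t exceeds the j-th equally spaced threshold.
traits = rng.uniform(0, 1, n_subjects)
thresholds = (np.arange(n_items) + 1) / (n_items + 1)
X = (traits[:, None] > thresholds[None, :]).astype(float)

cov = np.cov(X, rowvar=False)           # sample covariance of the items
eigvals, eigvecs = np.linalg.eigh(cov)  # eigensystem, ascending order
print(np.round(eigvals[::-1], 4))       # eigenvalues, largest first
```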

  6. Spatiotemporal matrix image formation for programmable ultrasound scanners

    Science.gov (United States)

    Berthon, Beatrice; Morichau-Beauchant, Pierre; Porée, Jonathan; Garofalakis, Anikitos; Tavitian, Bertrand; Tanter, Mickael; Provost, Jean

    2018-02-01

    As programmable ultrasound scanners become more common in research laboratories, it is increasingly important to develop robust software-based image formation algorithms that can be derived in a straightforward fashion for different types of probes and sequences, with a small risk of error during implementation. In this work, we argue that as computational power keeps increasing, it is becoming practical to directly implement an approximation to the matrix operator linking reflector point targets to the corresponding radiofrequency signals via thoroughly validated and widely available simulation software. Once such a spatiotemporal forward-problem matrix is constructed, standard and thus highly optimized inversion procedures can be leveraged to achieve very high quality images in real time. Specifically, we show that spatiotemporal matrix image formation produces images of similar or enhanced quality compared with standard delay-and-sum approaches in phantoms and in vivo, and that this approach can form images even with non-conventional probe designs for which adapted image formation algorithms are not readily available.
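
    The core idea — build a forward matrix once, then reuse standard inversion machinery — can be sketched in a few lines. The forward operator below is random, standing in for a simulated spatiotemporal matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_pixels = 200, 50
A = rng.standard_normal((n_samples, n_pixels))  # stand-in forward operator

truth = np.zeros(n_pixels)
truth[[7, 23, 41]] = 1.0                # three point reflectors
rf = A @ truth                          # noiseless "RF data"

# Standard, highly optimized least-squares inversion recovers the image.
image, *_ = np.linalg.lstsq(A, rf, rcond=None)
print(np.flatnonzero(np.abs(image) > 0.5))
```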

  7. Implementing the Science Assessment Standards: Developing and validating a set of laboratory assessment tasks in high school biology

    Science.gov (United States)

    Saha, Gouranga Chandra

    Very often a number of factors, especially time, space and money, deter many science educators from using inquiry-based, hands-on laboratory practical tasks as alternative assessment instruments in science. A shortage of valid inquiry-based laboratory tasks for high school biology has been cited. Driven by this need, this study addressed the following three research questions: (1) How can laboratory-based performance tasks be designed and developed that are doable by the students for whom they are designed/written? (2) Do student responses to the laboratory-based performance tasks validly represent at least some of the intended process skills that new biology learning goals want students to acquire? (3) Are the laboratory-based performance tasks psychometrically consistent as individual tasks and as a set? To answer these questions, three tasks were used from the six biology tasks initially designed and developed by an iterative process of trial testing. Analyses of data from 224 students showed that developing performance-based laboratory tasks that are doable by all students requires a careful and iterative process. Although the students demonstrated more skill in performing than in planning and reasoning, their performances at the item level were very poor for some items. Possible reasons for the poor performances have been discussed and suggestions on how to remediate the deficiencies have been made. Empirical evidence for the validity and reliability of the instrument has been presented from both the classical and the modern validity-criteria points of view. Limitations of the study have been identified. Finally, implications of the study and directions for further research have been discussed.

  8. Development of the Human Factors Skills for Healthcare Instrument: a valid and reliable tool for assessing interprofessional learning across healthcare practice settings.

    Science.gov (United States)

    Reedy, Gabriel B; Lavelle, Mary; Simpson, Thomas; Anderson, Janet E

    2017-10-01

    A central feature of clinical simulation training is human factors skills, providing staff with the social and cognitive skills to cope with demanding clinical situations. Although these skills are critical to safe patient care, assessing their learning is challenging. This study aimed to develop, pilot and evaluate a valid and reliable structured instrument to assess human factors skills, which can be used pre- and post-simulation training, and is relevant across a range of healthcare professions. Through consultation with a multi-professional expert group, we developed and piloted a 39-item survey with 272 healthcare professionals attending training courses across two large simulation centres in London, one specialising in acute care and one in mental health, both serving healthcare professionals working across acute and community settings. Following psychometric evaluation, the final 12-item instrument was evaluated with a second sample of 711 trainees. Exploratory factor analysis revealed a 12-item, one-factor solution with good internal consistency (α=0.92). The instrument had discriminant validity, with newly qualified trainees scoring significantly lower than experienced trainees (t(98)=4.88, p < 0.001). The Human Factors Skills for Healthcare Instrument provides a reliable and valid method of assessing trainees' human factors skills self-efficacy across acute and mental health settings. This instrument has the potential to improve the assessment and evaluation of human factors skills learning in both uniprofessional and interprofessional clinical simulation training.

  9. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    Directory of Open Access Journals (Sweden)

    Daan Nieboer

    External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures for judging changes in the c-statistic from the development to the external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in a validation set. We concentrated on two scenarios: (1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and (2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development sets. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development sets were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation populations. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
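
    The c-statistic at the heart of this comparison is a concordance probability, and can be computed directly: the fraction of event/non-event pairs in which the event receives the higher predicted risk, with ties counted as half. The outcomes and predicted risks below are illustrative:

```python
import numpy as np

y = np.array([1, 0, 1, 0, 0, 1])             # observed outcomes
p = np.array([0.9, 0.2, 0.6, 0.6, 0.1, 0.8])  # predicted risks

pos = p[y == 1]                 # risks for events
neg = p[y == 0]                 # risks for non-events
pairs = pos[:, None] - neg[None, :]
c = (np.sum(pairs > 0) + 0.5 * np.sum(pairs == 0)) / pairs.size
print(c)
```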

  10. The validity of visual acuity assessment using mobile technology devices in the primary care setting.

    Science.gov (United States)

    O'Neill, Samuel; McAndrew, Darryl J

    2016-04-01

    The assessment of visual acuity is indicated in a number of clinical circumstances. It is commonly conducted through the use of a Snellen wall chart. Mobile technology developments and adoption rates by clinicians may potentially provide more convenient methods of assessing visual acuity. Limited data exist on the validity of these devices and applications. The objective of this study was to evaluate the assessment of distance visual acuity using mobile technology devices against the commonly used 3-metre Snellen chart in a primary care setting. A prospective quantitative comparative study was conducted at a regional medical practice. The visual acuity of 60 participants was assessed on a Snellen wall chart and two mobile technology devices (iPhone, iPad). Visual acuity intervals were converted to logarithm of the minimum angle of resolution (logMAR) scores and subjected to intraclass correlation coefficient (ICC) assessment. The results show a high level of general agreement between testing modalities (ICC 0.917 with a 95% confidence interval of 0.887-0.940). The high level of agreement of visual acuity results between the Snellen wall chart and both mobile technology devices suggests that clinicians can use this technology with confidence in the primary care setting.
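
    Comparing acuity across testing modalities requires putting Snellen fractions on the logMAR scale, as in the study. A minimal sketch of that conversion (logMAR = log10 of denominator over numerator, so 6/6 or 20/20 maps to 0.0 and worse acuity is positive):

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g. 6/12) to a logMAR score."""
    return math.log10(denominator / numerator)

print(round(snellen_to_logmar(6, 6), 3))    # 6/6 (20/20) -> 0.0
print(round(snellen_to_logmar(6, 12), 3))   # 6/12 -> 0.301
print(round(snellen_to_logmar(20, 40), 3))  # same acuity, US notation
```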

  11. Levels of Circulating MMCN-151, a Degradation Product of Mimecan, Reflect Pathological Extracellular Matrix Remodeling in Apolipoprotein E Knockout Mice

    DEFF Research Database (Denmark)

    Barascuk, N; Vassiliadis, E; Zheng, Qiuju

    2011-01-01

    Arterial extracellular matrix (ECM) remodeling by matrix metalloproteinases (MMPs) is one of the major hallmarks of atherosclerosis. Mimecan, also known as osteoglycin, has been implicated in the integrity of the ECM. This study assessed the validity of an enzyme-linked immunosorbent assay (ELISA...

  12. Validity of M-3Y force equivalent G-matrix elements for calculations of the nuclear structure in heavy mass region

    International Nuclear Information System (INIS)

    Cheng Lan; Huang Weizhi; Zhou Baosen

    1996-01-01

    Using the matrix elements of the M-3Y force as the equivalent G-matrix elements, the spectra of 210Pb, 206Pb, 206Hg and 210Po are calculated in the framework of the Folded Diagram Method. The results show that such equivalent matrix elements are suitable for microscopic calculations of the nuclear structure in the heavy mass region.

  13. Reliability and Validity of Survey Instruments to Measure Work-Related Fatigue in the Emergency Medical Services Setting: A Systematic Review.

    Science.gov (United States)

    Patterson, P Daniel; Weaver, Matthew D; Fabio, Anthony; Teasley, Ellen M; Renn, Megan L; Curtis, Brett R; Matthews, Margaret E; Kroemer, Andrew J; Xun, Xiaoshuang; Bizhanova, Zhadyra; Weiss, Patricia M; Sequeira, Denisse J; Coppler, Patrick J; Lang, Eddy S; Higgins, J Stephen

    2018-02-15

    This study sought to systematically search the literature to identify reliable and valid survey instruments for fatigue measurement in the Emergency Medical Services (EMS) occupational setting. A systematic review study design was used to search six databases, including one website. The research question guiding the search was developed a priori and registered with the PROSPERO database of systematic reviews: "Are there reliable and valid instruments for measuring fatigue among EMS personnel?" (2016:CRD42016040097). The primary outcome of interest was criterion-related validity. Important outcomes of interest included reliability (e.g., internal consistency) and indicators of sensitivity and specificity. Members of the research team independently screened records from the databases. Full-text articles were evaluated by adapting the Bolster and Rourke system for categorizing findings of systematic reviews, and the data abstracted from the body of literature were rated as favorable, unfavorable, mixed/inconclusive, or no impact. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) methodology was used to evaluate the quality of evidence. The search strategy yielded 1,257 unique records. Thirty-four unique experimental and non-experimental studies were determined relevant following full-text review. Nineteen studies reported on the reliability and/or validity of ten different fatigue survey instruments. Eighteen different studies evaluated the reliability and/or validity of four different sleepiness survey instruments. None of the retained studies reported sensitivity or specificity. Evidence quality was rated as very low across all outcomes. In this systematic review, limited evidence of the reliability and validity of 14 different survey instruments for assessing the fatigue and/or sleepiness status of EMS personnel and related shift-worker groups was identified.

  14. A set of pathological tests to validate new finite elements

    Indian Academy of Sciences (India)

    The finite element method entails several approximations. Hence ... researchers have designed several pathological tests to validate any new finite element. ... Three-dimensional thick shell elements using a hybrid/mixed formulation.

  15. ACE-FTS version 3.0 data set: validation and data processing update

    Directory of Open Access Journals (Sweden)

    Claire Waymark

    2014-01-01

    On 12 August 2003, the Canadian-led Atmospheric Chemistry Experiment (ACE) was launched into a 74° inclination orbit at 650 km with the mission objective of measuring atmospheric composition using infrared and UV-visible spectroscopy (Bernath et al., 2005). The ACE mission consists of two main instruments, ACE-FTS and MAESTRO (McElroy et al., 2007), which are being used to investigate the chemistry and dynamics of the Earth's atmosphere.  Here, we focus on the high-resolution (0.02 cm⁻¹) infrared Fourier transform spectrometer, ACE-FTS, which measures in the 750-4400 cm⁻¹ (2.2 to 13.3 µm) spectral region.  This instrument has been making regular solar occultation observations for more than nine years.  The current ACE-FTS data version (version 3.0) provides profiles of temperature and volume mixing ratios (VMRs) of more than 30 atmospheric trace gas species, as well as 20 subsidiary isotopologues of the most abundant trace atmospheric constituents, over a latitude range of ~85°N to ~85°S.  This letter describes the current data version and recent validation comparisons and provides a description of our planned updates for the ACE-FTS data set. [...]

  16. Developing an assessment of fire-setting to guide treatment in secure settings: the St Andrew's Fire and Arson Risk Instrument (SAFARI).

    Science.gov (United States)

    Long, Clive G; Banyard, Ellen; Fulton, Barbara; Hollin, Clive R

    2014-09-01

    Arson and fire-setting are highly prevalent among patients in secure psychiatric settings, but valid and reliable assessment instruments are lacking and there is no evidence-based approach to intervention. The aim was to develop a semi-structured interview assessment specifically for fire-setting to augment structured assessments of risk and need. The extant literature was used to frame interview questions relating to the antecedents, behaviour and consequences necessary to formulate a functional analysis. Questions also covered readiness to change, fire-setting self-efficacy, the probability of future fire-setting, barriers to change, and understanding of fire-setting behaviour. The assessment concludes with indications for assessment and a treatment action plan. The inventory was piloted with a sample of women in secure care and was assessed for comprehensibility, reliability and validity. Staff rated the St Andrew's Fire and Arson Risk Instrument (SAFARI) as acceptable to patients and easy to administer. SAFARI was found to be comprehensible by over 95% of the general population, to have good acceptance, high internal reliability, and substantial test-retest reliability and validity. SAFARI helps to provide a clear explanation of fire-setting in terms of the complex interplay of antecedents and consequences, and facilitates the design of an individually tailored treatment programme consistent with a cognitive-behavioural approach. Further studies are needed to verify the reliability and validity of SAFARI with male populations and across settings.

  17. Matrix product operators, matrix product states, and ab initio density matrix renormalization group algorithms

    Science.gov (United States)

    Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.

    2016-07-01

    Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.

  18. Response matrix of a multisphere neutron spectrometer with a 3He proportional counter

    International Nuclear Information System (INIS)

    Vega C, H.R.; Manzanares A, E.; Hernandez D, V.M.; Mercado S, G.A.

    2005-01-01

    The response matrix of a Bonner sphere spectrometer was calculated using the MCNP code. As a thermal neutron counter, the spectrometer has a 3.2 cm-diameter 3He-filled proportional counter located at the center of a set of polyethylene spheres. The response was calculated for 0-, 3-, 5-, 6-, 8-, 10-, 12-, and 16-inch-diameter polyethylene spheres for neutrons whose energy ranges from 10⁻⁹ to 20 MeV. The response matrix was compared with a set of responses measured with several monoenergetic neutron sources; in this comparison the calculated matrix agrees with the experimental results. The matrix was also compared with the response matrix calculated for the PTB C spectrometer. Even though that calculation was carried out using a detailed model of the proportional counter, both matrices agree; small differences are observed in the bare case because of the difference in the models used during the calculations. Other differences occur in some spheres for 14.8 and 20 MeV neutrons, probably due to differences in the cross sections used during the two calculations. (Author) 28 refs., 1 tab., 6 figs
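
    The role of a response matrix can be sketched as a matrix-vector folding: the expected count in each sphere is the (discretized) energy integral of response times neutron fluence, counts = R @ phi. The numbers below are random placeholders, not the MCNP-calculated responses from the paper:

```python
import numpy as np

n_spheres, n_energy_bins = 8, 12
rng = np.random.default_rng(2)

# Stand-in response matrix: R[s, e] = counts in sphere s per unit
# fluence in energy bin e (non-negative by construction).
R = np.abs(rng.standard_normal((n_spheres, n_energy_bins)))
phi = np.abs(rng.standard_normal(n_energy_bins))  # fluence spectrum

counts = R @ phi   # forward "folding"; spectrum unfolding inverts this map
print(counts.shape)
```

    Unfolding a measured count vector back to a spectrum is the ill-posed inverse of this folding, which is why the accuracy of R matters so much.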

  19. Neuroanatomy-based matrix-guided trimming protocol for the rat brain.

    Science.gov (United States)

    Defazio, Rossella; Criado, Ana; Zantedeschi, Valentina; Scanziani, Eugenio

    2015-02-01

    Brain trimming through defined neuroanatomical landmarks is recommended to obtain consistent sections in rat toxicity studies. In this article, we describe a matrix-guided trimming protocol that uses channels to reproduce the coronal levels of anatomical landmarks. Both the setup phase and the validation study were performed on 10-week-old Han Wistar male rats (Crl:WI(Han)) with a bodyweight of 298 ± 29 (SD) g, using a matrix (ASI Instruments®, Houston, TX) fitted for brains of rats with 200 to 400 g bodyweight. In the setup phase, we identified eight channels, that is, 6, 8, 10, 12, 14, 16, 19, and 21, matching the recommended landmarks midway to the optic chiasm, frontal pole, optic chiasm, infundibulum, mamillary bodies, midbrain, middle cerebellum, and posterior cerebellum, respectively. In the validation study, we trimmed the immersion-fixed brains of 60 rats using the selected channels to determine how consistently the channels reproduced the anatomical landmarks. The percentage of success (i.e., presence of the expected targets for each level) ranged from 89 to 100%. Where 100% success was not achieved, the shift in brain trimming was toward the caudal pole. In conclusion, we developed and validated a trimming protocol for the rat brain that allows comparable extensiveness, homology, and relevance of coronal sections as landmark-guided trimming, with the advantage of being quickly learned by technicians. © 2014 by The Author(s).

  20. Internal damping due to dislocation movements induced by thermal expansion mismatch between matrix and particles in metal matrix composites. [Al/SiC

    Energy Technology Data Exchange (ETDEWEB)

    Girand, C.; Lormand, G.; Fougeres, R.; Vincent, A. (GEMPPM, Villeurbanne (France))

    1993-05-01

    In metal matrix composites (MMCs), the mechanical behavior of the reinforcement-matrix interface is an important parameter because it governs the load transfer from matrix to particles, from which the mechanical properties of these materials are derived. Therefore, it would be useful to set out an experimental method able to characterize the behavior of the interface and the adjacent matrix. Thus, a study has been undertaken by means of internal damping (I.D.) measurements, which are well known to be very sensitive for studying irreversible displacements at the atomic scale. More specifically, this investigation is based on the fact that, during cooling of MMCs, stress concentrations originating from differences in the coefficients of thermal expansion (C.T.E.) of matrix and particles should induce dislocation movements in the matrix surrounding the reinforcement; that is, local microplastic strains occur. Therefore, during I.D. measurements vs. temperature these movements should contribute to the I.D. of MMCs through a process similar to those involved around first-order phase transitions in solids. The aim of this paper is to present, in the case of Al/SiC particulate composites, new developments of this approach, which has previously led to promising results in the case of Al-Si alloys.

  1. RBAC-Matrix-based EMR right management system to improve HIPAA compliance.

    Science.gov (United States)

    Lee, Hung-Chang; Chang, Shih-Hsin

    2012-10-01

    Security control of Electronic Medical Records (EMR) is a mechanism used to manage electronic medical record files and protect sensitive medical documents from information leakage. Previous research proposed Role-Based Access Control (RBAC). However, with the increasing scale of medical institutions, access control behavior is difficult to declare in detail among roles in RBAC. Furthermore, under stringent regulations such as HIPAA in the U.S. and PIPEDA in Canada, patients are encouraged to have the right to regulate access control of their EMR. In response to these problems, we propose an EMR digital rights management system, an RBAC-based extension to the matrix organization of medical institutions, known as RBAC-Matrix. With the aim of authorizing the EMR among roles in the organization, RBAC-Matrix also allows patients to be involved in defining the access rights to their records. RBAC-Matrix authorizes access control declarations among the matrix organizations of medical institutions by associating an XrML file with each EMR. It processes XrML rights-declaration-file-based authorization of behavior in a two-stage design, called the master & servant stages, thus making the associated EMR better protected. RBAC-Matrix also assigns the medical record file and its associated XrML declaration to two different EMRA (EMR Authorization) roles, namely, the medical records Document Creator (DC) and the medical records Document Right Setting (DRS). The access right setting, determined by the DRS, is cosigned by the patient, thus making the declaration of rights and the use of EMR compliant with HIPAA specifications.
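
    The underlying access-matrix idea can be sketched as a role-by-action lookup. The role and action names below are invented for illustration; the paper's XrML two-stage machinery is not modeled:

```python
# Toy role-permission access matrix in the spirit of RBAC: rows are
# roles, columns are EMR actions. All names here are hypothetical.
ACCESS = {
    "document_creator":       {"create", "read"},
    "document_right_setting": {"read", "set_rights"},
    "nurse":                  {"read"},
}

def allowed(role: str, action: str) -> bool:
    """Check the access matrix; unknown roles get no permissions."""
    return action in ACCESS.get(role, set())

print(allowed("document_creator", "create"))   # True
print(allowed("nurse", "set_rights"))          # False
```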

  2. Reweighting QCD matrix-element and parton-shower calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bothmann, Enrico; Schumann, Steffen [Universitaet Goettingen, II. Physikalisches Institut, Goettingen (Germany); Schoenherr, Marek [Universitaet Zuerich, Physik-Institut, Zuerich (Switzerland)

    2016-11-15

    We present the implementation and validation of the techniques used to efficiently evaluate parametric and perturbative theoretical uncertainties in matrix-element plus parton-shower simulations within the Sherpa event-generator framework. By tracing the full α_s and PDF dependences, including the parton-shower component, as well as the fixed-order scale uncertainties, we compute variational event weights on the fly, thereby greatly reducing the computational costs of obtaining theoretical-uncertainty estimates. (orig.)
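
    The reweighting idea rests on a simple multiplicative relation: if an event weight carries known powers of α_s (and analogous PDF factors), a parameter variation rescales the weight instead of regenerating the event. A toy sketch with illustrative values, not Sherpa's actual bookkeeping:

```python
def reweight(w_nominal: float, alpha_s_old: float,
             alpha_s_new: float, n_powers: int) -> float:
    """Rescale an event weight carrying alpha_s^n to a new alpha_s value."""
    return w_nominal * (alpha_s_new / alpha_s_old) ** n_powers

# An event whose matrix element carries alpha_s^2 (hypothetical numbers):
w = reweight(1.0, 0.118, 0.120, 2)
print(round(w, 6))
```

    The same multiplicative pattern extends to PDF ratios and scale variations, which is what makes on-the-fly variational weights cheap.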

  3. Inverse consistent non-rigid image registration based on robust point set matching

    Science.gov (United States)

    2014-01-01

    Background: Robust point matching (RPM) has been extensively used in the non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because RPM is a unidirectional image matching approach. Improving image registration based on RPM is therefore an important issue. Methods: In our work, a consistent image registration approach based on point-set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of estimating only the forward transformation between the source and target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between the two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between the two point sets are estimated from both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results: Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is well preserved by our algorithm for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend overall, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated; again, our algorithm achieves lower registration errors in the same number of iterations.
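
    The inverse-consistency property can be illustrated with a much simpler model than the paper's fuzzy-correspondence thin-plate spline: fit forward and backward affine maps between two synthetic point sets and check that their composition is close to the identity:

```python
import numpy as np

rng = np.random.default_rng(3)
src = rng.standard_normal((30, 2))
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
tgt = src @ A_true.T + np.array([0.5, -0.3])   # exact affine warp

def fit_affine(x, y):
    """Least-squares affine map taking points x to points y."""
    X = np.hstack([x, np.ones((len(x), 1))])    # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, y, rcond=None)
    return M                                    # shape (3, 2)

F = fit_affine(src, tgt)   # forward transform
B = fit_affine(tgt, src)   # backward transform

# Inverse-consistency error: forward then backward should be identity.
fwd = src @ F[:2] + F[2]
round_trip = np.hstack([fwd, np.ones((30, 1))]) @ B
err = np.abs(round_trip - src).max()
print(err < 1e-8)
```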

  4. Principles of Proper Validation

    DEFF Research Database (Denmark)

    Esbensen, Kim; Geladi, Paul

    2010-01-01

    to suffer from the same deficiencies. The PPV are universal and can be applied to all situations in which the assessment of performance is desired: prediction-, classification-, time series forecasting-, modeling validation. The key element of PPV is the Theory of Sampling (TOS), which allow insight......) is critically necessary for the inclusion of the sampling errors incurred in all 'future' situations in which the validated model must perform. Logically, therefore, all one data set re-sampling approaches for validation, especially cross-validation and leverage-corrected validation, should be terminated...

  5. Verification, validation, and reliability of predictions

    International Nuclear Information System (INIS)

    Pigford, T.H.; Chambre, P.L.

    1987-04-01

    The objective of predicting long-term performance should be to make reliable determinations of whether the prediction falls within the criteria for acceptable performance. Establishing reliable predictions of long-term performance of a waste repository requires emphasis on valid theories to predict performance. The validation process must establish the validity of the theory, the parameters used in applying the theory, the arithmetic of calculations, and the interpretation of results; but validation of such performance predictions is not possible unless there are clear criteria for acceptable performance. Validation programs should emphasize identification of the substantive issues of prediction that need to be resolved. Examples relevant to waste package performance are predicting the life of waste containers and the time distribution of container failures, establishing the criteria for defining container failure, validating theories for time-dependent waste dissolution that depend on details of the repository environment, and determining the extent of congruent dissolution of radionuclides in the UO2 matrix of spent fuel. Prediction and validation should go hand in hand and should be done and reviewed frequently, as essential tools for the programs to design and develop repositories. 29 refs

  6. Validation of Nurse Practitioner Primary Care Organizational Climate Questionnaire: A New Tool to Study Nurse Practitioner Practice Settings.

    Science.gov (United States)

    Poghosyan, Lusine; Chaplin, William F; Shaffer, Jonathan A

    2017-04-01

    Favorable organizational climate in primary care settings is necessary to expand the nurse practitioner (NP) workforce and promote their practice. Only one NP-specific tool, the Nurse Practitioner Primary Care Organizational Climate Questionnaire (NP-PCOCQ), measures NP organizational climate. We confirmed NP-PCOCQ's factor structure and established its predictive validity. A cross-sectional survey design was used to collect data from 314 NPs in Massachusetts in 2012. Confirmatory factor analysis and regression models were used. The 4-factor model characterized NP-PCOCQ. The NP-PCOCQ score predicted job satisfaction (beta = .36; p < .05) [...] organizational climate in their clinics. Further testing of NP-PCOCQ is needed.

  7. The paradox of managing a project-oriented matrix: establishing coherence within chaos.

    Science.gov (United States)

    Greiner, L E; Schein, V E

    1981-01-01

    Projects that require the flexible coordination of multidisciplinary teams have tended to adopt a matrix structure to accomplish complex tasks. Yet these project-oriented matrix structures themselves require careful coordination if they are to realize the objectives set for them. The authors identify the basic organizational questions that project-oriented matrix organizations must face. They examine the relationship between responsibility and authority; the tradeoffs between economic efficiency and the technical quality of the work produced; and the sensitive issues of managing individualistic, highly trained professionals while also maintaining group cohesiveness.

  8. The linear parameters and the decoupling matrix for linearly coupled motion in 6 dimensional phase space

    International Nuclear Information System (INIS)

    Parzen, G.

    1997-01-01

    It will be shown that starting from a coordinate system where the 6 phase space coordinates are linearly coupled, one can go to a new coordinate system, where the motion is uncoupled, by means of a linear transformation. The original coupled coordinates and the new uncoupled coordinates are related by a 6 x 6 matrix, R. It will be shown that of the 36 elements of the 6 x 6 decoupling matrix R, only 12 elements are independent. A set of equations is given from which the 12 elements of R can be computed from the one-period transfer matrix. This set of equations also allows the linear parameters, the β_i, α_i, i = 1, 3, for the uncoupled coordinates, to be computed from the one-period transfer matrix.

  9. A generalized Talmi-Moshinsky transformation for few-body and direct interaction matrix elements

    International Nuclear Information System (INIS)

    Tobocman, W.

    1981-01-01

    A set of basis states for use in evaluating matrix elements of few-body system operators is suggested. These basis states are products of harmonic oscillator wave functions having as arguments a set of Jacobi coordinates for the system. We show that these harmonic oscillator functions can be chosen in a manner that allows such a product to be expanded as a finite sum of the corresponding products for any other set of Jacobi coordinates. This result is a generalization of the Talmi-Moshinsky transformation for two equal-mass particles to a system of any number of particles of arbitrary masses. With the help of our method the multidimensional integral which must be performed to evaluate a few-body matrix element can be transformed into a sum of products of three-dimensional integrals. The coefficients in such an expansion are generalized Talmi-Moshinsky coefficients. The method is tested by calculation of a matrix element for knockout scattering for a simple three-body system. The results indicate that the method is a viable calculational tool. (orig.)
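    As an aside not drawn from the record itself, the equal-mass two-particle case that underlies the Talmi-Moshinsky transformation is easy to verify numerically: since x₁² + x₂² = X² + x² for the Jacobi coordinates X = (x₁ + x₂)/√2 and x = (x₁ − x₂)/√2, a product of ground-state oscillator functions is form-invariant (a single term of the general expansion). A minimal illustrative sketch:

```python
import math

def phi0(x):
    # 1D harmonic-oscillator ground state (m = omega = hbar = 1)
    return math.exp(-x * x / 2.0) / math.pi ** 0.25

def products(x1, x2):
    # Jacobi coordinates for two equal-mass particles
    X = (x1 + x2) / math.sqrt(2.0)  # centre-of-mass-like coordinate
    x = (x1 - x2) / math.sqrt(2.0)  # relative coordinate
    return phi0(x1) * phi0(x2), phi0(X) * phi0(x)

for x1, x2 in [(0.3, -1.1), (1.7, 0.4), (-0.9, -0.2)]:
    single, jacobi = products(x1, x2)
    assert abs(single - jacobi) < 1e-12
print("ground-state product is invariant under the Jacobi transformation")
```

    For excited states the product is no longer a single term but a finite sum over Jacobi-coordinate products, which is exactly what the Talmi-Moshinsky coefficients encode.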

  10. Permitted and forbidden sets in symmetric threshold-linear networks.

    Science.gov (United States)

    Hahnloser, Richard H R; Seung, H Sebastian; Slotine, Jean-Jacques

    2003-03-01

    The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
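    The dynamics described above can be illustrated with a toy simulation (not from the paper): ẋ = −x + [Wx + b]₊ with a symmetric W whose eigenvalues all lie below 1, so that I − W is positive definite and a single attractive fixed point exists. All matrix values below are hypothetical:

```python
# Euler simulation of dx/dt = -x + max(Wx + b, 0) for a small symmetric network.
W = [[0.2, 0.1, 0.0],
     [0.1, 0.2, 0.1],
     [0.0, 0.1, 0.2]]   # symmetric, all eigenvalues < 1
b = [1.0, 1.0, 1.0]
x = [0.0, 0.0, 0.0]
dt = 0.05
for _ in range(2000):
    u = [sum(W[i][j] * x[j] for j in range(3)) + b[i] for i in range(3)]
    x = [x[i] + dt * (-x[i] + max(u[i], 0.0)) for i in range(3)]

# With all rates positive, the fixed point solves x* = W x* + b.
residual = max(abs(x[i] - (sum(W[i][j] * x[j] for j in range(3)) + b[i]))
               for i in range(3))
print(x, residual)
```

    Here the whole set of neurons is "permitted": all three stay coactive at the stable steady state. Making W not positive semidefinite (while keeping copositivity) is what splits the fixed points into multiple attractors in the paper's analysis.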

  11. Development and validation of a set of German stimulus- and target words for an attachment related semantic priming paradigm.

    Directory of Open Access Journals (Sweden)

    Anke Maatz

    Full Text Available Experimental research in adult attachment theory is faced with the challenge of adequately activating the adult attachment system. In view of the multitude of methods employed for this purpose so far, this paper suggests making further use of the methodological advantages of semantic priming. In order to enable the use of such a paradigm in a German-speaking context, a set of German words belonging to the semantic categories 'interpersonal closeness', 'interpersonal distance' and 'neutral' were identified and their semantics were validated by combining production and rating methods. 164 university students answered corresponding online questionnaires. Ratings were analysed using analysis of variance (ANOVA) and cluster analysis, from which three clearly distinct groups emerged. Beyond providing validated stimulus and target words which can be used to activate the adult attachment system in a semantic priming paradigm, the results of this study point to important links between attachment and stress which call for further investigation in the future.

  12. Iterative approach as alternative to S-matrix in modal methods

    Science.gov (United States)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, the iterative approach potentially enables the reduction of the computational time required to solve Maxwell's equations by Eigenmode Expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring of order M³ operations. In this work we consider alternatives to the S-matrix technique which are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of M³-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M² by applying iterative techniques, is discussed. Numerical results are presented to illustrate the validity and potential of the proposed approaches.
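    As a generic illustration of the direct-versus-iterative trade-off discussed above (not the authors' eigenmode algorithm), a Jacobi iteration replaces an O(M³) factorisation with sweeps costing O(M²) each on a dense system; the small example system below is hypothetical:

```python
def jacobi_solve(A, b, iters=200):
    # Jacobi iteration: x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii.
    # Each sweep costs O(M^2) for a dense M x M system (less when sparse),
    # versus O(M^3) for a direct factorisation.
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Hypothetical small diagonally dominant system with exact solution (1, 1, 1).
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = jacobi_solve(A, b)
print(x)
```

    Convergence is guaranteed here by diagonal dominance; practical eigenmode solvers would use stronger Krylov methods, but the cost structure is the point.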

  13. Modulation and control of matrix converter for aerospace application

    Science.gov (United States)

    Kobravi, Keyhan

    In the context of modern aircraft systems, a major challenge is power conversion to supply the aircraft's electrical instruments. These instruments are energized through a fixed-frequency internal power grid. In an aircraft, the available sources of energy are a set of variable-speed generators which provide variable-frequency ac voltages. Therefore, to energize the internal power grid of an aircraft, the variable-frequency ac voltages should be converted to a fixed-frequency ac voltage. As a result, an ac to ac power conversion is required within an aircraft's power system. This thesis develops a Matrix Converter to energize the aircraft's internal power grid. The Matrix Converter provides a direct ac to ac power conversion. A major challenge of designing Matrix Converters for aerospace applications is to minimize the volume and weight of the converter. These parameters are minimized by increasing the switching frequency of the converter. To design a Matrix Converter operating at a high switching frequency, this thesis (i) develops a scheme to integrate fast semiconductor switches within the currently available Matrix Converter topologies, i.e., a MOSFET-based Matrix Converter, and (ii) develops a new modulation strategy for the Matrix Converter. This Matrix Converter and the new modulation strategy enable operation of the converter at a switching frequency of 40 kHz. To provide a reliable source of energy, this thesis also develops a new methodology for robust control of the Matrix Converter. To verify the performance of the proposed MOSFET-based Matrix Converter, modulation strategy, and control design methodology, various simulation and experimental results are presented. The experimental results are obtained under operating conditions present in an aircraft. The experimental results verify that the proposed Matrix Converter provides reliable power conversion in an aircraft under extreme operating conditions. The results prove the superiority of the proposed Matrix Converter.

  14. MIMO Radar Transmit Beampattern Design Without Synthesising the Covariance Matrix

    KAUST Repository

    Ahmed, Sajid

    2013-10-28

    Compared to phased-array radars, multiple-input multiple-output (MIMO) radars provide more degrees-of-freedom (DOF) that can be exploited for improved spatial resolution, better parametric identifiability, lower side-lobe levels at the transmitter/receiver, and design variety of transmit beampatterns. The design of the transmit beampattern generally requires the waveforms to have arbitrary auto- and cross-correlation properties. The generation of such waveforms is a complicated two-step process. In the first step, a waveform covariance matrix is synthesised, which is a constrained optimisation problem. In the second step, actual waveforms are designed to realise this covariance matrix, which is also a constrained optimisation problem. Our proposed scheme converts this two-step constrained optimisation problem into a one-step unconstrained optimisation problem. In the proposed scheme, in contrast to synthesising the covariance matrix for the desired beampattern, n_T independent finite-alphabet constant-envelope waveforms are generated and pre-processed, with a weight matrix W, before transmitting from the antennas. In this work, two weight matrices are proposed that can be easily optimised for desired symmetric and non-symmetric beampatterns and guarantee equal average power transmission from each antenna. Simulation results validate our claims.
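    The link between a waveform covariance matrix and the transmit beampattern, P(θ) = a(θ)ᴴ R a(θ), can be sketched for a hypothetical 4-element half-wavelength uniform linear array with a rank-one R = wwᴴ; this is only the classical phased-array special case, not the finite-alphabet waveform design of the record:

```python
import cmath, math

n = 4                       # hypothetical 4-element ULA, half-wavelength spacing
w = [0.5] * n               # uniform weights, ||w|| = 1 (phased-array case)

def steering(theta):
    # a_k(theta) = exp(j * pi * k * sin(theta)) for half-wavelength spacing
    return [cmath.exp(1j * math.pi * k * math.sin(theta)) for k in range(n)]

def beampattern(theta):
    # P(theta) = a^H R a with rank-one R = w w^H, i.e. |a^H w|^2
    a = steering(theta)
    g = sum(ak.conjugate() * wk for ak, wk in zip(a, w))
    return abs(g) ** 2

broadside = beampattern(0.0)
off_axis = beampattern(math.radians(40.0))
print(broadside, off_axis)
```

    A full-rank R (the MIMO case) spreads power over several such patterns, which is where the extra transmit DOF come from.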

  15. Structure of nuclear transition matrix elements for neutrinoless ...

    Indian Academy of Sciences (India)

    Abstract. The structure of nuclear transition matrix elements (NTMEs) required for the study of neutrinoless double-β decay within the light Majorana neutrino mass mechanism is disassembled in the PHFB model. The NTMEs are calculated using a set of HFB intrinsic wave functions, the reliability of which has been previously ...

  16. New set of convective heat transfer coefficients established for pools and validated against CLARA experiments for application to corium pools

    Energy Technology Data Exchange (ETDEWEB)

    Michel, B., E-mail: benedicte.michel@irsn.fr

    2015-05-15

    Highlights: • A new set of 2D convective heat transfer correlations is proposed. • It takes into account different horizontal and lateral superficial velocities. • It is based on previously established correlations. • It is validated against recent CLARA experiments. • It has to be implemented in a 0D MCCI (molten core concrete interaction) code. - Abstract: During a hypothetical Pressurized Water Reactor (PWR) or Boiling Water Reactor (BWR) severe accident with core meltdown and vessel failure, corium would fall directly on the concrete reactor pit basemat if no water is present. The high temperature of the corium pool, maintained by the residual power, would lead to the erosion of the concrete walls and basemat of this reactor pit. The thermal decomposition of concrete will lead to the release of a significant amount of gases that will modify the corium pool thermal hydraulics. In particular, it will affect heat transfers between the corium pool and the concrete, which determine the reactor pit ablation kinetics. A new set of convective heat transfer coefficients in a pool with different lateral and horizontal superficial gas velocities is modeled and validated against the recent CLARA experimental program. 155 tests from this program, covering two facility sizes and a wide range of viscosities, were used to validate the model. Then, a method to define different lateral and horizontal superficial gas velocities in a 0D code is proposed, together with a discussion of the possible viscosity in the reactor case when the pool is semi-solid. This model is going to be implemented in the 0D ASTEC/MEDICIS code in order to determine the impact of convective heat transfer on concrete ablation by corium.

  17. Manifold regularized matrix completion for multi-label learning with ADMM.

    Science.gov (United States)

    Liu, Bin; Li, Yingming; Xu, Zenglin

    2018-05-01

    Multi-label learning is a common machine learning problem arising from numerous real-world applications in diverse fields, e.g., natural language processing, bioinformatics, information retrieval and so on. Among various multi-label learning methods, the matrix completion approach has been regarded as a promising approach to transductive multi-label learning. By constructing a joint matrix comprising the feature matrix and the label matrix, the missing labels of test samples are regarded as missing values of the joint matrix. With the low-rank assumption on the constructed joint matrix, the missing labels can be recovered by minimizing its rank. Despite its success, most matrix completion based approaches ignore the smoothness assumption of unlabeled data, i.e., neighboring instances should also share a similar set of labels. Thus they may underexploit the intrinsic structures of data. In addition, solving the matrix completion problem can be inefficient. To this end, we propose to efficiently solve the multi-label learning problem as an enhanced matrix completion model with manifold regularization, where the graph Laplacian is used to ensure label smoothness. To speed up convergence, we develop an efficient iterative algorithm, which solves the resulting nuclear norm minimization problem with the alternating direction method of multipliers (ADMM). Experiments on both synthetic and real-world data have shown the promising results of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.
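    The identity behind the manifold (graph Laplacian) regularizer used above is fᵀLf = ½ Σᵢⱼ Aᵢⱼ (fᵢ − fⱼ)² with L = D − A, so neighboring instances with similar label scores incur a small penalty. A minimal check on a hypothetical 4-node graph (not the paper's ADMM solver):

```python
# Graph Laplacian smoothness identity: f^T L f = 1/2 * sum_ij A_ij (f_i - f_j)^2
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]          # hypothetical symmetric adjacency matrix
n = len(A)
deg = [sum(row) for row in A]
L = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]

f = [0.2, 0.9, 0.5, -0.3]   # a label score per node
quad = sum(f[i] * L[i][j] * f[j] for i in range(n) for j in range(n))
pairwise = 0.5 * sum(A[i][j] * (f[i] - f[j]) ** 2
                     for i in range(n) for j in range(n))
assert abs(quad - pairwise) < 1e-9
print(quad)
```

    In the full model this quadratic term is added to the nuclear-norm objective, and ADMM alternates between a singular-value-thresholding step and Laplacian-smoothed updates.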

  18. A generalized DEMATEL theory with a shrinkage coefficient for an indirect relation matrix

    Directory of Open Access Journals (Sweden)

    Liu Hsiang-Chuan

    2017-01-01

    Full Text Available In this paper, a novel decision-making trial and evaluation laboratory (DEMATEL) theory with a shrinkage coefficient for the indirect relation matrix is proposed, and a useful validity index, called Liu's validity index, is also proposed for evaluating the performance of any DEMATEL model. If the shrinkage coefficient of the indirect relation matrix is equal to 1, then this new theory is identical to the traditional theory; in other words, it is a generalization of the traditional theory. Furthermore, the indirect relation is always considerably greater than the direct one in traditional DEMATEL theory, which is unreasonable and unfair because it overemphasizes the influence of the indirect relation. We prove in this paper that if the shrinkage coefficient is equal to 0.5, then the indirect relation is less than the direct one. Because the shrinkage coefficient belongs to [0.5, 1], according to Liu's validity index, we can find a more appropriate shrinkage coefficient to obtain a more efficient DEMATEL method. Some crucial properties of this new theory are discussed, and a simple example is provided to illustrate the advantages of the proposed theory.
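    A sketch of the construction under the usual DEMATEL conventions (normalize the direct-relation matrix by its largest row sum, take the indirect relation as D² + D³ + ⋯, then shrink it by a coefficient λ ∈ [0.5, 1]); the factor scores and λ below are assumptions, and the series is truncated numerically rather than computed as D²(I − D)⁻¹:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(A, B, s=1.0):
    return [[a + s * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Hypothetical 3-factor direct-influence scores
X = [[0, 3, 2],
     [1, 0, 3],
     [2, 1, 0]]
s = max(sum(row) for row in X)            # normalise by the largest row sum
D = [[v / s for v in row] for row in X]

# Indirect relation D^2 + D^3 + ... via a truncated Neumann series
indirect = [[0.0] * 3 for _ in range(3)]
P = matmul(D, D)
for _ in range(200):
    indirect = madd(indirect, P)
    P = matmul(P, D)

lam = 0.5                                  # shrinkage coefficient in [0.5, 1]
T = madd(D, indirect, lam)                 # generalised total relation matrix
print(T)
```

    With lam = 1 this recovers the traditional total relation matrix T = D(I − D)⁻¹ − I-free form; smaller lam damps the indirect contribution as the record proposes.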

  19. The Geriatric ICF Core Set reflecting health-related problems in community-living older adults aged 75 years and older without dementia : development and validation

    NARCIS (Netherlands)

    Spoorenberg, Sophie L. W.; Reijneveld, Sijmen A.; Middel, Berrie; Uittenbroek, Ronald J.; Kremer, Hubertus P. H.; Wynia, Klaske

    2015-01-01

    Purpose: The aim of the present study was to develop a valid Geriatric ICF Core Set reflecting relevant health-related problems of community-living older adults without dementia. Methods: A Delphi study was performed in order to reach consensus (70% agreement) on second-level categories from the International Classification of Functioning, Disability and Health (ICF).

  20. Matrix-variational method: an efficient approach to bound state eigenproblems

    International Nuclear Information System (INIS)

    Gerck, E.; d'Oliveira, A.B.

    1978-11-01

    A new matrix-variational method for solving the radial Schroedinger equation is described. It consists of obtaining an adjustable matrix formulation for the boundary value differential equation, using a set of three functions that obey the boundary conditions. These functions are linearly combined at every three adjacent points to fit the true unknown eigenfunction by a variational technique. With the use of a new class of central differences, the exponential differences, tridiagonal or bidiagonal matrices are obtained. In the bidiagonal case, closed-form expressions for the eigenvalues are given for the Coulomb, harmonic, linear, square-root and logarithmic potentials. The values obtained are within 0.1% of the true numerical value. The eigenfunction can be calculated by using the eigenvectors to reconstruct the linear combination of the set functions.
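    Discretizations of this kind lead to symmetric tridiagonal eigenproblems, which can be solved by Sturm-sequence bisection. The sketch below uses the ordinary second-order central difference (not the exponential differences of the record) for −u″ + x²u = Eu, whose lowest eigenvalue should approach the exact E₀ = 1; all grid parameters are illustrative:

```python
import math

# Finite-difference discretisation of -u'' + x^2 u = E u on [-10, 10].
N, a = 400, 10.0
h = 2 * a / N
xs = [-a + i * h for i in range(1, N)]       # interior grid points
diag = [2.0 / h**2 + x * x for x in xs]
off = -1.0 / h**2

def count_below(lam):
    # Sturm sequence: number of eigenvalues of the tridiagonal matrix < lam
    count, q = 0, 1.0
    for i, d in enumerate(diag):
        q = (d - lam) - (off * off / q if i > 0 else 0.0)
        if q == 0.0:
            q = 1e-300                        # guard against exact zero pivot
        if q < 0.0:
            count += 1
    return count

lo, hi = 0.0, 10.0
for _ in range(60):                           # bisection for the smallest eigenvalue
    mid = 0.5 * (lo + hi)
    if count_below(mid) >= 1:
        hi = mid
    else:
        lo = mid
E0 = 0.5 * (lo + hi)
print(E0)    # approaches the exact ground-state value E0 = 1 as h -> 0
```

    The record's exponential differences improve on this plain scheme, reaching closed-form eigenvalues in the bidiagonal case.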

  1. A mapping from the unitary to doubly stochastic matrices and symbols on a finite set

    Science.gov (United States)

    Karabegov, Alexander V.

    2008-11-01

    We prove that the mapping from the unitary to doubly stochastic matrices that maps a unitary matrix (u_kl) to the doubly stochastic matrix (|u_kl|²) is a submersion at a generic unitary matrix. The proof uses the framework of operator symbols on a finite set.
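    The mapping is easy to exercise on a small example: a real rotation matrix is orthogonal, hence unitary, and its entrywise squares form a doubly stochastic matrix. A minimal check (illustration only, not the submersion argument):

```python
import math

theta = 0.7
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]   # real orthogonal, hence unitary

B = [[U[i][j] ** 2 for j in range(2)] for i in range(2)]  # entrywise |u_kl|^2

row_sums = [sum(row) for row in B]
col_sums = [sum(B[i][j] for i in range(2)) for j in range(2)]
print(B, row_sums, col_sums)
```

    The unit row and column sums are just the normalisation of the rows and columns of a unitary matrix; the paper's result concerns the local surjectivity of this map.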

  2. The ab-initio density matrix renormalization group in practice

    Energy Technology Data Exchange (ETDEWEB)

    Olivares-Amaya, Roberto; Hu, Weifeng; Sharma, Sandeep; Yang, Jun; Chan, Garnet Kin-Lic [Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States); Nakatani, Naoki [Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States); Catalysis Research Center, Hokkaido University, Kita 21 Nishi 10, Sapporo, Hokkaido 001-0021 (Japan)

    2015-01-21

    The ab-initio density matrix renormalization group (DMRG) is a tool that can be applied to a wide variety of interesting problems in quantum chemistry. Here, we examine the density matrix renormalization group from the vantage point of the quantum chemistry user. What kinds of problems is the DMRG well-suited to? What are the largest systems that can be treated at practical cost? What sort of accuracies can be obtained, and how do we reason about the computational difficulty in different molecules? By examining a diverse benchmark set of molecules: π-electron systems, benchmark main-group and transition metal dimers, and the Mn-oxo-salen and Fe-porphine organometallic compounds, we provide some answers to these questions, and show how the density matrix renormalization group is used in practice.

  3. The ab-initio density matrix renormalization group in practice.

    Science.gov (United States)

    Olivares-Amaya, Roberto; Hu, Weifeng; Nakatani, Naoki; Sharma, Sandeep; Yang, Jun; Chan, Garnet Kin-Lic

    2015-01-21

    The ab-initio density matrix renormalization group (DMRG) is a tool that can be applied to a wide variety of interesting problems in quantum chemistry. Here, we examine the density matrix renormalization group from the vantage point of the quantum chemistry user. What kinds of problems is the DMRG well-suited to? What are the largest systems that can be treated at practical cost? What sort of accuracies can be obtained, and how do we reason about the computational difficulty in different molecules? By examining a diverse benchmark set of molecules: π-electron systems, benchmark main-group and transition metal dimers, and the Mn-oxo-salen and Fe-porphine organometallic compounds, we provide some answers to these questions, and show how the density matrix renormalization group is used in practice.

  4. Validity of the M-3Y force equivalent G-matrix element for the calculations of nuclear structure in the s-d shell

    International Nuclear Information System (INIS)

    Song Hong-qiu; Wang Zixing; Cai Yanhuang; Huang Weizhi

    1987-01-01

    The matrix elements of the M-3Y force are adopted as the equivalent G-matrix elements and the folded diagram method is used to calculate the spectra of ¹⁸O and ¹⁸F. The results show that the matrix elements of the M-3Y force as the equivalent G-matrix elements are suitable for microscopic calculations of the nuclei in the s-d shell.

  5. Modelling prospects for in situ matrix diffusion at Palmottu natural analogue site, SW Finland

    International Nuclear Information System (INIS)

    Rasilainen, K.; Suksi, J.

    1994-01-01

    Concentration distributions of the natural decay chains 4n+2 and 4n+3 in crystalline rock intersected by a natural fracture were measured. Calcite coating on the same fracture surface was dated. Material properties of the rock matrix, and nuclide concentrations in groundwater, were measured. The interpretation of the concentration distributions is based on the classical matrix diffusion concept. Although support was obtained, this calibration exercise does not yet validate the model. Besides the initial and boundary conditions, the matrix properties are uncertain due to the small amount of rock material. Experimental sorption data were not available, but their importance and the need for systematic studies were demonstrated. (orig.) (10 refs., 5 figs., 5 tabs.)

  6. Mechanical properties study of particles reinforced aluminum matrix composites by micro-indentation experiments

    Directory of Open Access Journals (Sweden)

    Yuan Zhanwei

    2014-04-01

    Full Text Available By using an instrumented micro-indentation technique, the microhardness and Young's modulus of SiC particle reinforced aluminum matrix composites were investigated with a micro-compression-tester (MCT). The micro-indentation experiments were performed with different maximum loads, and with three loading speeds of 2.231, 4.462 and 19.368 mN/s respectively. During the investigation, matrix, particle and interface were tested by micro-indentation experiments. The results show that the variations of Young's modulus and microhardness at particle, matrix and interface were highly dependent on the loading conditions (maximum load and loading speed) and on the location of indentation. Micro-indentation hardness experiments on the matrix show an indentation size effect, i.e. the indentation hardness decreased with increasing indentation depth. During the analysis, the effect of loading conditions on Young's modulus and microhardness was explained, and the elastic–plastic properties of the matrix were analyzed. The validity of the calculated results was verified by finite element simulation, and the simulation results were also analyzed statistically.
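    For context, instrumented indentation commonly extracts a reduced modulus from the unloading stiffness via the Oliver-Pharr relation E_r = (√π / 2β) · S/√A, then removes the indenter compliance. The sketch below uses hypothetical stiffness and area values together with commonly quoted diamond-indenter constants; it is not the analysis performed in the record:

```python
import math

# Oliver-Pharr style extraction of a sample modulus from an unloading curve.
# All measurement numbers below are hypothetical illustration values.
S = 2.0e5        # unloading stiffness dP/dh at peak load, N/m
A = 1.0e-12      # projected contact area, m^2
beta = 1.034     # geometry factor for a Berkovich tip

Er = math.sqrt(math.pi) / (2.0 * beta) * S / math.sqrt(A)   # reduced modulus, Pa

Ei, nui = 1141e9, 0.07   # commonly quoted diamond indenter modulus and Poisson ratio
nus = 0.3                # assumed Poisson ratio of the sample
inv_term = 1.0 / Er - (1.0 - nui**2) / Ei    # equals (1 - nus^2) / Es
Es = (1.0 - nus**2) / inv_term
print(Er * 1e-9, Es * 1e-9)   # both in GPa
```

    The loading-speed and depth dependence reported in the record would show up here as S and A varying with the test conditions.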

  7. Hecke algebraic properties of dynamical R-matrices. Application to related quantum matrix algebras

    International Nuclear Information System (INIS)

    Khadzhiivanov, L.K.; Todorov, I.T.; Isaev, A.P.; Pyatov, P.N.; Ogievetskij, O.V.

    1998-01-01

    The quantum dynamical Yang-Baxter (or Gervais-Neveu-Felder) equation defines an R-matrix R̂(p), where p stands for a set of mutually commuting variables. A family of SL(n)-type solutions of this equation provides a new realization of the Hecke algebra. We define quantum antisymmetrizers, introduce the notion of quantum determinant and compute the inverse quantum matrix for matrix algebras of the type R̂(p) a_1 a_2 = a_1 a_2 R̂. It is pointed out that such a quantum matrix algebra arises in the operator realization of the chiral zero modes of the WZNW model.

  8. Computational physics an introduction to Monte Carlo simulations of matrix field theory

    CERN Document Server

    Ydri, Badis

    2017-01-01

    This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories which involve finite dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of t...

  9. Structure of nuclear transition matrix elements for neutrinoless ...

    Indian Academy of Sciences (India)

    Abstract. The structure of nuclear transition matrix elements (NTMEs) required for the study of neutrinoless double-β decay within light Majorana neutrino mass mechanism is disassembled in the PHFB model. The NTMEs are calculated using a set of HFB intrinsic wave functions, the reliability of which has been previously ...

  10. Pseudo-Hermitian random matrix theory

    International Nuclear Information System (INIS)

    Srivastava, S.C.L.; Jain, S.R.

    2013-01-01

    Complex extension of quantum mechanics and the discovery of pseudo-unitarily invariant random matrix theory have set the stage for a number of applications of these concepts in physics. We briefly review the basic ideas and present applications to problems in statistical mechanics where new results have become possible. We have found it important to mention the precise directions where advances could be made if further results become available. (Copyright 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

  11. Hexagonal response matrix using symmetries

    International Nuclear Information System (INIS)

    Gotoh, Y.

    1991-01-01

    A response matrix for use in core calculations for nuclear reactors with hexagonal fuel assemblies is presented. It is based on the incoming currents averaged over the half-surface of a hexagonal node by applying symmetry theory. The boundary conditions of the incoming currents on the half-surface of the node are expressed by a complete set of orthogonal vectors which are constructed from symmetrized functions. The expansion coefficients of the functions are determined by the boundary conditions of incoming currents. (author)

  12. Development and validation of a casemix classification to predict costs of specialist palliative care provision across inpatient hospice, hospital and community settings in the UK: a study protocol.

    Science.gov (United States)

    Guo, Ping; Dzingina, Mendwas; Firth, Alice M; Davies, Joanna M; Douiri, Abdel; O'Brien, Suzanne M; Pinto, Cathryn; Pask, Sophie; Higginson, Irene J; Eagar, Kathy; Murtagh, Fliss E M

    2018-03-17

    Provision of palliative care is inequitable with wide variations across conditions and settings in the UK. Lack of a standard way to classify by case complexity is one of the principal obstacles to addressing this. We aim to develop and validate a casemix classification to support the prediction of costs of specialist palliative care provision. Phase I: A cohort study to determine the variables and potential classes to be included in a casemix classification. Data are collected from clinicians in palliative care services across inpatient hospice, hospital and community settings on: patient demographics, potential complexity/casemix criteria and patient-level resource use. Cost predictors are derived using multivariate regression and then incorporated into a classification using classification and regression trees. Internal validation will be conducted by bootstrapping to quantify any optimism in the predictive performance (calibration and discrimination) of the developed classification. Phase II: A mixed-methods cohort study across settings for external validation of the classification developed in phase I. Patient and family caregiver data will be collected longitudinally on demographics, potential complexity/casemix criteria and patient-level resource use. This will be triangulated with data collected from clinicians on potential complexity/casemix criteria and patient-level resource use, and with qualitative interviews with patients and caregivers about care provision across different settings. The classification will be refined on the basis of its performance in the validation data set. The study has been approved by the National Health Service Health Research Authority Research Ethics Committee. The results are expected to be disseminated in 2018 through papers for publication in major palliative care journals; policy briefs for clinicians, commissioning leads and policy makers; and lay summaries for patients and the public. ISRCTN90752212.

  13. Investigation of fracture-matrix interaction: Preliminary experiments in a simple system

    International Nuclear Information System (INIS)

    Foltz, S.D.

    1992-01-01

    Paramount to the modeling of unsaturated flow and transport through fractured porous media is a clear understanding of the processes controlling fracture-matrix interaction. As a first step toward such an understanding, two preliminary experiments have been performed to investigate the influence of matrix imbibition on water percolation through unsaturated fractures in the plane normal to the fracture. Test systems consisted of thin slabs of either tuff or an analog material cut by a single vertical fracture into which a constant fluid flux was introduced. Transient moisture content and solute concentration fields were imaged by means of x-ray absorption. Flow fields associated with the two different media were significantly different owing to differences in material properties relative to the imposed flux. Richards' equation was found to be a valid means of modeling the imbibition of water into the tuff matrix from a saturated fracture for the current experiment
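    The matrix-imbibition behaviour referred to above has a characteristic √t signature in the linear-diffusion limit. A sketch with a constant diffusivity (a simplification of Richards' equation; all parameters hypothetical) shows cumulative uptake roughly doubling when time quadruples:

```python
# Explicit FTCS solution of c_t = D c_xx with c(0, t) = 1 (saturated fracture face).
D, dx, dt = 1.0, 0.1, 0.004          # dt <= dx^2 / (2 D) for stability
n = 120                               # deep enough to stay effectively semi-infinite
c = [0.0] * n
c[0] = 1.0

uptake = {}
steps = 0
for target in (0.25, 1.0):
    while steps * dt < target:
        c = ([1.0] +
             [c[i] + D * dt / dx**2 * (c[i+1] - 2*c[i] + c[i-1])
              for i in range(1, n - 1)] +
             [c[-1]])
        steps += 1
    uptake[target] = sum(c[1:]) * dx   # mass imbibed into the matrix

ratio = uptake[1.0] / uptake[0.25]
print(ratio)   # close to 2: uptake grows like sqrt(t)
```

    Real matrix imbibition is nonlinear (diffusivity depends on moisture content), but the √t scaling of early-time uptake survives, which is why Richards' equation could be calibrated against the tuff data.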

  14. Decomposition cross-correlation for analysis of collagen matrix deformation by single smooth muscle cells

    NARCIS (Netherlands)

    van den Akker, Jeroen; Pistea, Adrian; Bakker, Erik N. T. P.; VanBavel, Ed

    2008-01-01

    Microvascular remodeling is known to depend on cellular interactions with matrix tissue. However, it is difficult to study the role of specific cells or matrix elements in an in vivo setting. The aim of this study is to develop an automated technique that can be employed to obtain and analyze local matrix deformations.

  15. Ceramic matrix composite article and process of fabricating a ceramic matrix composite article

    Science.gov (United States)

    Cairo, Ronald Robert; DiMascio, Paul Stephen; Parolini, Jason Robert

    2016-01-12

    A ceramic matrix composite article and a process of fabricating a ceramic matrix composite are disclosed. The ceramic matrix composite article includes a matrix distribution pattern formed by a manifold and ceramic matrix composite plies laid up on the matrix distribution pattern, includes the manifold, or a combination thereof. The manifold includes one or more matrix distribution channels operably connected to a delivery interface, the delivery interface configured for providing matrix material to one or more of the ceramic matrix composite plies. The process includes providing the manifold, forming the matrix distribution pattern by transporting the matrix material through the manifold, and contacting the ceramic matrix composite plies with the matrix material.

  16. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
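The record above compares eight estimators without naming them here. As a minimal illustration of why the plug-in approach fails when the dimension exceeds the sample size, and how a simple linear shrinkage estimator (in the spirit of Ledoit-Wolf; our choice for the sketch, not necessarily one of the paper's eight methods) keeps the log-determinant finite:

```python
import numpy as np

def shrinkage_logdet(X, alpha=0.1):
    """Log-determinant of a linearly shrunk sample covariance matrix.

    X: (n, p) data matrix; alpha: shrinkage weight toward a scaled identity.
    Shrinkage keeps the estimate positive definite even when p > n.
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)           # sample covariance (p x p)
    target = np.trace(S) / p * np.eye(p)  # scaled-identity shrinkage target
    Sigma = (1 - alpha) * S + alpha * target
    # slogdet is numerically safer than log(det(...)) in high dimensions
    sign, logdet = np.linalg.slogdet(Sigma)
    return logdet

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100))  # p = 100 > n = 50: raw S is singular
print(shrinkage_logdet(X))          # finite, unlike log det of the raw S
```

With p > n the raw sample covariance has rank at most n - 1, so its determinant is zero and the plug-in log-determinant is undefined; any positive shrinkage weight repairs this.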

  19. Matrix-isolation FT-IR spectra and theoretical study of dimethyl sulfate

    Science.gov (United States)

    Borba, Ana; Gómez-Zavaglia, Andrea; Simões, Pedro N. N. L.; Fausto, Rui

    2005-05-01

    The preferred conformations of dimethyl sulfate and their vibrational spectra were studied by matrix-isolation FT-IR spectroscopy and theoretical methods (DFT and MP2, with basis sets of different sizes, including the quadruple-zeta, aug-cc-pVQZ basis). Conformer GG (of C2 symmetry and exhibiting O-S-O-C dihedral angles of 74.3°) was found to be the most stable conformer both in the gaseous phase and isolated in argon. Upon annealing of the matrix, the less stable observed conformer (GT; with C1 symmetry) quickly converts to the GG conformer, with the resulting species being embedded in a matrix cage which corresponds to the most stable matrix site for the GG form. The highest-energy TT conformer, which was assumed to be the most stable conformer in previous studies, is predicted by the calculations to have a relative energy of ca. 10 kJ/mol and was not observed in the spectra of the matrix-isolated compound.

  20. Matrix metalloproteinases in lung biology

    Directory of Open Access Journals (Sweden)

    Parks William C

    2000-12-01

    Abstract Despite much information on their catalytic properties and gene regulation, we actually know very little of what matrix metalloproteinases (MMPs) do in tissues. The catalytic activity of these enzymes has been implicated to function in normal lung biology by participating in branching morphogenesis, homeostasis, and repair, among other events. Overexpression of MMPs, however, has also been blamed for much of the tissue destruction associated with lung inflammation and disease. Beyond their role in the turnover and degradation of extracellular matrix proteins, MMPs also process, activate, and deactivate a variety of soluble factors, and seldom is it readily apparent by presence alone if a specific proteinase in an inflammatory setting is contributing to a reparative or disease process. An important goal of MMP research will be to identify the actual substrates upon which specific enzymes act. This information, in turn, will lead to a clearer understanding of how these extracellular proteinases function in lung development, repair, and disease.

  1. MMPI-2 Symptom Validity (FBS) Scale: psychometric characteristics and limitations in a Veterans Affairs neuropsychological setting.

    Science.gov (United States)

    Gass, Carlton S; Odland, Anthony P

    2014-01-01

    The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) Symptom Validity (Fake Bad Scale [FBS]) Scale is widely used to assist in determining noncredible symptom reporting, despite a paucity of detailed research regarding its itemmetric characteristics. Originally designed for use in civil litigation, the FBS is often used in a variety of clinical settings. The present study explored its fundamental psychometric characteristics in a sample of 303 patients who were consecutively referred for a comprehensive examination in a Veterans Affairs (VA) neuropsychology clinic. FBS internal consistency (reliability) was .77. Its underlying factor structure consisted of three unitary dimensions (Tiredness/Distractibility, Stomach/Head Discomfort, and Claimed Virtue of Self/Others) accounting for 28.5% of the total variance. The FBS's internal structure showed factorial discordance, as Claimed Virtue was negatively related to most of the FBS and to its somatic complaint components. Scores on this 12-item FBS component reflected a denial of socially undesirable attitudes and behaviors (Antisocial Practices Scale) that is commonly expressed by the 1,138 males in the MMPI-2 normative sample. These 12 items significantly reduced FBS reliability, introducing systematic error variance. In this VA neuropsychological referral setting, scores on the FBS have ambiguous meaning because of its structural discordance.
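Internal consistency figures such as the reported .77 are conventionally computed as Cronbach's alpha. A self-contained sketch of the standard formula on invented toy data (the function name and the score matrix are ours, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# toy data: 6 respondents x 4 hypothetical dichotomous items
scores = np.array([[1, 1, 1, 0],
                   [1, 1, 0, 1],
                   [0, 0, 0, 0],
                   [1, 1, 1, 1],
                   [0, 0, 1, 0],
                   [1, 1, 1, 1]])
print(round(cronbach_alpha(scores), 3))  # about 0.79 for this toy data
```

Alpha rises toward 1 as items covary more strongly; removing items that correlate negatively with the rest of the scale (as the Claimed Virtue items did here) raises it.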

  2. Analytic webs support the synthesis of ecological data sets.

    Science.gov (United States)

    Ellison, Aaron M; Osterweil, Leon J; Clarke, Lori; Hadley, Julian L; Wise, Alexander; Boose, Emery; Foster, David R; Hanson, Allen; Jensen, David; Kuzeja, Paul; Riseman, Edward; Schultz, Howard

    2006-06-01

    A wide variety of data sets produced by individual investigators are now synthesized to address ecological questions that span a range of spatial and temporal scales. It is important to facilitate such syntheses so that "consumers" of data sets can be confident that both input data sets and synthetic products are reliable. Necessary documentation to ensure the reliability and validation of data sets includes both familiar descriptive metadata and formal documentation of the scientific processes used (i.e., process metadata) to produce usable data sets from collections of raw data. Such documentation is complex and difficult to construct, so it is important to help "producers" create reliable data sets and to facilitate their creation of required metadata. We describe a formal representation, an "analytic web," that aids both producers and consumers of data sets by providing complete and precise definitions of scientific processes used to process raw and derived data sets. The formalisms used to define analytic webs are adaptations of those used in software engineering, and they provide a novel and effective support system for both the synthesis and the validation of ecological data sets. We illustrate the utility of an analytic web as an aid to producing synthetic data sets through a worked example: the synthesis of long-term measurements of whole-ecosystem carbon exchange. Analytic webs are also useful validation aids for consumers because they support the concurrent construction of a complete, Internet-accessible audit trail of the analytic processes used in the synthesis of the data sets. Finally we describe our early efforts to evaluate these ideas through the use of a prototype software tool, SciWalker. We indicate how this tool has been used to create analytic webs tailored to specific data-set synthesis and validation activities, and suggest extensions to it that will support additional forms of validation. The process metadata created by SciWalker is

  3. Symmetries of the second-difference matrix and the finite Fourier transform

    International Nuclear Information System (INIS)

    Aguilar, A.; Wolf, K.B.

    1979-01-01

    The finite Fourier transformation is well known to diagonalize the second-difference matrix and has been thus applied extensively to describe finite crystal lattices and electric networks. In setting out to find all transformations having this property, we obtain a multiparameter class of them. While permutations and unitary scaling of the eigenvectors constitute the trivial freedom of choice common to all diagonalization processes, the second-difference matrix has a larger symmetry group among whose elements we find the dihedral manifest symmetry transformations of the lattice. The latter are nevertheless sufficient for the unique specification of eigenvectors in various symmetry-adapted bases for the constrained lattice. The free symmetry parameters are shown to lead to a complete set of conserved quantities for the physical lattice motion. (author)
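The diagonalization property the abstract starts from can be checked numerically for the periodic (circulant) second-difference matrix; a short sketch:

```python
import numpy as np

N = 8
# second-difference matrix with periodic boundary (a circulant matrix):
# -2 on the diagonal, 1 on the first off-diagonals and in the corners
D = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
D[0, -1] = D[-1, 0] = 1

# finite Fourier transform matrix F[m, n] = exp(-2*pi*i*m*n/N) / sqrt(N)
m, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(-2j * np.pi * m * n / N) / np.sqrt(N)

T = F @ D @ F.conj().T                 # conjugation by the unitary DFT matrix
off_diagonal = T - np.diag(np.diag(T))
print(np.max(np.abs(off_diagonal)))    # ~0: the transform diagonalizes D
```

The resulting eigenvalues are -2 + 2 cos(2*pi*k/N), the familiar dispersion relation of the discrete lattice; the paper's point is that the DFT is only one member of a larger family of diagonalizing transforms.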

  4. Validity of Two WPPSI Short Forms in Outpatient Clinic Settings.

    Science.gov (United States)

    Haynes, Jack P.; Atkinson, David

    1983-01-01

    Investigated the validity of subtest short forms for the Wechsler Preschool and Primary Scale of Intelligence in an outpatient population of 116 children. Data showed that the short forms underestimated actual level of intelligence and supported use of a short form only as a brief screening device. (LLL)

  5. High-dimensional statistical inference: From vector to matrix

    Science.gov (United States)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications and has attracted considerable recent attention in many fields, including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems, including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, for any epsilon > 0 the conditions delta_k^A < 1/3 + epsilon, delta_k^A + theta_{k,k}^A < 1 + epsilon, or delta_{tk}^A < √((t - 1)/t) + epsilon are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions delta_k^A < 1/3, delta_k^A + theta_{k,k}^A < 1, delta_{tk}^A < √((t - 1)/t) and delta_r^M < 1/3, delta_r^M + theta_{r,r}^M < 1, delta_{tr}^M < √((t - 1)/t) are also shown to be sufficient respectively for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The
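The restricted isometry constant delta_k that these conditions bound can be computed by brute force for toy matrices; a sketch (exponential in k, so illustration only — the function name is ours):

```python
import numpy as np
from itertools import combinations

def rip_constant(A, k):
    """Brute-force restricted isometry constant delta_k of matrix A.

    delta_k = max over all k-column submatrices A_S of the spectral
    deviation of A_S^T A_S from the identity. Exponential cost: toy sizes only.
    """
    _, p = A.shape
    delta = 0.0
    for S in combinations(range(p), k):
        G = A[:, S].T @ A[:, S]
        eigs = np.linalg.eigvalsh(G)
        delta = max(delta, max(abs(eigs.max() - 1), abs(1 - eigs.min())))
    return delta

rng = np.random.default_rng(1)
n, p = 30, 12
A = rng.standard_normal((n, p)) / np.sqrt(n)  # columns roughly unit norm
print(rip_constant(A, 2))
```

By eigenvalue interlacing, delta_k grows with k, which is why the sharp thresholds above are stated per sparsity level.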

  6. Experimental validation of control strategies for a microgrid test facility including a storage system and renewable generation sets

    DEFF Research Database (Denmark)

    Baccino, Francesco; Marinelli, Mattia; Silvestro, Federico

    2012-01-01

    The paper is aimed at describing and validating some control strategies in the SYSLAB experimental test facility, characterized by the presence of a low-voltage network with a 15 kW-190 kWh Vanadium Redox Flow battery system and an 11 kW wind turbine. The generation set is connected to the local network and is fully controllable by the SCADA system. The control strategies, implemented on a local PC interfaced to the SCADA, are realized in Matlab-Simulink. The main purpose is to control the charge/discharge action of the storage system in order to present the desired power or energy profiles at the point of common coupling.

  7. A Binary Cat Swarm Optimization Algorithm for the Non-Unicost Set Covering Problem

    Directory of Open Access Journals (Sweden)

    Broderick Crawford

    2015-01-01

    The Set Covering Problem consists in finding a subset of columns in a zero-one matrix such that they cover all the rows of the matrix at a minimum cost. To solve the Set Covering Problem we use a metaheuristic called Binary Cat Swarm Optimization. This metaheuristic is a recent swarm technique based on cat behavior. Domestic cats show the ability to hunt and are curious about moving objects. Based on this, the cats have two modes of behavior: seeking mode and tracing mode. We are the first to use this metaheuristic to solve this problem; our algorithm solves a set of 65 Set Covering Problem instances from OR-Library.
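For reference, the zero-one matrix formulation lends itself to a classical greedy baseline (not the paper's Binary Cat Swarm Optimization; this is the textbook cost-effectiveness heuristic on an invented instance):

```python
import numpy as np

def greedy_set_cover(A, cost):
    """Greedy heuristic for the set covering problem.

    A: zero-one matrix (rows = elements to cover, columns = candidate sets);
    cost: per-column costs. Repeatedly pick the column with the lowest
    cost per newly covered row until every row is covered.
    """
    m, n = A.shape
    uncovered = set(range(m))
    chosen = []
    while uncovered:
        best, best_ratio = None, float("inf")
        for jcol in range(n):
            new = sum(1 for i in uncovered if A[i, jcol])
            if new == 0:
                continue
            ratio = cost[jcol] / new  # cost per newly covered row
            if ratio < best_ratio:
                best, best_ratio = jcol, ratio
        if best is None:
            raise ValueError("instance is infeasible")
        chosen.append(best)
        uncovered -= {i for i in uncovered if A[i, best]}
    return chosen

A = np.array([[1, 0, 0, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
cost = [2, 1, 1, 1]
picks = greedy_set_cover(A, cost)
print(picks)  # → [1, 3]
```

Metaheuristics such as cat swarm optimization aim to beat this greedy bound on hard non-unicost instances like those in OR-Library.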

  8. Defining a turnover index for the correlation of biomaterial degradation and cell based extracellular matrix synthesis using fluorescent tagging techniques.

    Science.gov (United States)

    Bardsley, Katie; Wimpenny, Ian; Wechsler, Roni; Shachaf, Yonatan; Yang, Ying; El Haj, Alicia J

    2016-11-01

    Non-destructive protocols which can define a biomaterial's degradation and its associated ability to support proliferation and/or promote extracellular matrix deposition will be an essential in vitro tool. In this study we investigate fluorescently tagged biomaterials, with varying rates of degradation and their ability to support cell proliferation and osteogenic differentiation. Changes in fluorescence of the biomaterials and the release of fluorescent soluble by-products were confirmed as accurate methods to quantify degradation. It was demonstrated that increasing rates of the selected biomaterials' degradation led to a decrease in cell proliferation and concurrently an increase in osteogenic matrix production. A novel turnover index (TI), which directly describes the effect of degradation of a biomaterial on cell behaviour, was calculated. Lower TIs for proliferation and high TIs for osteogenic marker production were observed on faster degrading biomaterials, indicating that these biomaterials supported an upregulation of osteogenic markers. This TI was further validated using an ex vivo chick femur model, where the faster degrading biomaterial, fibrin, led to an increased TI for mineralisation within an epiphyseal defect. This in vitro tool, TI, for monitoring the effect of biomaterial degradation on extracellular matrix production may well act as predictor of the selected biomaterials' performance during in vivo studies. This paper outlines a novel metric, Turnover Index (TI), which can be utilised in tissue-engineering for the comparison of a range of biomaterials. The metric sets out to define the relationship between the rate of degradation of biomaterials with the rate of cell proliferation and ECM synthesis, ultimately allowing us to tailor material for set clinical requirements. 
We have discovered some novel comparative findings that cells cultured on biomaterials with increased rates of degradation have lower rates of proliferation but alternatively

  9. Rotation of hard particles in a soft matrix

    Science.gov (United States)

    Yang, Weizhu; Liu, Qingchang; Yue, Zhufeng; Li, Xiaodong; Xu, Baoxing

    Soft-hard materials integration is ubiquitous in biological materials and structures in nature and has also attracted growing attention in the bio-inspired design of advanced functional materials, structures and devices. Due to the distinct difference in their mechanical properties, the rotation of hard phases in soft matrixes upon deformation has been acknowledged, yet a mechanics theory describing it has been lacking. In this work, we propose a theoretical mechanics framework that can describe the rotation of hard particles in a soft matrix. The rotation of multiple arbitrarily shaped, located and oriented particles with perfectly bonded interfaces in an elastic soft matrix subjected to a far-field tensile loading is established, and analytical solutions are derived by using complex potentials and conformal mapping methods. Strong couplings and competitions of the rotation of hard particles among each other are discussed by investigating numbers, relative locations and orientations of particles in the matrix at different loading directions. Extensive finite element analyses are performed to validate the theoretical solutions, and good agreement of both rotation and stress fields between them is achieved. Possible extensions of the present theory to non-rigid particles, viscoelastic matrixes and imperfect bonding are also discussed. Finally, by taking advantage of the rotation of hard particles, we exemplify an application in a conceptual design of a soft-hard material integrated phononic crystal and demonstrate that phononic band gaps can be successfully tuned with high accuracy through the mechanical tension-induced rotation of hard particles. The present theory established herein is expected to be of immediate interest for the design of soft-hard materials integration based functional materials, structures and devices with tunable performance via mechanical rotation of hard phases.

  10. Leakage localisation method in a water distribution system based on sensitivity matrix: methodology and real test

    OpenAIRE

    Pascual Pañach, Josep

    2010-01-01

    Leaks are present in all water distribution systems. In this paper a method for leakage detection and localisation is presented. It uses pressure measurements and simulation models. Leakage localisation methodology is based on pressure sensitivity matrix. Sensitivity is normalised and binarised using a common threshold for all nodes, so a signatures matrix is obtained. A pressure sensor optimal distribution methodology is developed too, but it is not used in the real test. To validate this...
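The normalise-and-binarise step the abstract describes can be sketched with hypothetical numbers (the sensitivity values, threshold, and function name below are illustrative inventions, not the paper's data):

```python
import numpy as np

# Hypothetical sensitivity matrix: rows = pressure sensors, columns = candidate
# leak nodes; each entry = pressure change at a sensor per unit leak at a node.
S = np.array([[0.9, 0.1, 0.0],
              [0.8, 0.7, 0.1],
              [0.1, 0.6, 0.9]])

# normalise each column by its largest response, then binarise with a
# common threshold for all nodes, giving a signatures matrix
Sn = S / np.abs(S).max(axis=0)
signatures = (Sn >= 0.5).astype(int)

def localise(residual, signatures, threshold=0.5):
    """Return the candidate node whose binary signature best matches
    the binarised pressure-residual pattern."""
    pattern = (residual / np.abs(residual).max() >= threshold).astype(int)
    matches = (signatures == pattern[:, None]).sum(axis=0)
    return int(np.argmax(matches))

# simulated residual from a leak near node 2 (sensor 3 reacts most)
residual = np.array([0.05, 0.12, 0.95])
print(localise(residual, signatures))  # → 2
```

Matching binary signatures instead of raw sensitivities makes the localisation robust to the unknown leak magnitude, at the price of choosing a single threshold for all nodes.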

  11. Projection matrix acquisition for cone-beam computed tomography iterative reconstruction

    Science.gov (United States)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Shi, Wenlong; Zhang, Caixin; Gao, Zongzhao

    2017-02-01

    Projection matrix computation is an essential and time-consuming part of computed tomography (CT) iterative reconstruction. In this article a novel calculation algorithm for the three-dimensional (3D) projection matrix is proposed to quickly acquire the matrix for cone-beam CT (CBCT). The CT volume to be reconstructed is considered as consisting of three orthogonal sets of equally spaced, parallel planes rather than individual voxels. After obtaining the intersections of the rays with the surfaces of the voxels, the intersection coordinates are compared with the voxel vertices to obtain the index values of the voxels each ray traverses. Rather than considering the ray's slope relative to each voxel, the method only needs to compare the positions of two points. Finally, computer simulation is used to verify the effectiveness of the algorithm.
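A simplified 2-D sketch of the plane-intersection idea (our own reduction of the 3-D method, assuming unit voxel spacing; the function name is hypothetical): intersect the ray with the two orthogonal sets of grid planes, merge the parameters, and read each crossed cell off the sub-segment midpoints.

```python
import numpy as np

def traverse_2d(p0, p1, nx, ny):
    """Voxel indices crossed by the segment p0->p1 in a unit-spaced 2-D grid."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    ts = [0.0, 1.0]
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0:
            planes = np.arange(n + 1)          # grid plane coordinates
            t = (planes - p0[axis]) / d[axis]  # parametric intersections
            ts.extend(t[(t > 0) & (t < 1)])
    ts = np.unique(ts)
    cells = []
    for ta, tb in zip(ts[:-1], ts[1:]):
        mid = p0 + 0.5 * (ta + tb) * d         # midpoint of each sub-segment
        i, j = int(mid[0]), int(mid[1])        # cell index = floored midpoint
        if 0 <= i < nx and 0 <= j < ny:
            cells.append((i, j))
    return cells

print(traverse_2d((0.0, 0.5), (4.0, 2.5), nx=4, ny=4))
# → [(0, 0), (1, 1), (2, 1), (3, 2)]
```

Only point comparisons and one sort per ray are needed, which is the source of the speedup the abstract claims over per-voxel slope tests.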

  12. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    Recent explosion of biological data brings a great challenge for the traditional clustering algorithms. With increasing scale of data sets, much larger memory and longer runtime are required for the cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate way of data partition and reduction is designed in our method, in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
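The similarity matrix that must be built before affinity propagation is conventionally the negative squared Euclidean distance between data pairs. A vectorised numpy sketch of that O(n^2 d) precursor step (the parallel decomposition itself is not shown; the function name is ours):

```python
import numpy as np

def similarity_matrix(X):
    """Pairwise similarities s(i, k) = -||x_i - x_k||^2, vectorised."""
    sq = (X ** 2).sum(axis=1)
    # ||x_i - x_k||^2 = ||x_i||^2 + ||x_k||^2 - 2 x_i . x_k
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return -np.maximum(d2, 0)  # clip tiny negative rounding errors

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 16))
S = similarity_matrix(X)
print(S.shape)  # → (200, 200)
```

Since the matrix is symmetric, a parallel implementation need only compute and exchange one triangle, which is one place the paper's data-partition scheme saves communication.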

  13. Kinetic-energy matrix elements for atomic Hylleraas-CI wave functions

    Energy Technology Data Exchange (ETDEWEB)

    Harris, Frank E., E-mail: harris@qtp.ufl.edu [Department of Physics, University of Utah, Salt Lake City, Utah 84112, USA and Quantum Theory Project, University of Florida, P.O. Box 118435, Gainesville, Florida 32611 (United States)

    2016-05-28

    Hylleraas-CI is a superposition-of-configurations method in which each configuration is constructed from a Slater-type orbital (STO) product to which is appended (linearly) at most one interelectron distance r_ij. Computations of the kinetic energy for atoms by this method have been difficult due to the lack of formulas expressing these matrix elements for general angular momentum in terms of overlap and potential-energy integrals. It is shown here that a strategic application of angular-momentum theory, including the use of vector spherical harmonics, enables the reduction of all atomic kinetic-energy integrals to overlap and potential-energy matrix elements. The new formulas are validated by showing that they yield correct results for a large number of integrals published by other investigators.

  14. S-AMP: Approximate Message Passing for General Matrix Ensembles

    DEFF Research Database (Denmark)

    Cakmak, Burak; Winther, Ole; Fleury, Bernard H.

    2014-01-01

    We propose a novel iterative estimation algorithm for linear observation models called S-AMP. The fixed points of S-AMP are the stationary points of the exact Gibbs free energy under a set of (first- and second-) moment consistency constraints in the large system limit. S-AMP extends the approximate message-passing (AMP) algorithm to general matrix ensembles with a well-defined large system size limit. The generalization is based on the S-transform (in free probability) of the spectrum of the measurement matrix. Furthermore, we show that the optimality of S-AMP follows directly from its

  15. CATS Deliverable 5.1 : CATS verification of test matrix and protocol

    OpenAIRE

    Uittenbogaard, J.; Camp, O.M.G.C. op den; Montfort, S. van

    2016-01-01

    This report summarizes the work conducted within work package (WP) 5 "Verification of test matrix and protocol" of the Cyclist AEB testing system (CATS) project. It describes the verification process of the draft CATS test matrix resulting from WP1 and WP2, and the feasibility of meeting requirements set by CATS consortium based on requirements in Euro NCAP AEB protocols regarding accuracy, repeatability and reproducibility using the developed test hardware. For the cases where verification t...

  16. Spatial and thematic assessment of object-based forest stand delineation using an OFA-matrix

    Science.gov (United States)

    Hernando, A.; Tiede, D.; Albrecht, F.; Lang, S.

    2012-10-01

    The delineation and classification of forest stands is a crucial aspect of forest management. Object-based image analysis (OBIA) can be used to produce detailed maps of forest stands from either orthophotos or very high resolution satellite imagery. However, measures are then required for evaluating and quantifying both the spatial and thematic accuracy of the OBIA output. In this paper we present an approach for delineating forest stands and a new Object Fate Analysis (OFA) matrix for accuracy assessment. A two-level object-based orthophoto analysis was first carried out to delineate stands on the Dehesa Boyal public land in central Spain (Avila Province). Two structural features were first created for use in class modelling, enabling good differentiation between stands: a relational tree cover cluster feature, and an arithmetic ratio shadow/tree feature. We then extended the OFA comparison approach with an OFA-matrix to enable concurrent validation of thematic and spatial accuracies. Its diagonal shows the proportion of spatial and thematic coincidence between the reference data and the corresponding classification. New parameters for Spatial Thematic Loyalty (STL), Spatial Thematic Loyalty Overall (STLOVERALL) and Maximal Interfering Object (MIO) are introduced to summarise the OFA-matrix accuracy assessment. A stands map generated by OBIA (classification data) was compared with a map of the same area produced from photo interpretation and field data (reference data). In our example the OFA-matrix results indicate good spatial and thematic accuracies (>65%) for all stand classes except for the shrub stands (31.8%), and a good STLOVERALL (69.8%). The OFA-matrix has therefore been shown to be a valid tool for OBIA accuracy assessment.

  17. Validity of measures of pain and symptoms in HIV/AIDS infected households in resources poor settings: results from the Dominican Republic and Cambodia

    Directory of Open Access Journals (Sweden)

    Morineau Guy

    2006-03-01

    Abstract Background HIV/AIDS treatment programs that include palliative care services are currently being mounted in many developing nations. While measures of palliative care have been developed and validated for resource-rich settings, very little work exists to support an understanding of measurement for Africa, Latin America or Asia. Methods This study investigates the construct validity of measures of reported pain, pain control, symptoms and symptom control in areas with high HIV-infected prevalence in the Dominican Republic and Cambodia. Measures were adapted from the POS (Palliative Outcome Scale). Households were selected through purposive sampling from networks of people living with HIV/AIDS. Consistencies in patterns in the data were tested using Chi-Square and Mantel-Haenszel tests. Results The sample persons who reported chronic illness were much more likely to report pain and symptoms compared to those not chronically ill. When controlling for the degree of pain, pain control did not differ between the chronically ill and non-chronically ill using a Mantel-Haenszel test in both countries. Similar results were found for reported symptoms and symptom control in the Dominican Republic. These findings broadly support the construct validity of an adapted version of the POS in these two less developed countries. Conclusion The results of the study suggest that the selected measures can usefully be incorporated into population-based surveys and evaluation tools needed to monitor palliative care and used in settings with high HIV/AIDS prevalence.
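The Mantel-Haenszel test used above to compare pain control across illness groups while controlling for the degree of pain can be sketched with the textbook chi-square formula for stratified 2x2 tables (the counts below are invented for illustration, not the study's data):

```python
def mantel_haenszel_chi2(tables, correction=True):
    """Mantel-Haenszel chi-square across stratified 2x2 tables.

    tables: iterable of ((a, b), (c, d)) cell counts, one table per stratum
    (here, strata could be levels of reported pain). Uses the standard
    continuity-corrected statistic (|sum a - sum E| - 0.5)^2 / sum V.
    """
    num, E, V = 0.0, 0.0, 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        num += a
        E += (a + b) * (a + c) / n                                # expected a
        V += (a + b) * (c + d) * (a + c) * (b + d) / (n**2 * (n - 1))
    return (abs(num - E) - (0.5 if correction else 0.0)) ** 2 / V

# toy counts: two pain strata, rows = chronically ill / not, cols = pain controlled / not
tables = [((12, 8), (10, 10)), ((20, 5), (18, 7))]
print(round(mantel_haenszel_chi2(tables), 3))
```

Pooling evidence across strata this way is what lets the study say pain control "did not differ" once the degree of pain was held fixed.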

  18. 11th GCC Closed Forum: cumulative stability; matrix stability; immunogenicity assays; laboratory manuals; biosimilars; chiral methods; hybrid LBA/LCMS assays; fit-for-purpose validation; China Food and Drug Administration bioanalytical method validation.

    Science.gov (United States)

    Islam, Rafiq; Briscoe, Chad; Bower, Joseph; Cape, Stephanie; Arnold, Mark; Hayes, Roger; Warren, Mark; Karnik, Shane; Stouffer, Bruce; Xiao, Yi Qun; van der Strate, Barry; Sikkema, Daniel; Fang, Xinping; Tudoroniu, Ariana; Tayyem, Rabab; Brant, Ashley; Spriggs, Franklin; Barry, Colin; Khan, Masood; Keyhani, Anahita; Zimmer, Jennifer; Caturla, Maria Cruz; Couerbe, Philippe; Khadang, Ardeshir; Bourdage, James; Datin, Jim; Zemo, Jennifer; Hughes, Nicola; Fatmi, Saadya; Sheldon, Curtis; Fountain, Scott; Satterwhite, Christina; Colletti, Kelly; Vija, Jenifer; Yu, Mathilde; Stamatopoulos, John; Lin, Jenny; Wilfahrt, Jim; Dinan, Andrew; Ohorodnik, Susan; Hulse, James; Patel, Vimal; Garofolo, Wei; Savoie, Natasha; Brown, Michael; Papac, Damon; Buonarati, Mike; Hristopoulos, George; Beaver, Chris; Boudreau, Nadine; Williard, Clark; Liu, Yansheng; Ray, Gene; Warrino, Dominic; Xu, Allan; Green, Rachel; Hayward-Sewell, Joanne; Marcelletti, John; Sanchez, Christina; Kennedy, Michael; Charles, Jessica St; Bouhajib, Mohammed; Nehls, Corey; Tabler, Edward; Tu, Jing; Joyce, Philip; Iordachescu, Adriana; DuBey, Ira; Lindsay, John; Yamashita, Jim; Wells, Edward

    2018-04-01

    The 11th Global CRO Council Closed Forum was held in Universal City, CA, USA on 3 April 2017. Representatives from international CRO members offering bioanalytical services were in attendance in order to discuss scientific and regulatory issues specific to bioanalysis. The second CRO-Pharma Scientific Interchange Meeting was held on 7 April 2017, which included Pharma representatives' sharing perspectives on the topics discussed earlier in the week with the CRO members. The issues discussed at the meetings included cumulative stability evaluations, matrix stability evaluations, the 2016 US FDA Immunogenicity Guidance and recent and unexpected FDA Form 483s on immunogenicity assays, the bioanalytical laboratory's role in writing PK sample collection instructions, biosimilars, CRO perspectives on the use of chiral versus achiral methods, hybrid LBA/LCMS assays, applications of fit-for-purpose validation and, at the Global CRO Council Closed Forum only, the status and trend of current regulated bioanalytical practice in China under CFDA's new BMV policy. Conclusions from discussions of these topics at both meetings are included in this report.

  19. Optimization of Coil Element Configurations for a Matrix Gradient Coil.

    Science.gov (United States)

    Kroboth, Stefan; Layton, Kelvin J; Jia, Feng; Littin, Sebastian; Yu, Huijun; Hennig, Jurgen; Zaitsev, Maxim

    2018-01-01

    Recently, matrix gradient coils (also termed multi-coils or multi-coil arrays) were introduced for imaging and B0 shimming with 24, 48, and even 84 coil elements. However, in imaging applications, providing one amplifier per coil element is not always feasible due to high cost and technical complexity. In this simulation study, we show that an 84-channel matrix gradient coil (a head insert for brain imaging) can create a wide variety of field shapes even if the number of amplifiers is reduced. An optimization algorithm was implemented that obtains groups of coil elements such that a desired target field can be created by driving each group with one amplifier, which limits the number of amplifiers to the number of coil element groups. Simulated annealing is used due to the NP-hard combinatorial nature of the given problem. Spherical harmonics up to the full third order, evaluated within a sphere of 20-cm diameter at the center of the coil, were investigated as target fields. We show that the median normalized least-squares error for all target fields is below approximately 5% for 12 or more amplifiers, while the dissipated power stays within reasonable limits. With a relatively small set of amplifiers, switches can be used to sequentially generate spherical harmonics up to third order. The costs associated with a matrix gradient coil can thus be lowered, which increases the practical utility of matrix gradient coils.
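The grouping-plus-annealing idea in this record can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sensitivity matrix `S` (one field map per coil element, sampled at a set of points), the target field, and the cooling schedule are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_field_error(S, target, groups, n_groups):
    """Relative least-squares field error when the elements of each group
    share a single amplifier current.
    S: (n_elements, n_points) field map of each coil element."""
    A = np.stack([S[groups == g].sum(axis=0) for g in range(n_groups)], axis=1)
    currents, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.linalg.norm(A @ currents - target) / np.linalg.norm(target)

def anneal_groups(S, target, n_groups, n_iter=20000, t0=0.1, t1=1e-4):
    """Simulated annealing over group assignments with geometric cooling."""
    n_elem = S.shape[0]
    groups = rng.integers(0, n_groups, n_elem)        # random start
    err = group_field_error(S, target, groups, n_groups)
    best, best_err = groups.copy(), err
    for k in range(n_iter):
        temp = t0 * (t1 / t0) ** (k / n_iter)         # cooling schedule
        cand = groups.copy()
        cand[rng.integers(n_elem)] = rng.integers(n_groups)  # move one element
        cand_err = group_field_error(S, target, cand, n_groups)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if cand_err < err or rng.random() < np.exp((err - cand_err) / temp):
            groups, err = cand, cand_err
        if err < best_err:
            best, best_err = groups.copy(), err
    return best, best_err
```

Each candidate move reassigns one element to another group; the least-squares current solve inside the cost function reflects the constraint that every group is driven by a single amplifier.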

  20. Teaching the extracellular matrix and introducing online databases within a multidisciplinary course with i-cell-MATRIX: A student-centered approach.

    Science.gov (United States)

    Sousa, João Carlos; Costa, Manuel João; Palha, Joana Almeida

    2010-03-01

    The biochemistry and molecular biology of the extracellular matrix (ECM) is difficult to convey to students in a classroom setting in ways that capture their interest. Understanding of the matrix's roles in physiological and pathological conditions will presumably be hampered by insufficient knowledge of its molecular structure. Internet-available resources can bridge the gap between the molecular details and the ECM's biological properties and associated processes. This article presents an approach to teaching the ECM, developed for first-year medical undergraduates who, working in teams: (i) explore a specific molecular component of the matrix, (ii) identify a disease in which the component is implicated, (iii) investigate how the component's structure/function contributes to the ECM's supramolecular organization in physiological and in pathological conditions, and (iv) share their findings with colleagues. The approach, designated i-cell-MATRIX, is focused on the contribution of individual components to the overall organization and biological functions of the ECM. i-cell-MATRIX is student centered and uses 5 hours of class time. Summary of results and take-home message: A "1-minute paper" has been used to gather student feedback on the impact of i-cell-MATRIX. Qualitative analysis of student feedback gathered in three consecutive years revealed that students appreciate the approach's reliance on self-directed learning, the interactivity embedded, and the demand for deeper insights on the ECM. Learning how to use internet biomedical resources is another positive outcome. Ninety percent of students recommend the activity for subsequent years. i-cell-MATRIX is adaptable by other medical schools looking for an approach that achieves higher student engagement with the ECM. Copyright © 2010 International Union of Biochemistry and Molecular Biology, Inc.

  1. Characteristic analysis on UAV-MIMO channel based on normalized correlation matrix.

    Science.gov (United States)

    Gao, Xi jun; Chen, Zi li; Hu, Yong Jiang

    2014-01-01

    Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicle (UAV), the simple form of UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By the methods of channel matrix decomposition and coefficient normalization, the analytic formula of UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can be applied to describe the changes of UAV-MIMO channel characteristics under different parameter settings comprehensively. This analysis method provides a theoretical basis for improving the transmission performance of UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication.
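As a toy illustration of the quantities this record analyzes, the sketch below normalizes a channel correlation matrix and computes the condition number and capacity of one channel draw. An i.i.d. Rayleigh channel stands in for the GBSBCM UAV model, so the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def normalized_correlation(H):
    """Correlation matrix of the receive branches, normalized so that the
    diagonal (per-antenna power) equals one."""
    R = H @ H.conj().T / H.shape[1]
    d = np.sqrt(np.real(np.diag(R)))
    return R / np.outer(d, d)

def channel_metrics(H, snr=10.0):
    """Condition number of the channel matrix and the equal-power MIMO
    capacity (bits/s/Hz) for one channel realization."""
    s = np.linalg.svd(H, compute_uv=False)
    cond = s[0] / s[-1]
    cap = np.sum(np.log2(1.0 + snr * s**2 / H.shape[1]))
    return cond, cap

# toy 4x4 Rayleigh channel standing in for the UAV GBSBCM channel model
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
R = normalized_correlation(H)
cond, cap = channel_metrics(H)
```

A large condition number signals strongly correlated sub-channels and hence reduced capacity, which is the qualitative link the record draws between the correlation matrix and UAV-MIMO performance.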

  2. On the q-exponential of matrix q-Lie algebras

    Directory of Open Access Journals (Sweden)

    Ernst Thomas

    2017-01-01

    Full Text Available In this paper, we define several new concepts in the borderline between linear algebra, Lie groups and q-calculus. We first introduce the ring epimorphism τ, the set of all inversions of the basis q, and then the important q-determinant and corresponding q-scalar products from an earlier paper. Then we discuss matrix q-Lie algebras with a modified q-addition, and compute the matrix q-exponential to form the corresponding n × n matrix, a so-called q-Lie group, or manifold, usually with q-determinant 1. The corresponding matrix multiplication is twisted under τ, which makes it possible to draw diagrams similar to Lie group theory for the q-exponential, or the so-called q-morphism. There is no definition of letter multiplication in a general alphabet, but in this article we introduce new q-number systems, the biring of q-integers, and the extended q-rational numbers. Furthermore, we provide examples of matrices in suq(4), and its corresponding q-Lie group. We conclude with an example of a system of equations with Ward number coefficients.

  3. Development of a Reference Data Set (RDS) for dental age estimation (DAE) and testing of this with a separate Validation Set (VS) in a southern Chinese population.

    Science.gov (United States)

    Jayaraman, Jayakumar; Wong, Hai Ming; King, Nigel M; Roberts, Graham J

    2016-10-01

    Many countries have recently experienced a rapid increase in the demand for forensic age estimates of unaccompanied minors. Hong Kong is a major tourist and business center where there has been an increase in the number of people intercepted with false travel documents. An accurate estimation of age is only possible when the dataset used for age estimation has been derived from the corresponding ethnic population. Thus, the aim of this study was to develop and validate a Reference Data Set (RDS) for dental age estimation for southern Chinese. A total of 2306 subjects were selected from the patient archives of a large dental hospital and the chronological age for each subject was recorded. This age was assigned to each specific stage of dental development for each tooth to create the RDS. To validate this RDS, a further 484 subjects were randomly chosen from the patient archives and their dental age was assessed based on the scores from the RDS. Dental age was estimated using a meta-analysis command corresponding to a random-effects statistical model. Chronological age (CA) and dental age (DA) were compared using the paired t-test. The overall difference between chronological and dental age (CA-DA) was 0.05 years (2.6 weeks) for males and 0.03 years (1.6 weeks) for females. The paired t-test indicated that there was no statistically significant difference between chronological and dental age (p > 0.05). The validated southern Chinese reference dataset based on dental maturation accurately estimated the chronological age. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
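The CA-versus-DA comparison reduces to a paired t-test on the per-subject differences. The sketch below runs it on synthetic ages (the 0.8-year scatter is an assumption, not a figure from the study), using only NumPy and the large-sample critical value.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical validation set (484 subjects): dental age (DA) scatters around
# chronological age (CA) with no systematic bias, mimicking the study design.
ca = rng.uniform(5.0, 20.0, size=484)          # chronological ages in years
da = ca + rng.normal(0.0, 0.8, size=484)       # dental age estimates

d = ca - da                                     # paired differences, CA - DA
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))   # paired t statistic

# With 483 degrees of freedom the two-sided 5% critical value is about 1.965;
# |t| below it means no significant CA-DA difference, as the study reports.
significant = abs(t_stat) > 1.965
```

The study's reported biases of 0.05 and 0.03 years correspond to `d.mean()` in this notation, with the t-test judging whether such a bias is distinguishable from zero.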

  4. Empirical Coulomb matrix elements and the mass of 22Al

    International Nuclear Information System (INIS)

    Whitehead, R.R.; Watt, A.; Kelvin, D.; Rutherford, H.J.

    1976-01-01

    An attempt has been made to obtain a set of Coulomb matrix elements which fit the known Coulomb energy shifts in the nuclei of mass 18 to 22. The interaction obtained fits the data well with only a few exceptions, one of these being the Coulomb shift of the notorious third 0 + state in 18 Ne. These Coulomb matrix elements are used together with the Chung-Wildenthal interaction to obtain a new prediction for the mass excess of 22 Al. The results indicate that 22 Al should be bound against proton emission. (Auth.)

  5. Data on a Laves phase intermetallic matrix composite in situ toughened by ductile precipitates.

    Science.gov (United States)

    Knowles, Alexander J; Bhowmik, Ayan; Purkayastha, Surajit; Jones, Nicholas G; Giuliani, Finn; Clegg, William J; Dye, David; Stone, Howard J

    2017-10-01

    The data presented in this article are related to the research article entitled "Laves phase intermetallic matrix composite in situ toughened by ductile precipitates" (Knowles et al.) [1]. The composite comprised an Fe2(Mo,Ti) matrix with bcc (Mo,Ti) precipitated laths produced in situ by an aging heat treatment, which was shown to confer a toughening effect (Knowles et al.) [1]. Here, details are given on a focused ion beam (FIB) slice-and-view experiment performed on the composite so as to determine that the 3D morphology of the bcc (Mo,Ti) precipitates was laths rather than needles. Scanning transmission electron microscopy (STEM) micrographs of the microstructure as well as energy dispersive X-ray spectroscopy (EDX) maps are presented that identify the elemental partitioning between the C14 Laves matrix and the bcc laths, with Mo rejected from the matrix into the laths. A TEM selected area diffraction pattern (SADP) and key are provided that were used to validate the orientation relation between the matrix and laths identified in (Knowles et al.) [1], along with details of the transformation matrix determined.

  6. Data on a Laves phase intermetallic matrix composite in situ toughened by ductile precipitates

    Directory of Open Access Journals (Sweden)

    Alexander J. Knowles

    2017-10-01

    Full Text Available The data presented in this article are related to the research article entitled “Laves phase intermetallic matrix composite in situ toughened by ductile precipitates” (Knowles et al.) [1]. The composite comprised an Fe2(Mo,Ti) matrix with bcc (Mo,Ti) precipitated laths produced in situ by an aging heat treatment, which was shown to confer a toughening effect (Knowles et al.) [1]. Here, details are given on a focused ion beam (FIB) slice-and-view experiment performed on the composite so as to determine that the 3D morphology of the bcc (Mo,Ti) precipitates was laths rather than needles. Scanning transmission electron microscopy (STEM) micrographs of the microstructure as well as energy dispersive X-ray spectroscopy (EDX) maps are presented that identify the elemental partitioning between the C14 Laves matrix and the bcc laths, with Mo rejected from the matrix into the laths. A TEM selected area diffraction pattern (SADP) and key are provided that were used to validate the orientation relation between the matrix and laths identified in (Knowles et al.) [1], along with details of the transformation matrix determined.

  7. Ceramic matrix and resin matrix composites - A comparison

    Science.gov (United States)

    Hurwitz, Frances I.

    1987-01-01

    The underlying theory of continuous fiber reinforcement of ceramic matrix and resin matrix composites, their fabrication, microstructure, physical and mechanical properties are contrasted. The growing use of organometallic polymers as precursors to ceramic matrices is discussed as a means of providing low temperature processing capability without the fiber degradation encountered with more conventional ceramic processing techniques. Examples of ceramic matrix composites derived from particulate-filled, high char yield polymers and silsesquioxane precursors are provided.

  9. Verification and validation of multi-group library MUSE1.0 created from ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Chen Yixue; Wu Jun; Yang Shouhai; Zhang Bin; Lu Daogang; Chen Chaobin

    2010-01-01

    A multi-group library set named MUSE1.0, with 172 neutron groups and 42 photon groups, was produced from ENDF/B-VII.0 using the NJOY code. The weight function of the multi-group library set is taken from the VITAMIN-E library and the maximum Legendre order of the scattering matrix is six. All nuclides have thermal scattering data created using the free-gas scattering law, and 10 Bondarenko background cross sections were selected to generate the self-shielded multi-group cross sections. The final libraries comprise GENDF-format, MATXS-format and ACE-format multi-group sub-libraries, with each sub-library generated at four temperatures (293 K, 600 K, 800 K and 900 K). This paper provides a summary of the procedure used to produce the library set and a detailed description of the validation of the multi-group library set against several criticality benchmark devices and shielding benchmark devices using the MCNP code. The abilities to handle thermal neutron transport and resonance self-shielding problems are investigated in particular. We conclude that the multi-group libraries produced are credible and can be used in the R and D process of Supercritical Water Reactor design. (authors)

  10. Matrix of transmission in structural dynamics

    International Nuclear Information System (INIS)

    Mukherjee, S.

    1975-01-01

    Within the last few years numerous papers have been published on the subject of matrix methods in elasto-mechanics. The 'Matrix of Transmission' is one of the methods in this field which has gained considerable attention in recent years. The basic philosophy adopted in this method is to break up a complicated system into component parts with simple elastic and dynamic properties which can be readily expressed in matrix form. These component matrices are considered as building blocks, which are fitted together according to a set of predetermined rules which then provide the static and dynamic properties of the entire system. A common type of system occurring in engineering practice consists of a number of elements linked together end to end in the form of a chain. The 'Transfer Matrix' is ideally suited for such a system, because only successive multiplication is necessary to connect these elements together. The number of degrees of freedom and intermediate conditions present no difficulty. Although the 'Transfer Matrix' method is suitable for the treatment of branched and coupled systems, its application to systems which do not have a predominant chain topology is not effective. Apart from the requirement that the system be linearly elastic, no other restrictions are made. In this paper, it is intended to give a general outline and theoretical formulation of the 'Transfer Matrix' and then its application to actual problems in structural dynamics related to seismic analysis. The natural frequencies of a freely vibrating elastic system can be found by applying the proper end conditions, which set the frequency determinant to zero. Using a suitable numerical method, the natural frequencies and mode shapes are determined by making a frequency sweep within the range of interest. Results of an analysis of a typical nuclear building by this method show very close agreement with the results obtained by using the ASKA and SAP IV programs. Therefore
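A minimal worked instance of the transfer-matrix frequency sweep, using a fixed-free mass-spring chain rather than a nuclear building model: each spring contributes a field matrix, each mass a point matrix, and the natural frequencies are the zeros of the free-end force term.

```python
import numpy as np

def end_determinant(omega, masses, stiffs):
    """Carry the state vector (displacement, force) through the chain and
    return the free-end force term; its zeros are the natural frequencies
    of the fixed-free chain."""
    T = np.eye(2)
    for m, k in zip(masses, stiffs):
        T_spring = np.array([[1.0, 1.0 / k], [0.0, 1.0]])      # field matrix
        T_mass = np.array([[1.0, 0.0], [-omega**2 * m, 1.0]])  # point matrix
        T = T_mass @ T_spring @ T
    return T[1, 1]

def natural_frequencies(masses, stiffs, w_max=3.0, n=3000):
    """Frequency sweep: bracket sign changes of the end determinant and
    refine each bracket by bisection."""
    ws = np.linspace(1e-6, w_max, n)
    vals = [end_determinant(w, masses, stiffs) for w in ws]
    roots = []
    for a, b, fa, fb in zip(ws, ws[1:], vals, vals[1:]):
        if fa * fb < 0:
            for _ in range(60):
                mid = 0.5 * (a + b)
                fm = end_determinant(mid, masses, stiffs)
                if fa * fm < 0:
                    b = mid
                else:
                    a, fa = mid, fm
            roots.append(0.5 * (a + b))
    return roots
```

For two unit masses and unit springs the sweep recovers ω = √((3∓√5)/2) ≈ 0.618 and 1.618 in units of √(k/m), matching the analytical 2-DOF eigenvalues.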

  11. Neuroprotective effects of collagen matrix in rats after traumatic brain injury.

    Science.gov (United States)

    Shin, Samuel S; Grandhi, Ramesh; Henchir, Jeremy; Yan, Hong Q; Badylak, Stephen F; Dixon, C Edward

    2015-01-01

    In previous studies, collagen based matrices have been implanted into the site of lesion in different models of brain injury. We hypothesized that semisynthetic collagen matrix can have neuroprotective function in the setting of traumatic brain injury. Rats were subjected to sham injury or controlled cortical impact. They either received extracellular matrix graft (DuraGen) over the injury site or did not receive any graft and underwent beam balance/beam walking test at post injury days 1-5 and Morris water maze at post injury days 14-18. Animals were sacrificed at day 18 for tissue analysis. Collagen matrix implantation in injured rats did not affect motor function (beam balance test: p = 0.627, beam walking test: p = 0.921). However, injured group with collagen matrix had significantly better spatial memory acquisition (p < 0.05). There was a significant reduction in lesion volume, as well as neuronal loss in CA1 (p < 0.001) and CA3 (p < 0.05) regions of the hippocampus in injured group with collagen matrix (p < 0.05). Collagen matrix reduces contusional lesion volume, neuronal loss, and cognitive deficit after traumatic brain injury. Further studies are needed to demonstrate the mechanisms of neuroprotection by collagen matrix.

  12. Initial validation of the prekindergarten Classroom Observation Tool and goal setting system for data-based coaching.

    Science.gov (United States)

    Crawford, April D; Zucker, Tricia A; Williams, Jeffrey M; Bhavsar, Vibhuti; Landry, Susan H

    2013-12-01

    Although coaching is a popular approach for enhancing the quality of Tier 1 instruction, limited research has addressed observational measures specifically designed to focus coaching on evidence-based practices. This study explains the development of the prekindergarten (pre-k) Classroom Observation Tool (COT) designed for use in a data-based coaching model. We examined psychometric characteristics of the COT and explored how coaches and teachers used the COT goal-setting system. The study included 193 coaches working with 3,909 pre-k teachers in a statewide professional development program. Classrooms served 3 and 4 year olds (n = 56,390) enrolled mostly in Title I, Head Start, and other need-based pre-k programs. Coaches used the COT during a 2-hr observation at the beginning of the academic year. Teachers collected progress-monitoring data on children's language, literacy, and math outcomes three times during the year. Results indicated a theoretically supported eight-factor structure of the COT across language, literacy, and math instructional domains. Overall interrater reliability among coaches was good (.75). Although correlations with an established teacher observation measure were small, significant positive relations between COT scores and children's literacy outcomes indicate promising predictive validity. Patterns of goal-setting behaviors indicate teachers and coaches set an average of 43.17 goals during the academic year, and coaches reported that 80.62% of goals were met. Both coaches and teachers reported the COT was a helpful measure for enhancing quality of Tier 1 instruction. Limitations of the current study and implications for research and data-based coaching efforts are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  13. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid

  14. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty

  15. The static response function in Kohn-Sham theory: An appropriate basis for its matrix representation in case of finite AO basis sets

    International Nuclear Information System (INIS)

    Kollmar, Christian; Neese, Frank

    2014-01-01

    The role of the static Kohn-Sham (KS) response function describing the response of the electron density to a change of the local KS potential is discussed in both the theory of the optimized effective potential (OEP) and the so-called inverse Kohn-Sham problem, i.e., the task of finding the local KS potential for a given electron density. In a general discussion of the integral equation to be solved in both cases, it is argued that a unique solution of this equation can be found even in the case of finite atomic orbital basis sets. It is shown how a matrix representation of the response function can be obtained if the exchange-correlation potential is expanded in terms of a Schmidt-orthogonalized basis comprising products of occupied and virtual orbitals. The viability of this approach in both OEP theory and the inverse KS problem is illustrated by numerical examples

  16. Linear Matrix Inequalities in Multirate Control over Networks

    Directory of Open Access Journals (Sweden)

    Ángel Cuenca

    2012-01-01

    Full Text Available This paper addresses two of the main drawbacks in networked control systems: bandwidth constraints and time-varying delays. The bandwidth limitations are solved by using multirate control techniques. The resultant multirate controller must ensure closed-loop stability in the presence of time-varying delays. Some stability conditions and a state feedback controller design are formulated in terms of linear matrix inequalities. The theoretical proposal is validated in two different experimental environments: a crane-based test-bed over Ethernet, and a maglev-based platform over Profibus.
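The paper's delay-dependent LMIs are more elaborate, but their core ingredient can be illustrated with the basic discrete-time Lyapunov LMI P ≻ 0, AᵀPA − P ≺ 0. The sketch below certifies a hypothetical multirate closed loop by solving the corresponding Lyapunov equation via Kronecker vectorization; the plant and gain values are invented for illustration and do not come from the paper.

```python
import numpy as np

def lyapunov_certificate(A, tol=1e-9):
    """Solve the discrete Lyapunov equation A^T P A - P = -I by Kronecker
    vectorization. A positive definite P certifies the stability LMIs
    P > 0, A^T P A - P < 0 (Schur stability of the closed loop)."""
    n = A.shape[0]
    lhs = np.kron(A.T, A.T) - np.eye(n * n)
    P = np.linalg.solve(lhs, -np.eye(n).flatten()).reshape(n, n)
    P = 0.5 * (P + P.T)                         # enforce exact symmetry
    return P, bool(np.linalg.eigvalsh(P).min() > tol)

# hypothetical closed loop: discretized double integrator (T = 0.1 s)
# under a state-feedback gain K
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[2.0, 2.5]])
Acl = A - B @ K
P, stable = lyapunov_certificate(Acl)
```

In the paper's setting, the LMI variables would additionally depend on the delay bounds and the multirate lifting, and an SDP solver would search for both P and the feedback gain simultaneously.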

  17. Genetic Background is a Key Determinant of Glomerular Extracellular Matrix Composition and Organization.

    Science.gov (United States)

    Randles, Michael J; Woolf, Adrian S; Huang, Jennifer L; Byron, Adam; Humphries, Jonathan D; Price, Karen L; Kolatsi-Joannou, Maria; Collinson, Sophie; Denny, Thomas; Knight, David; Mironov, Aleksandr; Starborg, Toby; Korstanje, Ron; Humphries, Martin J; Long, David A; Lennon, Rachel

    2015-12-01

    Glomerular disease often features altered histologic patterns of extracellular matrix (ECM). Despite this, the potential complexities of the glomerular ECM in both health and disease are poorly understood. To explore whether genetic background and sex determine glomerular ECM composition, we investigated two mouse strains, FVB and B6, using RNA microarrays of isolated glomeruli combined with proteomic glomerular ECM analyses. These studies, undertaken in healthy young adult animals, revealed unique strain- and sex-dependent glomerular ECM signatures, which correlated with variations in levels of albuminuria and known predisposition to progressive nephropathy. Among the variation, we observed changes in netrin 4, fibroblast growth factor 2, tenascin C, collagen 1, meprin 1-α, and meprin 1-β. Differences in protein abundance were validated by quantitative immunohistochemistry and Western blot analysis, and the collective differences were not explained by mutations in known ECM or glomerular disease genes. Within the distinct signatures, we discovered a core set of structural ECM proteins that form multiple protein-protein interactions and are conserved from mouse to man. Furthermore, we found striking ultrastructural changes in glomerular basement membranes in FVB mice. Pathway analysis of merged transcriptomic and proteomic datasets identified potential ECM regulatory pathways involving inhibition of matrix metalloproteases, liver X receptor/retinoid X receptor, nuclear factor erythroid 2-related factor 2, notch, and cyclin-dependent kinase 5. These pathways may therefore alter ECM and confer susceptibility to disease. Copyright © 2015 by the American Society of Nephrology.

  18. Towards Tuning the Mechanical Properties of Three-Dimensional Collagen Scaffolds Using a Coupled Fiber-Matrix Model

    Directory of Open Access Journals (Sweden)

    Shengmao Lin

    2015-08-01

    Full Text Available Scaffold mechanical properties are essential in regulating the microenvironment of three-dimensional cell culture. A coupled fiber-matrix numerical model was developed in this work for predicting the mechanical response of collagen scaffolds subjected to various levels of non-enzymatic glycation and collagen concentrations. The scaffold was simulated by a Voronoi network embedded in a matrix. The computational model was validated using published experimental data. Results indicate that both non-enzymatic glycation-induced matrix stiffening and fiber network density, as regulated by collagen concentration, influence scaffold behavior. The heterogeneous stress patterns of the scaffold were induced by the interfacial mechanics between the collagen fiber network and the matrix. The knowledge obtained in this work could help to fine-tune the mechanical properties of collagen scaffolds for improved tissue regeneration applications.

  19. Solution of the inverse scattering problem at fixed energy with non-physical S matrix elements

    International Nuclear Information System (INIS)

    Eberspaecher, M.; Amos, K.; Apagyi, B.

    1999-12-01

    The quantum mechanical inverse elastic scattering problem is solved with the modified Newton-Sabatier method. A set of S matrix elements calculated from a realistic analytic optical model potential serves as input data. It is demonstrated that the quality of the inversion potential can be improved by including non-physical S matrix elements to half, quarter and eighth valued partial waves if the original set does not contain enough information to determine the interaction potential. We demonstrate that results can be very sensitive to the choice of those non-physical S matrix values both with the analytic potential model and in a real application in which the experimental cross section for the symmetrical scattering system of 12 C+ 12 C at E=7.998 MeV is analyzed

  20. Matrix Approach of Seismic Wave Imaging: Application to Erebus Volcano

    Science.gov (United States)

    Blondel, T.; Chaput, J.; Derode, A.; Campillo, M.; Aubry, A.

    2017-12-01

    This work aims at extending to seismic imaging a matrix approach of wave propagation in heterogeneous media, previously developed in acoustics and optics. More specifically, we will apply this approach to the imaging of the Erebus volcano in Antarctica. Volcanoes are actually among the most challenging media to explore seismically in light of highly localized and abrupt variations in density and wave velocity, extreme topography, extensive fractures, and the presence of magma. In this strongly scattering regime, conventional imaging methods suffer from the multiple scattering of waves. Our approach experimentally relies on the measurement of a reflection matrix associated with an array of geophones located at the surface of the volcano. Although these sensors are purely passive, a set of Green's functions can be measured between all pairs of geophones from ice-quake coda cross-correlations (1-10 Hz) and forms the reflection matrix. A set of matrix operations can then be applied for imaging purposes. First, the reflection matrix is projected, at each time of flight, in the ballistic focal plane by applying adaptive focusing at emission and reception. It yields a response matrix associated with an array of virtual geophones located at the ballistic depth. This basis allows us to get rid of most of the multiple scattering contribution by applying a confocal filter to seismic data. Iterative time reversal is then applied to detect and image the strongest scatterers. Mathematically, it consists in performing a singular value decomposition of the reflection matrix. The presence of a potential target is assessed from a statistical analysis of the singular values, while the corresponding eigenvectors yield the corresponding target images. When stacked, the results obtained at each depth give a three-dimensional image of the volcano. While conventional imaging methods lead to a speckle image with no connection to the actual medium's reflectivity, our method enables to
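The iterative time-reversal (DORT-type) step described above, a singular value decomposition of the reflection matrix plus a statistical test on the singular values, can be sketched as follows. The speckle statistics and the threshold recipe are simplifying assumptions, not the authors' exact analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

def detect_targets(R, n_null=200, quantile=0.99):
    """DORT-style detection: compare the singular values of the reflection
    matrix with the largest singular value of matched random (speckle-only)
    matrices, used as a statistical null distribution."""
    s = np.linalg.svd(R, compute_uv=False)
    null_max = [np.linalg.svd(rng.normal(scale=R.std(), size=R.shape),
                              compute_uv=False)[0] for _ in range(n_null)]
    threshold = np.quantile(null_max, quantile)
    return s, s > threshold       # singular spectrum and detection mask

# synthetic reflection matrix: diffuse speckle plus one strong scatterer,
# whose echo appears as a rank-one contribution u v^T
n_geo = 32
speckle = rng.normal(scale=0.05, size=(n_geo, n_geo))
u = rng.normal(size=n_geo)
v = rng.normal(size=n_geo)
R = speckle + 2.0 * np.outer(u / np.linalg.norm(u), v / np.linalg.norm(v))
s, detected = detect_targets(R)
```

The singular vectors paired with the detected singular values would then be back-propagated to image each scatterer, which is the step the record describes as yielding the target images.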

  1. Carrier-based modulation schemes for various three-level matrix converters

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Loh, P.C.; Rong, R.C.

    2008-01-01

    different performance merits. To avoid confusion and hence hasten the converter applications in the industry, it would surely be better for modulation schemes to be developed from a common set of modulation principles that unfortunately has not yet been thoroughly defined. Contributing to that area...... a limited set of switching vectors because of its lower semiconductor count. Through simulation and experimental testing, all the evaluated matrix converters are shown to produce satisfactory sinusoidal input and output quantities using the same set of generic modulation principles, which can conveniently......

  2. Quantitative co-localization and pattern analysis of endo-lysosomal cargo in subcellular image cytometry and validation on synthetic image sets

    DEFF Research Database (Denmark)

    Lund, Frederik W.; Wüstner, Daniel

    2017-01-01

    /LYSs. Analysis of endocytic trafficking relies heavily on quantitative fluorescence microscopy, but evaluation of the huge image data sets is challenging and demands computer-assisted statistical tools. Here, we describe how to use SpatTrack (www.sdu.dk/bmb/spattrack), an imaging toolbox, which we developed...... such synthetic vesicle patterns as “ground truth” for validation of two-channel analysis tools in SpatTrack, revealing their high reliability. An improved version of SpatTrack for microscopy-based quantification of cargo transport through the endo-lysosomal system accompanies this protocol....

  3. Interactions of rat repetitive sequence MspI8 with nuclear matrix proteins during spermatogenesis

    International Nuclear Information System (INIS)

    Rogolinski, J.; Widlak, P.; Rzeszowska-Wolny, J.

    1996-01-01

    Using Southwestern blot analysis we have studied the interactions between the rat repetitive sequence MspI8 and the nuclear matrix proteins of rat testis cells. From 2 weeks of age to adulthood, animals showed differences in the types of testis nuclear matrix proteins recognizing the MspI8 sequence. The same sets of nuclear matrix proteins were detected in fractions enriched in spermatocytes and spermatids, obtained after fractionation of cells from adult animals by the velocity sedimentation technique. (author). 21 refs, 5 figs

  4. Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.

    Science.gov (United States)

    Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter

    2017-09-01

    An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
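    The projection step described above, a plain matrix multiplication onto a temporal subspace estimated from a few k-space locations fully sampled in time, can be sketched as follows. The sizes, rank, and the additive noise standing in for undersampling artifacts are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_k, rank = 200, 64, 4   # time points, k-space locations, subspace rank

# Hypothetical MRF-like data: every k-space signal evolution lies in a
# low-dimensional temporal subspace spanned by `rank` basis curves.
basis = np.linalg.qr(rng.standard_normal((n_t, rank)))[0]   # (n_t x rank), orthonormal
coeffs = rng.standard_normal((rank, n_k))
X = basis @ coeffs                                          # clean data (n_t x n_k)

# Step 1: estimate the subspace from a few fully sampled k-space locations.
calib = X[:, :8]                                            # 8 calibration columns
U, _, _ = np.linalg.svd(calib, full_matrices=False)
Uk = U[:, :rank]                                            # estimated temporal basis

# Step 2: the projection onto the subspace is a single matrix
# multiplication, P(X) = Uk Uk^T X, with no per-iteration SVD thresholding.
X_noisy = X + 0.1 * rng.standard_normal(X.shape)            # stand-in for corrupted data
X_proj = Uk @ (Uk.T @ X_noisy)

err_noisy = np.linalg.norm(X_noisy - X) / np.linalg.norm(X)
err_proj = np.linalg.norm(X_proj - X) / np.linalg.norm(X)
```

Replacing the thresholding step by this fixed projection is what reduces the per-iteration cost in the reconstruction described above.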

  5. Life Modeling and Design Analysis for Ceramic Matrix Composite Materials

    Science.gov (United States)

    2005-01-01

    The primary research efforts focused on characterizing and modeling static failure, environmental durability, and creep-rupture behavior of two classes of ceramic matrix composites (CMC), silicon carbide fibers in a silicon carbide matrix (SiC/SiC) and carbon fibers in a silicon carbide matrix (C/SiC). An engineering life prediction model (Probabilistic Residual Strength model) has been developed specifically for CMCs. The model uses residual strength as the damage metric for evaluating remaining life and is posed probabilistically in order to account for the stochastic nature of the material's response. In support of the modeling effort, extensive testing of C/SiC in partial pressures of oxygen has been performed. This includes creep testing, tensile testing, half-life and residual tensile strength testing. C/SiC is proposed for airframe and propulsion applications in advanced reusable launch vehicles. Figures 1 and 2 illustrate the model's predictive capabilities as well as the manner in which experimental tests are being selected so as to ensure sufficient data are available to aid in model validation.

  6. System Matrix Analysis for Computed Tomography Imaging

    Science.gov (United States)

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high-quality CT images be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482
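    Siddon's method computes each row of the system matrix as the intersection lengths of one ray with the pixel grid. A simplified 2-D sketch of that idea (not the authors' implementation) is to collect the parametric positions where the ray crosses grid lines, sort them, and read off a pixel and a length per segment:

```python
import numpy as np

def siddon_row(p0, p1, n):
    """Intersection lengths of the ray p0 -> p1 with an n x n grid of unit
    pixels covering [0, n] x [0, n] (simplified 2-D Siddon method).
    Returns one row of the system matrix, reshaped to (n, n)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    alphas = [0.0, 1.0]
    for axis in range(2):
        if d[axis] != 0.0:
            # parametric positions where the ray crosses the grid lines
            a = (np.arange(n + 1) - p0[axis]) / d[axis]
            alphas.extend(a[(a > 0.0) & (a < 1.0)])
    alphas = np.unique(alphas)
    row = np.zeros((n, n))
    length = np.linalg.norm(d)
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d   # segment midpoint identifies the pixel
        i, j = int(mid[0]), int(mid[1])
        if 0 <= i < n and 0 <= j < n:
            row[i, j] = (a1 - a0) * length
    return row

# A horizontal ray through the middle of a 4x4 grid crosses 4 pixels,
# each with unit intersection length.
row = siddon_row((0.0, 2.5), (4.0, 2.5), 4)
```

The full Siddon algorithm obtains the same lengths incrementally without sorting, which matters when the matrix has millions of rows.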

  7. Combinatorial theory of the semiclassical evaluation of transport moments. I. Equivalence with the random matrix approach

    Energy Technology Data Exchange (ETDEWEB)

    Berkolaiko, G., E-mail: berko@math.tamu.edu [Department of Mathematics, Texas A and M University, College Station, Texas 77843-3368 (United States); Kuipers, J., E-mail: Jack.Kuipers@physik.uni-regensburg.de [Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg (Germany)

    2013-11-15

    To study electronic transport through chaotic quantum dots, there are two main theoretical approaches. One involves substituting the quantum system with a random scattering matrix and performing appropriate ensemble averaging. The other treats the transport in the semiclassical approximation and studies correlations among sets of classical trajectories. There are established evaluation procedures within the semiclassical evaluation that, for several linear and nonlinear transport moments to which they were applied, have always resulted in agreement with random matrix predictions. We prove that this agreement is universal: any semiclassical evaluation within the accepted procedures is equivalent to the evaluation within random matrix theory. The equivalence is shown by developing a combinatorial interpretation of the trajectory sets as ribbon graphs (maps) with certain properties and exhibiting systematic cancellations among their contributions. Remaining trajectory sets can be identified with primitive (palindromic) factorisations whose number gives the coefficients in the corresponding expansion of the moments of random matrices. The equivalence is proved for systems with and without time reversal symmetry.

  8. Estimation of covariance matrix on the experimental data for nuclear data evaluation

    International Nuclear Information System (INIS)

    Murata, T.

    1985-01-01

    In order to evaluate fission and capture cross sections of some U and Pu isotopes for JENDL-3, we have a plan for evaluating them simultaneously with a least-squares method. For the simultaneous evaluation, the covariance matrix is required for each experimental data set. In the present work, we have studied the procedures for deriving the covariance matrix from the error data given in the experimental papers. The covariance matrices were obtained using the partial errors and estimated correlation coefficients between partial errors of the same type at different neutron energies. Some examples of the covariance matrix estimation are explained and the preliminary results of the simultaneous evaluation are presented. (author)
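    The construction described, a covariance matrix assembled from partial errors together with assumed correlation coefficients between energy points, can be sketched as follows. The two error components and their correlation models are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical partial errors (in %) for a 4-point cross-section data set:
stat = np.array([2.0, 1.5, 1.8, 2.2])   # statistical: uncorrelated between points
norm = np.array([1.0, 1.0, 1.0, 1.0])   # normalization: fully correlated

# Covariance = sum over error components of e_i * e_j * rho_ij, with
# rho = identity for the statistical part and rho = all-ones for the
# fully correlated normalization part.
cov = np.diag(stat**2) + np.outer(norm, norm)

# Correlation matrix for inspection.
sig = np.sqrt(np.diag(cov))
corr = cov / np.outer(sig, sig)
```

Intermediate correlation models (e.g. correlation decaying with energy separation) slot into the same sum, one rank-structured term per error component.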

  9. Absorption properties of waste matrix materials

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, J.B. [Idaho National Engineering Lab., Idaho Falls, ID (United States)

    1997-06-01

    This paper very briefly discusses the need for studies of the limiting critical concentration of radioactive waste matrix materials. Calculated limiting critical concentration values for some common waste materials are listed. However, for systems containing large quantities of waste materials, differences of up to 10% in calculated k_eff values are obtained by changing cross-section data sets. Therefore, experimental results are needed to compare with calculation results for resolving these differences and establishing realistic biases.

  10. Extended biorthogonal matrix polynomials

    Directory of Open Access Journals (Sweden)

    Ayman Shehata

    2017-01-01

    Full Text Available The pair of biorthogonal matrix polynomials for commutative matrices were first introduced by Varma and Tasdelen in [22]. The main aim of this paper is to extend the properties of the pair of biorthogonal matrix polynomials of Varma and Tasdelen; certain generating matrix functions, finite series, some matrix recurrence relations, several important properties of matrix differential recurrence relations, biorthogonality relations and the matrix differential equation for the pair of biorthogonal matrix polynomials J_n^(A,B)(x, k) and K_n^(A,B)(x, k) are discussed. For the matrix polynomials J_n^(A,B)(x, k), various families of bilinear and bilateral generating matrix functions are constructed in the sequel.

  11. Validity of Chinese Version of the Composite International Diagnostic Interview-3.0 in Psychiatric Settings

    Institute of Scientific and Technical Information of China (English)

    Jin Lu; Yue-Qin Huang; Zhao-Rui Liu; Xiao-Lan Cao

    2015-01-01

    Background: The Composite International Diagnostic Interview-3.0 (CIDI-3.0) is a fully structured lay-administered diagnostic interview for the assessment of mental disorders according to ICD-10 and Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria. The aim of the study was to investigate the concurrent validity of the Chinese CIDI in diagnosing mental disorders in psychiatric settings. Methods: We recruited 208 participants, of whom 148 were patients from two psychiatric hospitals and 60 healthy people from communities. These participants were administered the CIDI by six trained lay interviewers and the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I, gold standard) by two psychiatrists. Agreement between CIDI and SCID-I was assessed with sensitivity, specificity, positive predictive value and negative predictive value. Individual-level CIDI-SCID diagnostic concordance was evaluated using the area under the receiver operator characteristic curve and Cohen's kappa. Results: Substantial to excellent CIDI to SCID concordance was found for any substance use disorder (area under the receiver operator characteristic curve [AUC] = 0.926), any anxiety disorder (AUC = 0.807) and any mood disorder (AUC = 0.806). The concordance between the CIDI and the SCID for psychotic and eating disorders was moderate. However, for individual mental disorders, the CIDI-SCID concordance for bipolar disorders (AUC = 0.55) and anorexia nervosa (AUC = 0.50) was insufficient. Conclusions: Overall, the Chinese version of CIDI-3.0 has acceptable validity in diagnosing substance use, anxiety and mood disorders in the Chinese adult population. However, we should be cautious when using it for bipolar disorders and anorexia nervosa.

  12. Validity of Chinese Version of the Composite International Diagnostic Interview-3.0 in Psychiatric Settings

    Directory of Open Access Journals (Sweden)

    Jin Lu

    2015-01-01

    Full Text Available Background: The Composite International Diagnostic Interview-3.0 (CIDI-3.0) is a fully structured lay-administered diagnostic interview for the assessment of mental disorders according to ICD-10 and Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria. The aim of the study was to investigate the concurrent validity of the Chinese CIDI in diagnosing mental disorders in psychiatric settings. Methods: We recruited 208 participants, of whom 148 were patients from two psychiatric hospitals and 60 healthy people from communities. These participants were administered with CIDI by six trained lay interviewers and the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I, gold standard) by two psychiatrists. Agreement between CIDI and SCID-I was assessed with sensitivity, specificity, positive predictive value and negative predictive value. Individual-level CIDI-SCID diagnostic concordance was evaluated using the area under the receiver operator characteristic curve and Cohen's K. Results: Substantial to excellent CIDI to SCID concordance was found for any substance use disorder (area under the receiver operator characteristic curve [AUC] = 0.926), any anxiety disorder (AUC = 0.807) and any mood disorder (AUC = 0.806). The concordance between the CIDI and the SCID for psychotic and eating disorders is moderate. However, for individual mental disorders, the CIDI-SCID concordance for bipolar disorders (AUC = 0.55) and anorexia nervosa (AUC = 0.50) was insufficient. Conclusions: Overall, the Chinese version of CIDI-3.0 has acceptable validity in diagnosing the substance use disorder, anxiety disorder and mood disorder among Chinese adult population. However, we should be cautious when using it for bipolar disorders and anorexia nervosa.

  13. Models based on multichannel R-matrix theory for evaluating light element reactions

    International Nuclear Information System (INIS)

    Dodder, D.C.; Hale, G.M.; Nisley, R.A.; Witte, K.; Young, P.G.

    1975-01-01

    Multichannel R-matrix theory has been used as a basis for models for analysis and evaluation of light nuclear systems. These models have the characteristic that data predictions can be made utilizing information derived from other reactions related to the one of primary interest. Several examples are given where such an approach is valid and appropriate. (auth.)

  14. One-point functions in AdS/dCFT from matrix product states

    International Nuclear Information System (INIS)

    Buhl-Mortensen, Isak; Leeuw, Marius de; Kristjansen, Charlotte; Zarembo, Konstantin

    2016-01-01

    One-point functions of certain non-protected scalar operators in the defect CFT dual to the D3-D5 probe brane system with k units of world volume flux can be expressed as overlaps between Bethe eigenstates of the Heisenberg spin chain and a matrix product state. We present a closed expression of determinant form for these one-point functions, valid for any value of k. The determinant formula factorizes into the k=2 result times a k-dependent pre-factor. Making use of the transfer matrix of the Heisenberg spin chain we recursively relate the matrix product state for higher even and odd k to the matrix product state for k=2 and k=3 respectively. We furthermore find evidence that the matrix product states for k=2 and k=3 are related via a ratio of Baxter’s Q-operators. The general k formula has an interesting thermodynamical limit involving a non-trivial scaling of k, which indicates that the match between string and field theory one-point functions found for chiral primaries might be tested for non-protected operators as well. We revisit the string computation for chiral primaries and discuss how it can be extended to non-protected operators.

  15. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-01-01

    random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various
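    This record is truncated, but its theme, choosing a regularizer that minimizes the estimator's mean-squared error, can be illustrated with a generic Tikhonov sketch. The model, noise level, and the value of `lam` below are arbitrary assumptions and are not the paper's random-matrix-derived regularizer:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 40

# Ill-conditioned linear model y = A x + noise (a stand-in for the
# estimation setting of the record).
U, _, Vt = np.linalg.svd(rng.standard_normal((n, m)), full_matrices=False)
A = U @ np.diag(np.logspace(0, -3, m)) @ Vt   # condition number ~1e3
x_true = rng.standard_normal(m)
y = A @ x_true + 0.05 * rng.standard_normal(n)

# Plain least squares vs. a Tikhonov-regularized estimate with a
# hypothetical regularizer value lam.
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
lam = 1e-2
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)

mse_ls = np.mean((x_ls - x_true) ** 2)
mse_reg = np.mean((x_reg - x_true) ** 2)
```

The point of the paper's analysis is to pick `lam` near-optimally from the data; here it is simply fixed to show the MSE gap on an ill-conditioned problem.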

  16. Renewable energy selection Matrix based on multi-attribute analysis for fish preservation

    International Nuclear Information System (INIS)

    Vega-Clavijo, Lili Tatiana; Prías-Caicedo, Omar Fredy; Sierra-Vargas, Fabio Emiro

    2016-01-01

    The article presents the application of multi-attribute utility theory, validated by a matrix system established by the researchers, to identify the best alternative for supplying 10 kWe of energy for ice generation for the preservation of fish in coastal and rural areas of Chocó. A comparison between the potentials of different renewable energy sources and diesel, natural gas and propane fuels was carried out, based on economic, technological, environmental and social criteria, validated by experts and the community during field work. It was concluded that the best alternative is diesel, followed by biomass. (author)
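    A multi-attribute selection matrix of this kind reduces to a weighted sum of criterion scores per alternative. The scores and weights below are invented for illustration (chosen so the ordering matches the record's conclusion), not the study's data:

```python
import numpy as np

# Hypothetical decision matrix: rows = energy alternatives, columns =
# criteria (economic, technological, environmental, social), scored 1-5.
alternatives = ["diesel", "biomass", "solar PV", "small hydro"]
scores = np.array([
    [5, 5, 2, 3],   # diesel
    [4, 4, 4, 3],   # biomass
    [3, 3, 5, 4],   # solar PV
    [2, 3, 5, 3],   # small hydro
])
weights = np.array([0.35, 0.25, 0.20, 0.20])   # assumed criterion weights

utility = scores @ weights                     # one aggregate utility per alternative
ranking = [alternatives[i] for i in np.argsort(utility)[::-1]]
```

In practice the weights themselves are elicited from the experts and the community, which is where the validation step of the methodology enters.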

  17. Wind and solar resource data sets: Wind and solar resource data sets

    Energy Technology Data Exchange (ETDEWEB)

    Clifton, Andrew [National Renewable Energy Laboratory, Golden CO USA; Hodge, Bri-Mathias [National Renewable Energy Laboratory, Golden CO USA; Power Systems Engineering Center, National Renewable Energy Laboratory, Golden CO USA; Draxl, Caroline [National Renewable Energy Laboratory, Golden CO USA; National Wind Technology Center, National Renewable Energy Laboratory, Golden CO USA; Badger, Jake [Department of Wind Energy, Danish Technical University, Copenhagen Denmark; Habte, Aron [National Renewable Energy Laboratory, Golden CO USA; Power Systems Engineering Center, National Renewable Energy Laboratory, Golden CO USA

    2017-12-05

    The range of resource data sets spans from static cartography showing the mean annual wind speed or solar irradiance across a region to high temporal and high spatial resolution products that provide detailed information at a potential wind or solar energy facility. These data sets are used to support continental-scale, national, or regional renewable energy development; facilitate prospecting by developers; and enable grid integration studies. This review first provides an introduction to the wind and solar resource data sets, then provides an overview of the common methods used for their creation and validation. A brief history of wind and solar resource data sets is then presented, followed by areas for future research.

  18. Universality in chaos: Lyapunov spectrum and random matrix theory.

    Science.gov (United States)

    Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki

    2018-02-01

    We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.
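    The last observation, that the pattern already appears in products of random matrices, can be reproduced with the standard QR re-orthonormalization procedure for finite-time Lyapunov exponents. The "Jacobian" model below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n, steps, dt = 8, 400, 1.0

# Finite-time Lyapunov exponents of a product of random matrices,
# computed stably with repeated QR re-orthonormalization.
Q = np.eye(n)
log_r = np.zeros(n)
for _ in range(steps):
    J = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # random "Jacobian"
    Q, R = np.linalg.qr(J @ Q)
    # keep R's diagonal positive so the logarithms are well defined
    sign = np.sign(np.diag(R))
    Q, R = Q * sign, sign[:, None] * R
    log_r += np.log(np.diag(R))

lyap = np.sort(log_r / (steps * dt))[::-1]   # ordered Lyapunov spectrum
```

The statistical claim of the paper concerns the fluctuations of such spectra at large `n`; this sketch only produces one ordered spectrum to feed into that kind of analysis.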
  20. 1024 matrix image reconstruction: usefulness in high resolution chest CT

    International Nuclear Information System (INIS)

    Jeong, Sun Young; Chung, Myung Jin; Chong, Se Min; Sung, Yon Mi; Lee, Kyung Soo

    2006-01-01

    We tried to evaluate whether high resolution chest CT with a 1024 matrix has a significant advantage in image quality compared to a 512 matrix. Each set of 512 and 1024 matrix high resolution chest CT scans with both 0.625 mm and 1.25 mm slice thickness was obtained from 26 patients. Seventy locations that contained twenty-four low density lesions without sharp boundaries, such as emphysema, and forty-six sharp linear densities, such as linear fibrosis, were selected; these were randomly displayed on a five-megapixel LCD monitor. All the images were masked for information concerning the matrix size and slice thickness. Two chest radiologists scored the image quality of each arrowed lesion as follows: (1) undistinguishable, (2) poorly distinguishable, (3) fairly distinguishable, (4) well visible and (5) excellently visible. The scores were compared with respect to matrix size, slice thickness and observer by using ANOVA tests. The mean and standard deviation of image quality were 3.09 (± .92) for the 0.625 mm x 512 matrix, 3.16 (± .84) for the 0.625 mm x 1024 matrix, 2.49 (± 1.02) for the 1.25 mm x 512 matrix, and 2.35 (± 1.02) for the 1.25 mm x 1024 matrix, respectively. The image quality on both matrices of the high resolution chest CT scans with a 0.625 mm slice thickness was significantly better than that with a 1.25 mm slice thickness (p < 0.001). However, the image quality on the 1024 matrix high resolution chest CT scans was not significantly different from that on the 512 matrix scans (p = 0.678). The interobserver variation between the two observers was not significant (p = 0.691). We think that 1024 matrix image reconstruction for high resolution chest CT may not be clinically useful.

  1. Neighborhood Regularized Logistic Matrix Factorization for Drug-Target Interaction Prediction.

    Science.gov (United States)

    Liu, Yong; Wu, Min; Miao, Chunyan; Zhao, Peilin; Li, Xiao-Li

    2016-02-01

    In pharmaceutical sciences, a crucial step of the drug discovery process is the identification of drug-target interactions. However, only a small portion of the drug-target interactions have been experimentally validated, as the experimental validation is laborious and costly. To improve the drug discovery efficiency, there is a great need for the development of accurate computational approaches that can predict potential drug-target interactions to direct the experimental verification. In this paper, we propose a novel drug-target interaction prediction algorithm, namely neighborhood regularized logistic matrix factorization (NRLMF). Specifically, the proposed NRLMF method focuses on modeling the probability that a drug would interact with a target by logistic matrix factorization, where the properties of drugs and targets are represented by drug-specific and target-specific latent vectors, respectively. Moreover, NRLMF assigns higher importance levels to positive observations (i.e., the observed interacting drug-target pairs) than negative observations (i.e., the unknown pairs). Because the positive observations are already experimentally verified, they are usually more trustworthy. Furthermore, the local structure of the drug-target interaction data has also been exploited via neighborhood regularization to achieve better prediction accuracy. We conducted extensive experiments over four benchmark datasets, and NRLMF demonstrated its effectiveness compared with five state-of-the-art approaches.
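    The core of the model described above, modeling interaction probabilities via logistic matrix factorization, can be sketched with plain gradient ascent. This is a stripped-down illustration: the neighborhood regularization and importance weighting of NRLMF are omitted, the data are synthetic, and only L2 regularization is kept:

```python
import numpy as np

rng = np.random.default_rng(4)
n_drugs, n_targets, k = 20, 15, 5

# Hypothetical 0/1 interaction matrix generated from latent vectors.
U_true = rng.standard_normal((n_drugs, k))
V_true = rng.standard_normal((n_targets, k))
Y = (U_true @ V_true.T > 0).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Plain logistic matrix factorization fitted by gradient ascent on the
# Bernoulli log-likelihood with L2 regularization.
U = 0.1 * rng.standard_normal((n_drugs, k))
V = 0.1 * rng.standard_normal((n_targets, k))
lr, lam = 0.05, 0.01
for _ in range(1000):
    P = sigmoid(U @ V.T)            # predicted interaction probabilities
    G = Y - P                       # gradient of the log-likelihood in the scores
    U, V = U + lr * (G @ V - lam * U), V + lr * (G.T @ U - lam * V)

train_acc = ((sigmoid(U @ V.T) > 0.5) == (Y > 0.5)).mean()
```

NRLMF additionally up-weights the observed (positive) pairs and penalizes each latent vector toward its nearest neighbors, which is what lifts ranking accuracy on the benchmark datasets.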

  2. Covariance Estimation and Autocorrelation of NORAD Two-Line Element Sets

    National Research Council Canada - National Science Library

    Osweiler, Victor P

    2006-01-01

    This thesis investigates NORAD two-line element sets (TLE) containing satellite mean orbital elements for the purpose of estimating a covariance matrix and formulating an autocorrelation relationship...

  3. M(atrix) theory: matrix quantum mechanics as a fundamental theory

    International Nuclear Information System (INIS)

    Taylor, Washington

    2001-01-01

    This article reviews the matrix model of M theory. M theory is an 11-dimensional quantum theory of gravity that is believed to underlie all superstring theories. M theory is currently the most plausible candidate for a theory of fundamental physics which reconciles gravity and quantum field theory in a realistic fashion. Evidence for M theory is still only circumstantial -- no complete background-independent formulation of the theory exists as yet. Matrix theory was first developed as a regularized theory of a supersymmetric quantum membrane. More recently, it has appeared in a different guise as the discrete light-cone quantization of M theory in flat space. These two approaches to matrix theory are described in detail and compared. It is shown that matrix theory is a well-defined quantum theory that reduces to a supersymmetric theory of gravity at low energies. Although its fundamental degrees of freedom are essentially pointlike, higher-dimensional fluctuating objects (branes) arise through the non-Abelian structure of the matrix degrees of freedom. The problem of formulating matrix theory in a general space-time background is discussed, and the connections between matrix theory and other related models are reviewed

  4. Reduction of multipartite qubit density matrixes to bipartite qubit density matrixes and criteria of partial separability of multipartite qubit density matrixes

    OpenAIRE

    Zhong, Zai-Zhe

    2004-01-01

    The partial separability of multipartite qubit density matrixes is strictly defined. We give a reduction from N-partite qubit density matrixes to bipartite qubit density matrixes, and prove a necessary condition for an N-partite qubit density matrix to be partially separable: its reduced density matrix must satisfy the PPT condition.
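    The PPT condition invoked above is directly checkable in the bipartite qubit case: transpose the indices of one subsystem and inspect the eigenvalues. The two states below are textbook examples (not from the paper): an entangled Bell state, which must fail PPT, and a separable classical mixture, which must pass:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): entangled, so its partial
# transpose must have a negative eigenvalue (it fails the PPT condition).
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi, phi)

def partial_transpose(rho):
    """Partial transpose over the second qubit of a 2-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # indices: i, j, i', j'
    return r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap j <-> j'

eig_entangled = np.linalg.eigvalsh(partial_transpose(rho))

# A separable mixture of |00> and |11> passes PPT: eigenvalues stay >= 0.
rho_sep = np.diag([0.5, 0.0, 0.0, 0.5])
eig_separable = np.linalg.eigvalsh(partial_transpose(rho_sep))
```

For two qubits PPT is both necessary and sufficient for separability; in higher dimensions (and in the multipartite reductions discussed above) it remains only necessary.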

  5. Bioanalytical method development and validation for the determination of glycine in human cerebrospinal fluid by ion-pair reversed-phase liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Jiang, Jian; James, Christopher A; Wong, Philip

    2016-09-05

    A LC-MS/MS method has been developed and validated for the determination of glycine in human cerebrospinal fluid (CSF). The validated method used artificial cerebrospinal fluid as a surrogate matrix for calibration standards. The calibration curve range for the assay was 100-10,000 ng/mL and (13)C2, (15)N-glycine was used as an internal standard (IS). Pre-validation experiments were performed to demonstrate parallelism with surrogate matrix and standard addition methods. The mean endogenous glycine concentration in a pooled human CSF determined on three days by using artificial CSF as a surrogate matrix and the method of standard addition was found to be 748±30.6 and 768±18.1 ng/mL, respectively. A percentage difference of -2.6% indicated that artificial CSF could be used as a surrogate calibration matrix for the determination of glycine in human CSF. Quality control (QC) samples, except the lower limit of quantitation (LLOQ) QC and low QC samples, were prepared by spiking glycine into aliquots of pooled human CSF sample. The low QC sample was prepared from a separate pooled human CSF sample containing low endogenous glycine concentrations, while the LLOQ QC sample was prepared in artificial CSF. Standard addition was used extensively to evaluate matrix effects during validation. The validated method was used to determine the endogenous glycine concentrations in human CSF samples. Incurred sample reanalysis demonstrated reproducibility of the method. Copyright © 2016 Elsevier B.V. All rights reserved.
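    The standard-addition calculation used during validation can be illustrated with a minimal sketch: spike known amounts of analyte into the matrix, fit response against added concentration, and read the endogenous concentration off the x-intercept. The spike levels, slope, and endogenous value below are invented, not the record's data:

```python
import numpy as np

added = np.array([0.0, 250.0, 500.0, 1000.0])        # ng/mL spiked (hypothetical)
slope_true, endogenous_true = 0.002, 750.0           # hypothetical values
response = slope_true * (added + endogenous_true)    # idealized instrument response

# Linear fit: response = slope * added + intercept;
# the endogenous concentration is intercept / slope (the |x-intercept|).
slope, intercept = np.polyfit(added, response, 1)
endogenous_est = intercept / slope
```

With real data the fit is noisy and the estimate carries the propagated uncertainty of both regression coefficients, which is why the record reports day-to-day means with standard deviations.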

  6. CTF Void Drift Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gosdin, Chris [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gergar, Marcus [Pennsylvania State Univ., University Park, PA (United States)

    2015-10-26

    This milestone report is a summary of work performed in support of expansion of the validation and verification (V&V) matrix for the thermal-hydraulic subchannel code, CTF. The focus of this study is on validating the void drift modeling capabilities of CTF and verifying the supporting models that impact the void drift phenomenon. CTF uses a simple turbulent-diffusion approximation to model lateral cross-flow due to turbulent mixing and void drift. The void drift component of the model is based on the Lahey and Moody model. The models are a function of two-phase mass, momentum, and energy distribution in the system; therefore, it is necessary to correctly model the flow distribution in rod bundle geometry as a first step to correctly calculating the void distribution due to void drift.

  7. Development of Reliable and Validated Tools to Evaluate Technical Resuscitation Skills in a Pediatric Simulation Setting: Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics.

    Science.gov (United States)

    Faudeux, Camille; Tran, Antoine; Dupont, Audrey; Desmontils, Jonathan; Montaudié, Isabelle; Bréaud, Jean; Braun, Marc; Fournier, Jean-Paul; Bérard, Etienne; Berlengi, Noémie; Schweitzer, Cyril; Haas, Hervé; Caci, Hervé; Gatin, Amélie; Giovannini-Chami, Lisa

    2017-09-01

    To develop a reliable and validated tool to evaluate technical resuscitation skills in a pediatric simulation setting. Four Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics (RESCAPE) evaluation tools were created, following international guidelines: intraosseous needle insertion, bag mask ventilation, endotracheal intubation, and cardiac massage. We applied a modified Delphi methodology evaluation to binary rating items. Reliability was assessed comparing the ratings of 2 observers (1 in real time and 1 after a video-recorded review). The tools were assessed for content, construct, and criterion validity, and for sensitivity to change. Inter-rater reliability, evaluated with Cohen kappa coefficients, was perfect or near-perfect (>0.8) for 92.5% of items and each Cronbach alpha coefficient was ≥0.91. Principal component analyses showed that all 4 tools were unidimensional. Significant increases in median scores with increasing levels of medical expertise were demonstrated for RESCAPE-intraosseous needle insertion (P = .0002), RESCAPE-bag mask ventilation (P = .0002), RESCAPE-endotracheal intubation (P = .0001), and RESCAPE-cardiac massage (P = .0037). Significantly increased median scores over time were also demonstrated during a simulation-based educational program. RESCAPE tools are reliable and validated tools for the evaluation of technical resuscitation skills in pediatric settings during simulation-based educational programs. They might also be used for medical practice performance evaluations. Copyright © 2017 Elsevier Inc. All rights reserved.
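    The inter-rater reliability statistic used above, Cohen's kappa on binary checklist items, is straightforward to compute. The two rating vectors below are invented for illustration (one disagreement out of 20 items, landing in the "near-perfect" range the record reports):

```python
import numpy as np

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters scoring the same binary checklist items."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                    # observed agreement
    cats = np.union1d(r1, r2)
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # chance agreement
    return (po - pe) / (1.0 - pe)

# Real-time rater vs. video-review rater on 20 binary checklist items.
live  = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1]
video = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
kappa = cohen_kappa(live, video)
```

Kappa corrects raw agreement for the agreement expected by chance, which is why it is preferred over simple percent agreement for checklist validation.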

  8. Solution of the Stieltjes truncated matrix moment problem

    Directory of Open Access Journals (Sweden)

    Vadim M. Adamyan

    2005-01-01

    Full Text Available The truncated Stieltjes matrix moment problem, consisting in the description of all matrix distributions σ(t) on [0, ∞) with given first 2n+1 power moments (C_j), j = 0, ..., 2n, is solved using known results on the corresponding Hamburger problem, for which σ(t) are defined on (-∞, ∞). The criterion of solvability of the Stieltjes problem is given, and all its solutions in the non-degenerate case are described by selecting the appropriate solutions among those of the Hamburger problem for the same set of moments. Results on extensions of non-negative operators are used, and a purely algebraic algorithm for the solution of both the Hamburger and Stieltjes problems is proposed.
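    In the scalar analogue of the truncated Stieltjes problem, solvability can be checked from Hankel matrices built on the moments: both the Hankel matrix (m_{i+j}) and its shifted version (m_{i+j+1}) must be positive semi-definite. The sketch below shows that classical criterion, not the operator-extension algorithm of the paper:

```python
import numpy as np

def stieltjes_solvable(m):
    """Classical solvability check for the scalar truncated Stieltjes moment
    problem with moments m_0 .. m_{2n}: the Hankel matrix (m_{i+j}) and the
    shifted Hankel matrix (m_{i+j+1}) must both be positive semi-definite."""
    m = np.asarray(m, float)
    n = (len(m) - 1) // 2
    H0 = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    H1 = np.array([[m[i + j + 1] for j in range(n)] for i in range(n)])
    tol = -1e-12
    return (np.linalg.eigvalsh(H0).min() >= tol and
            np.linalg.eigvalsh(H1).min() >= tol)

# Moments of the exponential distribution on [0, inf): m_k = k!  -> solvable.
ok = stieltjes_solvable([1, 1, 2, 6, 24])
# Sign-alternated moments (a measure on the negative axis): the shifted
# Hankel test fails, so no Stieltjes solution exists.
bad = stieltjes_solvable([1, -1, 2, -6, 24])
```

The shifted Hankel condition is exactly what separates the Stieltjes (half-line) problem from the Hamburger (whole-line) problem mentioned in the record.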

  9. ED leadership competency matrix: an administrative management tool.

    Science.gov (United States)

    Propp, Douglas A; Glickman, Seth; Uehara, Dennis T

    2003-10-01

    A successful ED relies on its leaders to master and demonstrate core competencies to be effective in the many arenas in which they interact and are responsible. A unique matrix model for the assessment of an ED leadership's key administrative skill sets is presented. The model incorporates capabilities related to the individual's cognitive aptitude, experience, acquired technical skills, behavioral characteristics, as well as the ability to manage relationships effectively. Based on the personnel inventory using the matrix, focused evaluation, development, and recruitment of ED key leaders occurs. This dynamic tool has provided a unique perspective for the evaluation and enhancement of overall ED leadership performance. It is hoped that incorporation of such a model will similarly improve the accomplishments of EDs at other institutions.

  10. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations.

    Science.gov (United States)

    Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke

    2018-02-01

    In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proved to be convergent within finite time. Besides, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., the upper bound is lower), and thus accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations have been considered by setting different coefficient matrices of general time-varying LMEs, and a great variety of computer simulations (including an application to robot manipulators) have been conducted to validate the better finite-time convergence of the two proposed nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
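    The general idea can be sketched with discretized neural dynamics for a static matrix equation AX = B: the residual E = AX - B is driven to zero through a nonlinear activation, where a sign-power term |e|^r·sign(e) with 0 < r < 1 is what finite-time designs typically add. The matrices, gain, and step size below are illustrative assumptions, not the authors' exact model:

```python
# Discretized nonlinear neural dynamics X' = -gamma * A^T * phi(A X - B),
# sketched for a static 2x2 equation A X = B (so X converges to A^{-1} B).
# All constants are illustrative.

def mm(A, B):
    """Plain nested-list matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def phi(e, r=0.5):
    # Linear term plus sign-power term, as in finite-time activation designs.
    return e + (1 if e >= 0 else -1) * abs(e) ** r

A = [[2.0, 1.0], [1.0, 3.0]]
B = [[1.0, 0.0], [0.0, 1.0]]        # solving A X = I, i.e. X -> A^{-1}
X = [[0.0, 0.0], [0.0, 0.0]]
At = [list(row) for row in zip(*A)]
h = 0.005                           # step size x gain of the dynamics
for _ in range(5000):
    AX = mm(A, X)
    E = [[AX[i][j] - B[i][j] for j in range(2)] for i in range(2)]
    G = mm(At, [[phi(e) for e in row] for row in E])
    X = [[X[i][j] - h * G[i][j] for j in range(2)] for i in range(2)]
residual = max(abs(e) for row in E for e in row)
print(residual < 1e-2)              # the residual has been driven near zero
```

    Near the solution the power term dominates the linear term, which is the mechanism behind the finite-time (rather than merely exponential) convergence bounds discussed above.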

  11. Development and validation of factor analysis for dynamic in-vivo imaging data sets

    Science.gov (United States)

    Goldschmied, Lukas; Knoll, Peter; Mirzaei, Siroos; Kalchenko, Vyacheslav

    2018-02-01

    In-vivo optical imaging provides information about anatomical structures and the function of tissues, ranging from single cells to entire organisms. Dynamic Fluorescent Imaging (DFI) is used to examine dynamic events related to normal physiology or disease progression in real time. In this work we improve this method by using factor analysis (FA) to automatically separate overlying structures. The proposed method is based on the previously introduced Transcranial Optical Vascular Imaging (TOVI), which exploits the natural and sufficient transparency of the intact cranial bones of a mouse. Fluorescent image acquisition is performed after intravenous administration of a fluorescent tracer. FA is then used to extract structures with different temporal characteristics from dynamic contrast-enhanced studies without making any a priori assumptions about physiology. The method was validated with a dynamic light phantom based on the Arduino hardware platform and with dynamic fluorescent cerebral hemodynamics data sets. On the phantom data, FA separated the various light channels without user intervention. Applied to an image sequence obtained after fluorescent tracer administration, FA extracts valuable information about cerebral blood vessel anatomy and function without a priori assumptions about anatomy or physiology, while keeping the mouse cranium intact. Unsupervised color-coding based on FA enhances the visibility and distinguishability of blood vessels belonging to different compartments. DFI based on FA, especially in the case of transcranial imaging, can thus be used to separate dynamic structures.

  12. Validation of self-confidence scale for clean urinary intermittent self-catheterization for patients and health-caregivers.

    Science.gov (United States)

    Biaziolo, Cintia Fernandes Baccarin; Mazzo, Alessandra; Martins, José Carlos Amado; Jorge, Beatriz Maria; Batista, Rui Carlos Negrão; Tucci, Silvio Júnior

    2017-01-01

    To validate a measurement instrument for clean intermittent self-catheterization for patients and health-caregivers. Methodological study of instrument validation performed at a Rehabilitation Center in a University hospital for patients submitted to clean intermittent self-catheterization and their health-caregivers. Following ethical criteria, data were collected during interviews with the nurse staff using a Likert questionnaire containing 16 items with 5 points each: "no confidence"=1, "little confidence"=2, "confident"=3, "very confident"=4 and "completely confident"=5. The questionnaire, called "Self-Confidence Scale for Clean Intermittent Self-catheterization" (SCSCISC), was constructed based on the literature and previously validated (appearance and content). The instrument was validated by 122 patients and 119 health-caregivers, in a proportion of 15:1. A good linear association and sampling adequacy were observed (KMO=0.931; χ2=2881.63, p<0.001). The anti-image matrix showed high values on the diagonal, suggesting inclusion of all factors. Scree plot analysis suggested maintaining the items in a single set. All items correlated highly with the total score (Cronbach's alpha 0.944). The same results were obtained in the subsamples of patients and health-caregivers. The instrument showed good psychometric adequacy, corroborating its use for evaluation of self-confidence during clean intermittent self-catheterization. Copyright® by the International Brazilian Journal of Urology.
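    Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from an item-by-respondent score table as alpha = k/(k-1) · (1 - Σ item variances / variance of totals). A minimal sketch with hypothetical Likert data (4 items, 6 respondents; not the SCSCISC data):

```python
# Cronbach's alpha from raw Likert scores. Data are hypothetical.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent."""
    k = len(rows[0])
    items = list(zip(*rows))            # scores grouped per item
    totals = [sum(r) for r in rows]     # total score per respondent
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

scores = [
    [5, 4, 5, 4],
    [3, 3, 4, 3],
    [4, 4, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
]
print(round(cronbach_alpha(scores), 3))  # → 0.926
```

    Values near 1 indicate that the items vary together, i.e. they plausibly measure a single underlying construct, consistent with the unidimensionality reported above.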

  13. Validation of self-confidence scale for clean urinary intermittent self-catheterization for patients and health-caregivers

    Directory of Open Access Journals (Sweden)

    Cintia Fernandes Baccarin Biaziolo

    Full Text Available ABSTRACT Objective To validate a measurement instrument for clean intermittent self-catheterization for patients and health-caregivers. Material and Methods Methodological study of instrument validation performed at a Rehabilitation Center in a University hospital for patients submitted to clean intermittent self-catheterization and their health-caregivers. Following ethical criteria, data were collected during interviews with the nurse staff using a Likert questionnaire containing 16 items with 5 points each: “no confidence”=1, “little confidence”=2, “confident”=3, “very confident”=4 and “completely confident”=5. The questionnaire, called “Self-Confident Scale for Clean Intermittent Self-catheterization” (SCSCISC), was constructed based on the literature and previously validated (appearance and content). Results The instrument was validated by 122 patients and 119 health-caregivers, in a proportion of 15:1. A good linear association and sampling adequacy were observed (KMO=0.931; χ2=2881.63, p<0.001). The anti-image matrix showed high values on the diagonal, suggesting inclusion of all factors. Scree plot analysis suggested maintaining the items in a single set. All items correlated highly with the total score (Cronbach's alpha 0.944). The same results were obtained in the subsamples of patients and health-caregivers. Conclusion The instrument showed good psychometric adequacy, corroborating its use for evaluation of self-confidence during clean intermittent self-catheterization.

  14. Current Concerns in Validity Theory.

    Science.gov (United States)

    Kane, Michael

    Validity is concerned with the clarification and justification of the intended interpretations and uses of observed scores. It has not been easy to formulate a general methodology, or set of principles, for validation, but progress has been made, especially as the field has moved from relatively limited criterion-related models to sophisticated…

  15. Non-negative matrix factorization by maximizing correntropy for cancer clustering

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Xiaolei; Gao, Xin

    2013-01-01

    Background: Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measurement due to its stability to outliers or noise. Results: We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm. Conclusions: Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering. 2013 Wang et al.; licensee BioMed Central Ltd.
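    For orientation, the classical l2-norm NMF that NMF-MCC generalizes can be sketched with the standard Lee-Seung multiplicative updates; NMF-MCC itself replaces the l2 objective with correntropy and fits it by expectation conditional maximization, which is not reproduced here. The data below are random stand-ins for an expression matrix:

```python
# Baseline l2-norm NMF via multiplicative updates: X ≈ W H with W, H >= 0.
# Tiny random "expression" matrix for illustration only.
import random

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def t(A):
    return [list(r) for r in zip(*A)]

def frob(A, B):
    """Squared Frobenius distance between two matrices."""
    return sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb))

random.seed(0)
X = [[random.random() for _ in range(3)] for _ in range(4)]  # 4 genes x 3 samples
W = [[random.random() for _ in range(2)] for _ in range(4)]  # rank-2 factors
H = [[random.random() for _ in range(3)] for _ in range(2)]
eps = 1e-9
err0 = frob(X, mm(W, H))
for _ in range(200):
    # H <- H * (W^T X) / (W^T W H); W <- W * (X H^T) / (W H H^T)
    WtX, WtWH = mm(t(W), X), mm(t(W), mm(W, H))
    H = [[H[i][j] * WtX[i][j] / (WtWH[i][j] + eps) for j in range(3)] for i in range(2)]
    XHt, WHHt = mm(X, t(H)), mm(W, mm(H, t(H)))
    W = [[W[i][j] * XHt[i][j] / (WHHt[i][j] + eps) for j in range(2)] for i in range(4)]
print(frob(X, mm(W, H)) < err0)  # → True: the updates decrease the l2 error
```

    The multiplicative form keeps W and H non-negative automatically; swapping the l2 objective for correntropy, as the paper does, changes the update rules but not this overall alternating structure.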

  16. Non-negative matrix factorization by maximizing correntropy for cancer clustering

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-03-24

    Background: Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measurement due to its stability to outliers or noise. Results: We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm. Conclusions: Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering. 2013 Wang et al.; licensee BioMed Central Ltd.

  17. Friction Stir Processing of Copper-Coated SiC Particulate-Reinforced Aluminum Matrix Composite

    Directory of Open Access Journals (Sweden)

    Chih-Wei Huang

    2018-04-01

    Full Text Available In the present work, we propose a novel friction stir processing (FSP) route to produce a locally reinforced aluminum matrix composite (AMC) by stirring copper-coated SiC particulate reinforcement into an Al6061 alloy matrix. An electroless plating process was applied to deposit the copper coating on the SiC particulate reinforcement in order to improve the interfacial adhesion between the SiC particles and the Al matrix. On one hand, the core-shell SiC structure provides a layer for atomic diffusion between aluminum and copper, enhancing the cohesion between the reinforcing particles and the matrix; on the other hand, the dispersion of fine copper in the Al matrix during FSP provides further dispersive and solid-solution strengthening. Hardness distribution and tensile results across the stir zone validated the novel concept of improving the mechanical properties of the AMC via FSP. Optical microscopy (OM) and transmission electron microscopy (TEM) investigations were conducted to characterize the microstructure. Energy dispersive spectrometry (EDS), electron probe micro-analysis (EPMA), and X-ray diffraction (XRD) were employed to analyze the atomic inter-diffusion and the formation of intermetallics at the interface. The possible strengthening mechanisms of the AMC containing Cu-coated SiC particulate reinforcement are interpreted. The strengthening concept developed in this work may open a new way of fabricating particulate-reinforced metal matrix composites.

  18. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    Science.gov (United States)

    Burckhardt, Bjoern B.; Laeer, Stephanie

    2015-01-01

    In the USA and Europe, medicines agencies are pushing the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods that can deal with small sample volumes, as the trial-related blood loss permitted in children is very restricted. The broadly used HPLC-MS/MS, while able to cope with small volumes, is susceptible to matrix effects. The latter hamper precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive pressure manifold was conducted to meet the demands of high throughput within a clinical setting. The challenges faced, advances, and experiences in solid-phase extraction are presented using the example of the bioanalytical method development and validation for low-volume samples (50 μL serum). Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method comprising sample extraction by solid-phase extraction was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers. PMID:25873972

  19. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    Directory of Open Access Journals (Sweden)

    Bjoern B. Burckhardt

    2015-01-01

    Full Text Available In the USA and Europe, medicines agencies are pushing the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods that can deal with small sample volumes, as the trial-related blood loss permitted in children is very restricted. The broadly used HPLC-MS/MS, while able to cope with small volumes, is susceptible to matrix effects. The latter hamper precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive pressure manifold was conducted to meet the demands of high throughput within a clinical setting. The challenges faced, advances, and experiences in solid-phase extraction are presented using the example of the bioanalytical method development and validation for low-volume samples (50 μL serum). Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method comprising sample extraction by solid-phase extraction was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers.

  20. A matrix S for all simple current extensions

    International Nuclear Information System (INIS)

    Fuchs, J.; Schellekens, A.N.; Schweigert, C.

    1996-01-01

    A formula is presented for the modular transformation matrix S for any simple current extension of the chiral algebra of a conformal field theory. This provides in particular an algorithm for resolving arbitrary simple current fixed points, in such a way that the matrix S we obtain is unitary and symmetric and furnishes a modular group representation. The formalism works in principle for any conformal field theory. A crucial ingredient is a set of matrices S^J_{ab}, where J is a simple current and a and b are fixed points of J. We expect that these input matrices realize the modular group for the torus one-point functions of the simple currents. In the case of WZW-models these matrices can be identified with the S-matrices of the orbit Lie algebras that were introduced recently. As a special case of our conjecture we obtain the modular matrix S for WZW-theories based on group manifolds that are not simply connected, as well as for most coset models. (orig.)

  1. Factors Analysis And Profit Achievement For Trading Company By Using Rough Set Method

    Directory of Open Access Journals (Sweden)

    Muhammad Ardiansyah Sembiring

    2017-06-01

    Full Text Available This research analyzes the financial reports of a trading company, which are intimately related to the factors that determine the company's profit. The result of this research is new knowledge in the form of rules. The analysis follows the data mining process and uses the Rough Set method, which assists the company's managers in drawing intact and objective conclusions. The Rough Set method defines the rule discovery process, starting with the formation of the Decision System, Equivalence Classes, the Discernibility Matrix, the Discernibility Matrix Modulo D, Reduction, and General Rules. The Rough Set method is an effective model for performing this kind of analysis in the company. Keywords: Data Mining, General Rules, Profit, Rough Set.
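    The Discernibility Matrix Modulo D step named above admits a compact sketch: for each pair of objects whose decision values differ, record the condition attributes on which they differ; attribute sets hitting every non-empty entry are reducts. The toy decision system below is hypothetical, not the paper's data:

```python
# Discernibility matrix modulo the decision attribute for a toy
# rough-set decision system (condition attributes a, b; decision d).
objects = [
    {"a": "high", "b": "yes", "d": "profit"},
    {"a": "high", "b": "no",  "d": "profit"},
    {"a": "low",  "b": "yes", "d": "loss"},
    {"a": "low",  "b": "no",  "d": "loss"},
]
conditions = ["a", "b"]

def discernibility_modulo_d(objs, conds, decision="d"):
    """Entry (i, j): condition attributes separating objects i and j,
    recorded only when their decision values differ (modulo D)."""
    n = len(objs)
    m = [[frozenset() for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(i):  # lower triangle is enough (matrix is symmetric)
            if objs[i][decision] != objs[j][decision]:
                m[i][j] = frozenset(c for c in conds if objs[i][c] != objs[j][c])
    return m

m = discernibility_modulo_d(objects, conditions)
# Attribute 'a' alone discerns every pair with different decisions,
# so {a} is a reduct of this toy system.
print(all(not cell or "a" in cell for row in m for cell in row))  # → True
```

    Reduction then keeps only such minimal hitting sets, and the general rules are read off the reduced system (here, e.g., "a = high → d = profit").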

  2. Imaging Matrix Metalloproteases in Spontaneous Colon Tumors: Validation by Correlation with Histopathology.

    Science.gov (United States)

    Hensley, Harvey; Cooper, Harry S; Chang, Wen-Chi L; Clapper, Margie L

    2017-01-01

    The use of fluorescent probes in conjunction with white-light colonoscopy is a promising strategy for improving the detection of precancerous colorectal lesions, in particular flat (sessile) lesions that do not protrude into the lumen of the colon. We describe a method for determining the sensitivity and specificity of an enzymatically activated near-infrared probe (MMPSense680) for the detection of colon lesions in a mouse model (APC+/Min-FCCC) of spontaneous colorectal cancer. Fluorescence intensity correlates directly with the activity of matrix metalloproteinases (MMPs). Overexpression of MMPs is an early event in the development of colorectal lesions. Although the probe employed serves as a reporter of the activity of MMPs, our method can be applied to any fluorescent probe that targets an early molecular event in the development of colorectal tumors.

  3. Matrix theory

    CERN Document Server

    Franklin, Joel N

    2003-01-01

    Mathematically rigorous introduction covers vector and matrix norms, the condition-number of a matrix, positive and irreducible matrices, much more. Only elementary algebra and calculus required. Includes problem-solving exercises. 1968 edition.

  4. Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction

    Science.gov (United States)

    Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing

    2018-02-01

    Spectral computed tomography (CT) has been a promising technique in research and clinics because of its ability to produce improved energy resolution images with narrow energy bins. However, the narrow energy bin image is often affected by serious quantum noise because of the limited number of photons used in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches that are collected in multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component, and the low-rank component represents the stationary background over different energy bins, while the sparse component represents the rest of the different spectral features in individual energy bins. Subsequently, an effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted by using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.

  5. Deviations of the lepton mapping matrix from the Harrison-Perkins-Scott form

    International Nuclear Information System (INIS)

    Friedberg, R.; Lee, T.D.

    2010-01-01

    We propose a simple set of hypotheses governing the deviations of the leptonic mapping matrix from the Harrison-Perkins-Scott (HPS) form. These deviations are supposed to arise entirely from a perturbation of the mass matrix in the charged lepton sector. The perturbing matrix is assumed to be purely imaginary (thus maximally T-violating) and to have a strength in energy scale no greater (but perhaps smaller) than the muon mass. As we shall show, it then follows that the absolute values of the mapping matrix elements pertaining to the tau lepton deviate by no more than O((m_μ/m_τ)^2) ≅ 3.5 x 10^-3 from their HPS values. Assuming that (m_μ/m_τ)^2 can be neglected, we derive two simple constraints on the four parameters θ_12, θ_23, θ_31, and δ of the mapping matrix. These constraints are independent of the details of the imaginary T-violating perturbation of the charged lepton mass matrix. We also show that the e and μ parts of the mapping matrix have a definite form governed by two parameters α and β; any deviation of order m_μ/m_τ can be accommodated by adjusting these two parameters. (authors)

  6. Petz recovery versus matrix reconstruction

    Science.gov (United States)

    Holzäpfel, Milan; Cramer, Marcus; Datta, Nilanjana; Plenio, Martin B.

    2018-04-01

    The reconstruction of the state of a multipartite quantum mechanical system represents a fundamental task in quantum information science. At its most basic, it concerns a state of a bipartite quantum system whose subsystems are subjected to local operations. We compare two different methods for obtaining the original state from the state resulting from the action of these operations. The first method involves quantum operations called Petz recovery maps, acting locally on the two subsystems. The second method is called matrix (or state) reconstruction and involves local, linear maps that are not necessarily completely positive. Moreover, we compare the quantities on which the maps employed in the two methods depend. We show that any state that admits Petz recovery also admits state reconstruction. However, the latter is successful for a strictly larger set of states. We also compare these methods in the context of a finite spin chain. Here, the state of a finite spin chain is reconstructed from the reduced states of a few neighbouring spins. In this setting, state reconstruction is the same as the matrix product operator reconstruction proposed by Baumgratz et al. [Phys. Rev. Lett. 111, 020401 (2013)]. Finally, we generalize both these methods so that they employ long-range measurements instead of relying solely on short-range correlations embodied in such local reduced states. Long-range measurements enable the reconstruction of states which cannot be reconstructed from measurements of local few-body observables alone and hereby we improve existing methods for quantum state tomography of quantum many-body systems.

  7. Cross validation for the classical model of structured expert judgment

    International Nuclear Information System (INIS)

    Colson, Abigail R.; Cooke, Roger M.

    2017-01-01

    We update the 2008 TU Delft structured expert judgment database with data from 33 professionally contracted Classical Model studies conducted between 2006 and March 2015 to evaluate its performance relative to other expert aggregation models. We briefly review alternative mathematical aggregation schemes, including harmonic weighting, before focusing on linear pooling of expert judgments with equal weights and performance-based weights. Performance weighting outperforms equal weighting in all but 1 of the 33 studies in-sample. True out-of-sample validation is rarely possible for Classical Model studies, and cross validation techniques that split calibration questions into a training and test set are used instead. Performance weighting incurs an “out-of-sample penalty” and its statistical accuracy out-of-sample is lower than that of equal weighting. However, as a function of training set size, the statistical accuracy of performance-based combinations reaches 75% of the equal weight value when the training set includes 80% of calibration variables. At this point the training set is sufficiently powerful to resolve differences in individual expert performance. The information of performance-based combinations is double that of equal weighting when the training set is at least 50% of the set of calibration variables. Previous out-of-sample validation work used a Total Out-of-Sample Validity Index based on all splits of the calibration questions into training and test subsets, which is expensive to compute and includes small training sets of dubious value. As an alternative, we propose an Out-of-Sample Validity Index based on averaging the product of statistical accuracy and information over all training sets sized at 80% of the calibration set. Performance weighting outperforms equal weighting on this Out-of-Sample Validity Index in 26 of the 33 post-2006 studies; the probability of 26 or more successes on 33 trials if there were no difference between performance
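    The proposed Out-of-Sample Validity Index can be illustrated schematically: score the experts on every training subset containing 80% of the calibration questions, evaluate the resulting performance-based choice on the held-out questions, and average over all such splits. The sketch below replaces the Classical Model's calibration score with a simple fraction-correct weight and uses hypothetical data:

```python
# Schematic 80%-training-split cross validation of performance weighting.
# The scoring rule (fraction of training questions answered correctly) is a
# deliberate simplification of the Classical Model's calibration score, and
# the expert answers are hypothetical.
from itertools import combinations

# answers[e][q] = 1 if expert e got calibration question q right
answers = [
    [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],   # well-calibrated expert
    [1, 0, 1, 0, 0, 1, 0, 1, 0, 0],   # weaker expert
    [0, 1, 0, 1, 1, 0, 1, 0, 1, 0],   # weaker expert
]
n_q = len(answers[0])
train_size = int(0.8 * n_q)

def split_score(train, test):
    # Performance weights from the training questions only ...
    weights = [sum(a[q] for q in train) for a in answers]
    best = max(range(len(answers)), key=lambda e: weights[e])
    # ... evaluated strictly out-of-sample on the held-out questions.
    return sum(answers[best][q] for q in test) / len(test)

scores = []
for train in combinations(range(n_q), train_size):
    test = [q for q in range(n_q) if q not in train]
    scores.append(split_score(train, test))
index = sum(scores) / len(scores)     # averaged over all 80% training splits
print(round(index, 2))                # → 0.8
```

    With 80% training sets, the dominant expert is identified on every split, so the index reflects that expert's out-of-sample accuracy; in the paper's terms, a large enough training set is what resolves differences in individual expert performance.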

  8. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  9. Validation philosophy

    International Nuclear Information System (INIS)

    Vornehm, D.

    1994-01-01

    To determine when a set of calculations falls within the umbrella of existing validation documentation, it is necessary to generate a quantitative definition of the range of applicability (our definition is only qualitative), for two reasons: (1) the current trend in our regulatory environment will soon make it impossible to support the legitimacy of a validation without quantitative guidelines; and (2) in my opinion, the lack of support by DOE for further critical experiment work is directly tied to our inability to draw a quantitative "line in the sand" beyond which we will not use computer-generated values.

  10. The detection of influential subsets in linear regression using an influence matrix

    OpenAIRE

    Peña, Daniel; Yohai, Víctor J.

    1991-01-01

    This paper presents a new method to identify influential subsets in linear regression problems. The procedure uses the eigenstructure of an influence matrix, defined as the matrix of uncentered covariances of the effect on the whole data set of deleting each observation, normalized to include the univariate Cook's statistics on the diagonal. It is shown that points in an influential subset will appear with large weight in at least one of the eigenvectors linked to the largest eigenvalues...

  11. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues.

    Science.gov (United States)

    Mourya, Devendra T; Yadav, Pragya D; Khare, Ajay; Khan, Anwar H

    2017-10-01

    With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being set up in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on certification and validation of biosafety laboratories. There is no accredited government/private agency available in India to undertake validation and certification of biosafety laboratories. Therefore, the reliance is mostly on the indigenous experience, talent and expertise available, which is in short supply. This article elucidates the process of certification and validation of biosafety laboratories in a concise manner for the understanding of the concerned users and suggests the important parameters and criteria that should be considered and addressed during the laboratory certification and validation process.

  12. Binding of matrix metalloproteinase inhibitors to extracellular matrix: 3D-QSAR analysis.

    Science.gov (United States)

    Zhang, Yufen; Lukacova, Viera; Bartus, Vladimir; Nie, Xiaoping; Sun, Guorong; Manivannan, Ethirajan; Ghorpade, Sandeep R; Jin, Xiaomin; Manyem, Shankar; Sibi, Mukund P; Cook, Gregory R; Balaz, Stefan

    2008-10-01

    Binding to the extracellular matrix, one of the most abundant human protein complexes, significantly affects drug disposition. Specifically, the interactions with extracellular matrix determine the free concentrations of small molecules acting in tissues, including signaling peptides, inhibitors of tissue remodeling enzymes such as matrix metalloproteinases, and other drug candidates. The nature of extracellular matrix binding was elucidated for 63 matrix metalloproteinase inhibitors, for which the association constants to an extracellular matrix mimic were reported here. The data did not correlate with lipophilicity as a common determinant of structure-nonspecific, orientation-averaged binding. A hypothetical structure of the binding site of the solidified extracellular matrix surrogate was analyzed using the Comparative Molecular Field Analysis, which needed to be applied in our multi-mode variant. This fact indicates that the compounds bind to extracellular matrix in multiple modes, which cannot be considered as completely orientation-averaged and exhibit structural dependence. The novel comparative molecular field analysis models, exhibiting satisfactory descriptive and predictive abilities, are suitable for prediction of the extracellular matrix binding for the untested chemicals, which are within applicability domains. The results contribute to a better prediction of the pharmacokinetic parameters such as the distribution volume and the tissue-blood partition coefficients, in addition to a more imminent benefit for the development of more effective matrix metalloproteinase inhibitors.

  13. A Revalidation of the SET37 Questionnaire for Student Evaluations of Teaching

    Science.gov (United States)

    Mortelmans, Dimitri; Spooren, Pieter

    2009-01-01

    In this study, the authors report on the validity and reliability of a paper-and-pencil instrument called SET37 used for Student Evaluation of Teaching (SET) in higher education. Using confirmatory factor analysis on 2525 questionnaires, a revalidation of the SET37 shows construct and discriminant validity of the 12 dimensions included in the…

  14. Matrix calculus

    CERN Document Server

    Bodewig, E

    1959-01-01

    Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions, as well

  15. Neutrino mass matrix

    International Nuclear Information System (INIS)

    Strobel, E.L.

    1985-01-01

    Given the many conflicting experimental results, examination is made of the neutrino mass matrix in order to determine possible masses and mixings. It is assumed that the Dirac mass matrix for the electron, muon, and tau neutrinos is similar in form to those of the quarks and charged leptons, and that the smallness of the observed neutrino masses results from the Gell-Mann-Ramond-Slansky mechanism. Analysis of masses and mixings for the neutrinos is performed using general structures for the Majorana mass matrix. It is shown that if certain tentative experimental results concerning the neutrino masses and mixing angles are confirmed, significant limitations may be placed on the Majorana mass matrix. The most satisfactory simple assumption concerning the Majorana mass matrix is that it is approximately proportional to the Dirac mass matrix. A very recent experimental neutrino mass result and its implications are discussed. Some general properties of matrices with structure similar to the Dirac mass matrices are discussed

  16. The cellulose resource matrix.

    Science.gov (United States)

    Keijsers, Edwin R P; Yılmaz, Gülden; van Dam, Jan E G

    2013-03-01

    feedstock and the performance in the end-application. The cellulose resource matrix should become a practical tool for stakeholders to make choices regarding raw materials, process or market. Although there is a vast amount of scientific and economic information available on cellulose and lignocellulosic resources, the accessibility for the interested layman or entrepreneur is very difficult and the relevance of the numerous details in the larger context is limited. Translation of science to practical accessible information with modern data management and data integration tools is a challenge. Therefore, a detailed matrix structure was composed in which the different elements or entries of the matrix were identified and a tentative rough set up was made. The inventory includes current commodities and new cellulose containing and raw materials as well as exotic sources and specialties. Important chemical and physical properties of the different raw materials were identified for the use in processes and products. When available, the market data such as price and availability were recorded. Established and innovative cellulose extraction and refining processes were reviewed. The demands on the raw material for suitable processing were collected. Processing parameters known to affect the cellulose properties were listed. Current and expected emerging markets were surveyed as well as their different demands on cellulose raw materials and processes. The setting up of the cellulose matrix as a practical tool requires two steps. Firstly, the reduction of the needed data by clustering of the characteristics of raw materials, processes and markets and secondly, the building of a database that can provide the answers to the questions from stakeholders with an indicative character. This paper describes the steps taken to achieve the defined clusters of most relevant and characteristic properties. These data can be expanded where required. More detailed specification can be obtained

  17. Body fluid matrix evaluation on a Roche cobas 8000 system.

    Science.gov (United States)

    Owen, William E; Thatcher, Mindy L; Crabtree, Karolyn J; Greer, Ryan W; Strathmann, Frederick G; Straseski, Joely A; Genzen, Jonathan R

    2015-09-01

    Chemical analysis of body fluids is commonly requested by physicians. Because most commercial FDA-cleared clinical laboratory assays are not validated by diagnostic manufacturers for "non-serum" and "non-plasma" specimens, laboratories may need to complete additional validation studies to comply with regulatory requirements regarding body fluid testing. The objective of this report is to perform recovery studies to evaluate potential body fluid matrix interferences for commonly requested chemistry analytes. Using an IRB-approved protocol, previously collected clinical body fluid specimens (biliary/hepatic, cerebrospinal, dialysate, drain, pancreatic, pericardial, peritoneal, pleural, synovial, and vitreous) were de-identified and frozen (-20°C) until experiments were performed. Recovery studies (spiking with high concentration serum, control, and/or calibrator) were conducted using 10% spiking solution by volume; n=5 specimens per analyte/body fluid investigated. Specimens were tested on a Roche cobas 8000 system (c502, c702, e602, and ISE modules). In all 80 analyte/body fluid combinations investigated (including amylase, total bilirubin, urea nitrogen, carbohydrate antigen 19-9, carcinoembryonic antigen, cholesterol, chloride, creatinine, glucose, potassium, lactate dehydrogenase, lipase, rheumatoid factor, sodium, total protein, triglycerides, and uric acid), the average percent recovery was within predefined acceptable limits (less than ±10% from the calculated ideal recovery). The present study provides evidence against the presence of any systematic matrix interference in the analyte/body fluid combinations investigated on the Roche cobas 8000 system. Such findings support the utility of ongoing body fluid validation initiatives conducted to maintain compliance with regulatory requirements.
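    The acceptance criterion above (average recovery within ±10% of the calculated ideal) reduces to a short calculation. The sketch below is illustrative only; the function name and concentrations are invented, not taken from the study.

```python
def percent_recovery(baseline, spiked, expected_added):
    """Percent recovery of a spiked analyte:
    100 * measured increase / expected increase."""
    return 100.0 * (spiked - baseline) / expected_added

# Invented values: baseline fluid glucose 50 mg/dL, spike expected to add
# 100 mg/dL, measured spiked specimen 148 mg/dL.
recovery = percent_recovery(50.0, 148.0, 100.0)        # 98.0
acceptable = abs(recovery - 100.0) <= 10.0             # within ±10% of ideal
```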

  18. Certification & validation of biosafety level-2 & biosafety level-3 laboratories in Indian settings & common issues

    Directory of Open Access Journals (Sweden)

    Devendra T Mourya

    2017-01-01

    Full Text Available With increasing awareness regarding biorisk management worldwide, many biosafety laboratories are being setup in India. It is important for the facility users, project managers and the executing agencies to understand the process of validation and certification of such biosafety laboratories. There are some international guidelines available, but there are no national guidelines or reference standards available in India on certification and validation of biosafety laboratories. There is no accredited government/private agency available in India to undertake validation and certification of biosafety laboratories. Therefore, the reliance is mostly on indigenous experience, talent and expertise available, which is in short supply. This article elucidates the process of certification and validation of biosafety laboratories in a concise manner for the understanding of the concerned users and suggests the important parameters and criteria that should be considered and addressed during the laboratory certification and validation process.

  19. Discriminating real victims from feigners of psychological injury in gender violence: Validating a protocol for forensic setting

    Directory of Open Access Journals (Sweden)

    Ramon Arce

    2009-07-01

    Full Text Available Standard clinical assessment of psychological injury does not provide valid evidence in forensic settings, and screening of genuine from feigned complaints must be undertaken prior to the diagnosis of mental state (American Psychological Association, 2002). Whereas psychological injury is Post-traumatic Stress Disorder (PTSD), a clinical diagnosis may encompass other nosologies (e.g., depression and anxiety). The assessment of psychological injury in forensic contexts requires a multimethod approach consisting of a psychometric measure and an interview. To assess the efficacy of the multimethod approach in discriminating real from false victims, 25 real victims of gender violence and 24 feigners were assessed using (a) the Symptom Checklist-90-Revised (SCL-90-R), a recognition task, and (b) a forensic clinical interview, a knowledge task. The results revealed that feigners reported more clinical symptoms on the SCL-90-R than real victims. Moreover, the feigning indicators on the SCL-90-R, GSI, PST, and PSDI were higher in feigners, but not sufficient to provide a screening test for invalidating feigning protocols. In contrast, real victims reported more clinical symptoms related to PTSD in the forensic clinical interview than feigners. Notwithstanding, in the forensic clinical interview feigners were able to feign PTSD which was not detected by the analysis of feigning strategies. The combination of both measures and their corresponding validity controls enabled the discrimination of real victims from feigners. Hence, a protocol for discriminating the psychological sequelae of real victims from feigners of gender violence is described.

  20. IMPACT OF MATRIX INVERSION ON THE COMPLEXITY OF THE FINITE ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    M. Sybis

    2016-04-01

    Full Text Available Purpose. The development of a wide construction market and the desire to design innovative architectural building constructions have resulted in the need to create complex numerical models of objects with increasingly high computational complexity. The purpose of this work is to show that choosing a proper method for solving the set of equations can reduce the calculation time (and hence the complexity) by several orders of magnitude. Methodology. The article presents an analysis of the impact of the matrix inversion algorithm on the calculation of beam deflection using the finite element method (FEM). Based on a literature analysis, common methods for solving sets of equations were identified. From the found solutions, Gaussian elimination and the LU and Cholesky decomposition methods were implemented to determine the effect of the matrix inversion algorithm used for solving the equation set on the number of computational operations performed. In addition, each of the implemented methods was further optimized, thereby reducing the number of necessary arithmetic operations. Findings. These optimizations exploit certain properties of the matrix, such as symmetry or a significant number of zero elements. The results of the analysis are presented for divisions of the beam into 5, 50, 100 and 200 nodes, for which the deflection was calculated. Originality. The main achievement of this work is that it shows the impact of the chosen method on the complexity of solving the problem (or, equivalently, the time needed to obtain results). Practical value. The difference between the best (least complex) and the worst (most complex) method amounts to a few orders of magnitude. This result shows that choosing the wrong method may significantly increase the time needed to perform the calculation.
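    The complexity gap the abstract reports can be seen from leading-order operation counts alone. The sketch below is an editorial illustration, not the article's code: the cubic coefficients are the standard textbook leading terms for dense solvers, and the banded count assumes a half-bandwidth b (FEM stiffness matrices are symmetric and banded, which is what the optimizations exploit).

```python
# Rough leading-order floating-point operation counts for solving an
# n-by-n linear system (illustrative leading terms only).
def flops_gauss(n):      # general Gaussian elimination / LU: ~(2/3) n^3
    return (2 * n**3) // 3

def flops_cholesky(n):   # exploits symmetry: roughly half the work, ~(1/3) n^3
    return n**3 // 3

def flops_banded(n, b):  # banded Cholesky with half-bandwidth b: O(n * b^2)
    return n * b * b

# Node counts used in the article's beam example
for n in (5, 50, 100, 200):
    print(n, flops_gauss(n), flops_cholesky(n), flops_banded(n, 2))
```

For n = 200, the dense count is on the order of 5 million operations while the banded count is under a thousand, which matches the abstract's "few orders of magnitude" claim.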

  1. Matrix fluid chemistry experiment. Final report June 1998 - March 2003

    International Nuclear Information System (INIS)

    Smellie, John A.T.; Waber, H. Niklaus; Frape, Shaun K.

    2003-06-01

    The Matrix Fluid Chemistry Experiment set out to determine the composition and evolution of matrix pore fluids/waters in low permeable rock located at repository depths in the Aespoe Hard Rock Laboratory (HRL). Matrix pore fluids/waters can be highly saline in composition and, if accessible, may influence the near-field groundwater chemistry of a repository system. Characterising pore fluids/waters involved in-situ borehole sampling and analysis integrated with laboratory studies and experiments on rock matrix drill core material. Relating the rate of in-situ pore water accumulation during sampling to the measured rock porosity indicated a hydraulic conductivity of 10⁻¹⁴-10⁻¹³ m/s for the rock matrix. This was in accordance with earlier estimated predictions. The sampled matrix pore water, brackish in type, mostly represents older palaeo-groundwater mixtures preserved in the rock matrix and dating back to at least the last glaciation. A component of matrix pore 'fluid' is also present. One borehole section suggests a younger groundwater component which has accessed the rock matrix during the experiment. There is little evidence that the salinity of the matrix pore waters has been influenced significantly by fluid inclusion populations hosted by quartz. Crush/leach, cation exchange, pore water diffusion and pore water displacement laboratory experiments were carried out to compare extracted/calculated matrix pore fluids/waters with in-situ sampling. Of these the pore water diffusion experiments appear to be the most promising approach and a recommended site characterisation protocol has been formulated. The main conclusions from the Matrix Fluid Chemistry Experiment are: Groundwater movement within the bedrock hosting the experimental site has been enhanced by increased hydraulic gradients generated by the presence of the tunnel, and to a much lesser extent by the borehole itself. Over experimental timescales (∼4 years) solute transport through the rock matrix

  2. Matrix fluid chemistry experiment. Final report June 1998 - March 2003

    Energy Technology Data Exchange (ETDEWEB)

    Smellie, John A.T. [Conterra AB, Luleaa (Sweden); Waber, H. Niklaus [Univ. of Bern (Switzerland). Inst. of Geology; Frape, Shaun K. [Univ. of Waterloo (Canada). Dept. of Earth Sciences

    2003-06-01

    The Matrix Fluid Chemistry Experiment set out to determine the composition and evolution of matrix pore fluids/waters in low permeable rock located at repository depths in the Aespoe Hard Rock Laboratory (HRL). Matrix pore fluids/waters can be highly saline in composition and, if accessible, may influence the near-field groundwater chemistry of a repository system. Characterising pore fluids/waters involved in-situ borehole sampling and analysis integrated with laboratory studies and experiments on rock matrix drill core material. Relating the rate of in-situ pore water accumulation during sampling to the measured rock porosity indicated a hydraulic conductivity of 10⁻¹⁴-10⁻¹³ m/s for the rock matrix. This was in accordance with earlier estimated predictions. The sampled matrix pore water, brackish in type, mostly represents older palaeo-groundwater mixtures preserved in the rock matrix and dating back to at least the last glaciation. A component of matrix pore 'fluid' is also present. One borehole section suggests a younger groundwater component which has accessed the rock matrix during the experiment. There is little evidence that the salinity of the matrix pore waters has been influenced significantly by fluid inclusion populations hosted by quartz. Crush/leach, cation exchange, pore water diffusion and pore water displacement laboratory experiments were carried out to compare extracted/calculated matrix pore fluids/waters with in-situ sampling. Of these the pore water diffusion experiments appear to be the most promising approach and a recommended site characterisation protocol has been formulated. The main conclusions from the Matrix Fluid Chemistry Experiment are: Groundwater movement within the bedrock hosting the experimental site has been enhanced by increased hydraulic gradients generated by the presence of the tunnel, and to a much lesser extent by the borehole itself. Over experimental timescales (∼4 years) solute transport

  3. An algorithm for calculation of the Jordan canonical form of a matrix

    Science.gov (United States)

    Sridhar, B.; Jordan, D.

    1973-01-01

    Jordan canonical forms are used extensively in the literature on control systems. However, very few methods are available to compute them numerically. Most numerical methods compute a set of basis vectors in terms of which the given matrix is diagonalized when such a change of basis is possible. Here, a simple and efficient method is suggested for computing the Jordan canonical form and the corresponding transformation matrix. The method is based on the definition of a generalized eigenvector, and a natural extension of Gauss elimination techniques.
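    The generalized-eigenvector construction the method rests on can be shown on a small defective matrix (a hypothetical textbook example, not the paper's algorithm or test case):

```python
# For A = [[3, 1], [-1, 1]], eigenvalue 2 has algebraic multiplicity 2 but
# only one independent eigenvector, so A is not diagonalizable.
# (A - 2I) v1 = 0 gives the ordinary eigenvector v1, and the generalized
# eigenvector v2 solves (A - 2I) v2 = v1. With P = [v1 | v2],
# P^-1 A P is the Jordan block J = [[2, 1], [0, 2]].

def matmul(X, Y):
    """2x2 matrix product, plain Python."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, 1], [-1, 1]]
v1 = [1, -1]         # eigenvector:             (A - 2I) v1 = 0
v2 = [1, 0]          # generalized eigenvector: (A - 2I) v2 = v1
P = [[v1[0], v2[0]], [v1[1], v2[1]]]   # columns v1, v2
P_inv = [[0, -1], [1, 1]]              # inverse of P (det P = 1)
J = matmul(P_inv, matmul(A, P))
# J is [[2, 1], [0, 2]]: a single Jordan block for eigenvalue 2
```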

  4. "A New Class of Creep Resistant Oxide/Oxide Ceramic Matrix Composites"

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Mohit Jain, Dr. Ganesh Skandan, Prof. Roger Cannon, Rutgers University

    2007-03-30

    Despite recent progress in the development of SiC-SiC ceramic matrix composites (CMCs), their application in industrial gas turbines for distributed energy (DE) systems has been limited. The poor oxidation resistance of the non-oxide ceramics warrants the use of environmental barrier coatings (EBCs), which in turn lead to issues pertaining to life expectancy of the coatings. On the other hand, oxide/oxide CMCs are potential replacements, but their use has been limited until now due to the poor creep resistance at high temperatures, particularly above 1200 °C: the lack of a creep resistant matrix has been a major limiting factor. Using yttrium aluminum garnet (YAG) as the matrix material system, we have advanced the state-of-the-art in oxide/oxide CMCs by introducing innovations in both the structure and composition of the matrix material, thereby leading to high temperature matrix creep properties not achieved until now. An array of YAG-based powders with a unique set of particle characteristics were produced in-house and sintered to full density and compressive creep data was obtained. Aided in part by the composition and the microstructure, the creep rates were found to be two orders of magnitude smaller than the most creep resistant oxide fiber available commercially. Even after accounting for porosity and a smaller matrix grain size in a practical CMC component, the YAG-based matrix material was found to creep slower than the most creep resistant oxide fiber available commercially.

  5. A Revised Set of Dendroclimatic Reconstructions of Summer Drought over the Conterminous U.S.

    Science.gov (United States)

    Zhang, Z.; Mann, M. E.; Cook, E. R.

    2002-12-01

    We describe a revised set of dendroclimatic reconstructions of drought patterns over the conterminous U.S. back to 1700. These reconstructions are based on a set of 483 drought-sensitive tree ring chronologies available across the continental U.S. used previously by Cook et al [Cook, E.R., D.M. Meko, D.W. Stahle, and M.K. Cleaveland, Drought Reconstructions for the Continental United States, Journal of Climate, 12, 1145-1162, 1999]. In contrast with the "Point by Point" (PPR) local regression technique used by Cook et al (1999), the tree ring data were calibrated against the instrumental record of summer drought [June-August Palmer Drought Severity Index (PDSI)] based on application of the "Regularized Expectation Maximization" (RegEM) algorithm to relate proxy and instrumental data over a common (20th century) interval. This approach calibrates the proxy data set against the instrumental record by treating the reconstruction as initially missing data in the combined proxy/instrumental data matrix, and optimally estimating the mean and covariances of the combined data matrix through an iterative procedure which yields a reconstruction of the PDSI field with minimal error variance [Schneider, T., Analysis of Incomplete Climate Data: Estimation of Mean Values and Covariance Matrices and Imputation of Missing Values, Journal of Climate, 14, 853-871, 2001; Mann, M.E., Rutherford, S., Climate Reconstruction Using 'Pseudoproxies', Geophysical Research Letters, 29, 139-1-139-4, 2002; Rutherford, S., Mann, M.E., Delworth, T.L., Stouffer, R., The Performance of Covariance-Based Methods of Climate Field Reconstruction Under Stationary and Nonstationary Forcing, J. Climate, accepted, 2002]. As in Cook et al (1999), a screening procedure was first used to select an optimal subset of candidate tree-ring drought predictors, and the predictors (tree ring data) and predictand (instrumental PDSI) were pre-whitened prior to calibration (with serial correlation added back into the
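    The iterative mean/covariance estimation described above can be sketched on a toy two-column example. This shows only the plain expectation-maximization imputation idea, without the ridge-type regularization that distinguishes RegEM; the function and data below are invented for illustration.

```python
import numpy as np

def em_impute(X, mask, iters=50):
    """Impute missing entries of column 0 (where mask is True) by alternating
    between re-estimating the mean/covariance of the completed matrix (M-step)
    and replacing the missing entries with their conditional expectation
    given column 1 under those estimates (E-step)."""
    X = X.copy()
    X[mask, 0] = X[~mask, 0].mean()          # initialize with observed mean
    for _ in range(iters):
        mu = X.mean(axis=0)                  # M-step: mean and covariance
        C = np.cov(X, rowvar=False)
        # E-step: conditional expectation of column 0 given column 1
        X[mask, 0] = mu[0] + C[0, 1] / C[1, 1] * (X[mask, 1] - mu[1])
    return X

# Invented toy data: column 0 is exactly twice column 1; the value in the
# last row of column 0 is treated as missing and should converge to 8.
data = np.array([[2.0, 1.0], [4.0, 2.0], [6.0, 3.0], [0.0, 4.0]])
mask = np.array([False, False, False, True])
completed = em_impute(data, mask)
```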

  6. Atomization of Cd in U+Zr matrix after chemical separation using GF-AAS

    International Nuclear Information System (INIS)

    Thulasidas, S.K.; Gupta, Santosh Kumar; Natarajan, V.

    2014-01-01

    Studies on the direct atomization of Cd in U+Zr matrix were carried out and the effect of matrix composition and matrix concentration on the analyte absorbance were investigated. Development of a method using graphite furnace atomic absorption spectrometry (GF-AAS) for determination of Cd is required for FBR fuel (U+20%Zr) materials. It was reported that the absorbance signal for Cd is reduced with matrix, 50% at 20 mg/mL of U and 10 mg/mL of Zr matrix as compared to matrix free solution. To use the method for U+Zr mixed oxide samples, effect of varying composition of Zr in U+Zr mixed matrix was studied. The results indicated that Cd absorbance signal remained unaffected in the range 0-40% Zr in (U+Zr) mixed matrix with 20 mg/mL total matrix. Based on these studies, an analytical method was developed for the direct determination of Cd with 20% Zr in 20 mg/mL of U+Zr solution with optimized experimental parameters. The range of analysis was found to be 0.005-0.1 μg/mL for Cd with 20 mg/mL matrix; this leads to detection limits of 0.25 ppm. To meet the specification limits at 0.1 ppm level for Cd, it was necessary to separate the matrix from the sample using solvent extraction method. It was reported that with 30% TBP + 70% CCl₄ in 7M HNO₃, a selective simultaneous extraction of U and Zr into the organic phase can be achieved. In the present studies, same extraction procedure was used with 100 mg U+Zr sample. The effect of U+Zr in raffinate on Cd was also estimated. To validate the method, the extracted aqueous samples were also analyzed by ICP-AES SPECTRO ARCOS SOP technique independently and the results were compared. It was seen that Cd estimation was not affected in the presence of 10-50 μg/mL U+Zr by ICP-AES as well

  7. Validation and Application of a Dried Blood Spot Ceftriaxone Assay

    Science.gov (United States)

    Page-Sharp, Madhu; Nunn, Troy; Salman, Sam; Moore, Brioni R.; Batty, Kevin T.; Davis, Timothy M. E.

    2015-01-01

    Dried blood spot (DBS) antibiotic assays can facilitate pharmacokinetic/pharmacodynamic (PK/PD) studies in situations where venous blood sampling is logistically and/or ethically problematic. In this study, we aimed to develop, validate, and apply a DBS ceftriaxone assay. A liquid chromatography-tandem mass spectrometry (LC-MS/MS) DBS ceftriaxone assay was assessed for matrix effects, process efficiency, recovery, variability, and limits of quantification (LOQ) and detection (LOD). The effects of hematocrit, protein binding, red cell partitioning, and chad positioning were evaluated, and thermal stability was assessed. Plasma, DBS, and cell pellet ceftriaxone concentrations in 10 healthy adults were compared, and plasma concentration-time profiles of DBS and plasma ceftriaxone were incorporated into population PK models. The LOQ and LOD for ceftriaxone in DBS were 0.14 mg/liter and 0.05 mg/liter, respectively. Adjusting for hematocrit, red cell partitioning, and relative recovery, DBS-predicted plasma concentrations were comparable to measured plasma concentrations (r > 0.95, P 95% initial concentrations in DBS for 14 h, 35 h, 30 days, 21 weeks, and >11 months, respectively. The present DBS ceftriaxone assay is robust and can be used as a surrogate for plasma concentrations to provide valid PK and PK/PD data in a variety of clinical situations, including in studies of young children and of those in remote or resource-poor settings. PMID:26438505
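    The hematocrit/recovery adjustment mentioned above can be sketched with a generic first-order conversion. This is a common textbook correction, not the study's exact model; the function name and numbers are invented, and the formula assumes the analyte stays in the plasma fraction (negligible red-cell partitioning).

```python
def dbs_to_plasma(c_dbs, hematocrit, recovery=1.0):
    """First-order DBS-to-plasma conversion: divide out assay recovery to
    get the whole-blood concentration, then convert to plasma via the
    hematocrit, assuming the analyte is confined to the plasma fraction."""
    c_blood = c_dbs / recovery
    return c_blood / (1.0 - hematocrit)

# Invented example: 10 mg/L measured in the spot, hematocrit 0.45,
# 95% relative recovery.
c_plasma = dbs_to_plasma(10.0, 0.45, recovery=0.95)
```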

  8. Mean deformation metrics for quantifying 3D cell–matrix interactions without requiring information about matrix material properties

    Science.gov (United States)

    Stout, David A.; Bar-Kochba, Eyal; Estrada, Jonathan B.; Toyjanova, Jennet; Kesari, Haneesh; Reichner, Jonathan S.; Franck, Christian

    2016-01-01

    Mechanobiology relates cellular processes to mechanical signals, such as determining the effect of variations in matrix stiffness with cell tractions. Cell traction recorded via traction force microscopy (TFM) commonly takes place on materials such as polyacrylamide- and polyethylene glycol-based gels. Such experiments remain limited in physiological relevance because cells natively migrate within complex tissue microenvironments that are spatially heterogeneous and hierarchical. Yet, TFM requires determination of the matrix constitutive law (stress–strain relationship), which is not always readily available. In addition, the currently achievable displacement resolution limits the accuracy of TFM for relatively small cells. To overcome these limitations, and increase the physiological relevance of in vitro experimental design, we present a new approach and a set of associated biomechanical signatures that are based purely on measurements of the matrix's displacements without requiring any knowledge of its constitutive laws. We show that our mean deformation metrics (MDM) approach can provide significant biophysical information without the need to explicitly determine cell tractions. In the process of demonstrating the use of our MDM approach, we succeeded in expanding the capability of our displacement measurement technique such that it can now measure the 3D deformations around relatively small cells (∼10 micrometers), such as neutrophils. Furthermore, we also report previously unseen deformation patterns generated by motile neutrophils in 3D collagen gels. PMID:26929377
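    One appeal of displacement-only metrics is how directly they can be computed from the measured field, with no constitutive law in sight. The sketch below shows one simple metric of this kind (mean displacement magnitude over a grid); it is an editorial illustration, not the authors' published MDM definitions, and the data are synthetic.

```python
import numpy as np

def mean_displacement_magnitude(u):
    """Mean magnitude of a 3D displacement field.
    u: array of shape (nx, ny, nz, 3) holding displacement vectors."""
    return np.linalg.norm(u, axis=-1).mean()

# Synthetic field: a uniform 2-micrometer displacement in x everywhere
# on a 4x4x4 grid around a hypothetical cell.
u = np.zeros((4, 4, 4, 3))
u[..., 0] = 2.0
mdm = mean_displacement_magnitude(u)   # 2.0
```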

  9. Validated RP-HPLC/DAD Method for the Quantification of Insect Repellent Ethyl 2-Aminobenzoate in Membrane-Moderated Matrix Type Monolithic Polymeric Device.

    Science.gov (United States)

    Islam, Johirul; Zaman, Kamaruz; Chakrabarti, Srijita; Sharma Bora, Nilutpal; Mandal, Santa; Pratim Pathak, Manash; Srinivas Raju, Pakalapati; Chattopadhyay, Pronobesh

    2017-07-01

    A simple, accurate and sensitive reversed-phase high-performance liquid chromatographic (RP-HPLC) method has been developed for the estimation of ethyl 2-aminobenzoate (EAB) in a matrix type monolithic polymeric device and validated as per the International Conference on Harmonization guidelines. The analysis was performed isocratically on a ZORBAX Eclipse plus C18 analytical column (250 × 4.4 mm, 5 μm) and a diode array detector (DAD) using acetonitrile and water (75:25 v/v) as the mobile phase by keeping the flow-rate constant at 1.0 mL/min. Determination of EAB was not interfered in the presence of excipients. Inter- and intra-day relative standard deviations were not higher than 2%. Mean recovery was between 98.7 and 101.3%. Calibration curve was linear in the concentration range of 0.5-10 µg/mL. Limits of detection and quantification were 0.19 and 0.60 µg/mL, respectively. Thus, the present report puts forward a novel method for the estimation of EAB, an emerging insect repellent, by using RP-HPLC technique.
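    The LOD/LOQ figures quoted above are typically derived from the calibration curve using the ICH Q2-style formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation and S the slope. The sketch below illustrates that calculation on synthetic data, not the paper's measurements.

```python
import numpy as np

# Synthetic calibration data (invented): concentration vs. detector response
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 10.0])            # ug/mL
area = np.array([51.0, 99.0, 203.0, 401.0, 798.0, 1002.0])  # peak area

# Linear calibration fit: response = S * conc + intercept
S, intercept = np.polyfit(conc, area, 1)
residuals = area - (S * conc + intercept)
sigma = residuals.std(ddof=2)   # residual SD, 2 fitted parameters

lod = 3.3 * sigma / S           # limit of detection
loq = 10.0 * sigma / S          # limit of quantification
```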

  10. The Virtual Care Climate Questionnaire: Development and Validation of a Questionnaire Measuring Perceived Support for Autonomy in a Virtual Care Setting.

    Science.gov (United States)

    Smit, Eline Suzanne; Dima, Alexandra Lelia; Immerzeel, Stephanie Annette Maria; van den Putte, Bas; Williams, Geoffrey Colin

    2017-05-08

    Web-based health behavior change interventions may be more effective if they offer autonomy-supportive communication facilitating the internalization of motivation for health behavior change. Yet, at this moment no validated tools exist to assess user-perceived autonomy-support of such interventions. The aim of this study was to develop and validate the virtual climate care questionnaire (VCCQ), a measure of perceived autonomy-support in a virtual care setting. Items were developed based on existing questionnaires and expert consultation and were pretested among experts and target populations. The virtual climate care questionnaire was administered in relation to Web-based interventions aimed at reducing consumption of alcohol (Study 1; N=230) or cannabis (Study 2; N=228). Item properties, structural validity, and reliability were examined with item-response and classical test theory methods, and convergent and divergent validity via correlations with relevant concepts. In Study 1, 20 of 23 items formed a one-dimensional scale (alpha=.97; omega=.97; H=.66; mean 4.9 [SD 1.0]; range 1-7) that met the assumptions of monotonicity and invariant item ordering. In Study 2, 16 items fitted these criteria (alpha=.92; H=.45; omega=.93; mean 4.2 [SD 1.1]; range 1-7). Only 15 items remained in the questionnaire in both studies, thus we proceeded to the analyses of the questionnaire's reliability and construct validity with a 15-item version of the virtual climate care questionnaire. Convergent validity of the resulting 15-item virtual climate care questionnaire was confirmed by positive associations with autonomous motivation (Study 1: r=.66, Pperceived competence for reducing alcohol intake (Study 1: r=.52, Pperceived competence for learning (Study 2: r=.05, P=.48). The virtual climate care questionnaire accurately assessed participants' perceived autonomy-support offered by two Web-based health behavior change interventions. Overall, the scale showed the expected properties
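    The alpha reliability coefficients quoted above follow the standard Cronbach formula, alpha = k/(k-1) * (1 - Σ item variances / total variance). The sketch below computes it on invented data (three perfectly parallel items, giving alpha = 1.0), not the study's questionnaire responses.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a k-item scale.
    items: list of per-item score lists, all of equal length (respondents)."""
    k = len(items)
    n = len(items[0])

    def var(xs):                      # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Invented data: three identical (perfectly correlated) items
items = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]
alpha = cronbach_alpha(items)   # 1.0 for perfectly parallel items
```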

  11. Validating the Copenhagen Psychosocial Questionnaire (COPSOQ-II) Using Set-ESEM: Identifying Psychosocial Risk Factors in a Sample of School Principals.

    Science.gov (United States)

    Dicke, Theresa; Marsh, Herbert W; Riley, Philip; Parker, Philip D; Guo, Jiesi; Horwood, Marcus

    2018-01-01

    School principals world-wide report high levels of strain and attrition resulting in a shortage of qualified principals. It is thus crucial to identify psychosocial risk factors that reflect principals' occupational wellbeing. For this purpose, we used the Copenhagen Psychosocial Questionnaire (COPSOQ-II), a widely used self-report measure covering multiple psychosocial factors identified by leading occupational stress theories. We evaluated the COPSOQ-II regarding factor structure and longitudinal, discriminant, and convergent validity using latent structural equation modeling in a large sample of Australian school principals ( N = 2,049). Results reveal that confirmatory factor analysis produced marginally acceptable model fit. A novel approach we call set exploratory structural equation modeling (set-ESEM), where cross-loadings were only allowed within a priori defined sets of factors, fit well, and was more parsimonious than a full ESEM. Further multitrait-multimethod models based on the set-ESEM confirm the importance of a principal's psychosocial risk factors; Stressors and depression were related to demands and ill-being, while confidence and autonomy were related to wellbeing. We also show that working in the private sector was beneficial for showing a low psychosocial risk, while other demographics have little effects. Finally, we identify five latent risk profiles (high risk to no risk) of school principals based on all psychosocial factors. Overall the research presented here closes the theory application gap of a strong multi-dimensional measure of psychosocial risk-factors.

  12. Fuzzy risk matrix

    International Nuclear Information System (INIS)

    Markowski, Adam S.; Mannan, M. Sam

    2008-01-01

    A risk matrix is a mechanism to characterize and rank process risks that are typically identified through one or more multifunctional reviews (e.g., process hazard analysis, audits, or incident investigation). This paper describes a procedure for developing a fuzzy risk matrix that may be used for emerging fuzzy logic applications in different safety analyses (e.g., LOPA). The fuzzification of the frequency and the severity of the consequences of the incident scenario, which are the basic inputs for a fuzzy risk matrix, is described. Subsequently, using different risk matrix designs, fuzzy rules are established, enabling the development of fuzzy risk matrices. Three types of fuzzy risk matrix have been developed (low-cost, standard, and high-cost), and using a distillation column case study, the effect of the design on the final defuzzified risk index is demonstrated.
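As a rough illustration of the fuzzification and rule-firing steps described above (not the authors' implementation; the membership functions and rule base are made up), one cell of a fuzzy risk matrix can be sketched as:

```python
# Sketch of a fuzzy risk inference: fuzzify frequency and severity with
# triangular membership functions, fire min-rules, pick the strongest label.

def tri(x, a, b, c):
    """Triangular membership: rises from a to peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms on a 0-10 scale.
FREQ = {"low": (0, 2, 5), "high": (5, 8, 10)}
SEV  = {"minor": (0, 2, 5), "major": (5, 8, 10)}

# Hypothetical rule base: (frequency term, severity term) -> risk label.
RULES = {("low", "minor"): "low", ("low", "major"): "medium",
         ("high", "minor"): "medium", ("high", "major"): "high"}

def risk(freq, sev):
    fired = {}
    for (f, s), label in RULES.items():
        w = min(tri(freq, *FREQ[f]), tri(sev, *SEV[s]))  # AND = min
        fired[label] = max(fired.get(label, 0.0), w)     # OR  = max
    return max(fired, key=fired.get)                     # crisp risk label

print(risk(8.0, 9.0))  # -> high
```

A full implementation would defuzzify (e.g., by centroid) to a numeric risk index rather than returning only the strongest label.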

  13. A matrix problem over a discrete valuation ring

    International Nuclear Information System (INIS)

    Zavadskii, A G; Revitskaya, U S

    1999-01-01

    A flat matrix problem of mixed type (over a discrete valuation ring and its skew field of fractions) is considered which naturally arises in connection with several problems in the theory of integer-valued representations and in ring theory. For this problem, a criterion for module boundedness is proved, which is stated in terms of a pair of partially ordered sets (P(A),P(B)) associated with the pair of transforming algebras (A,B) defining the problem. The corresponding statement coincides in effect with the formulation of Kleiner's well-known finite-type criterion for representations of pairs of partially ordered sets over a field. The proof is based on a reduction (which uses the techniques of differentiation) to representations of semimaximal rings (tiled orders) and partially ordered sets

  14. CSNI Integral Test Facility Matrices for Validation of Best-Estimate Thermal-Hydraulic Computer Codes

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Internationally agreed Integral Test Facility (ITF) matrices for validation of realistic thermal-hydraulic system computer codes were established. The ITF matrices were developed mainly for Pressurised Water Reactors (PWRs) and Boiling Water Reactors (BWRs); a separate activity covered Russian Pressurised Water-cooled and Water-moderated Energy Reactors (WWERs). Firstly, the main physical phenomena that occur during the considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. In this paper some specific examples from the ITF matrices are also provided. The matrices will be a guide for code validation, a basis for comparisons of code predictions performed with different system codes, and a contribution to the quantification of the uncertainty range of code model predictions. In addition to this objective, the construction of such a matrix is an attempt to record information that has been generated around the world over recent years, so that it is more accessible to present and future workers in the field than would otherwise be the case.

  15. Quantifying intracellular metabolites in yeast using a matrix with minimal interference from naturally occurring analytes

    DEFF Research Database (Denmark)

    Magdenoska, Olivera; Knudsen, Peter Boldsen; Svenssen, Daniel Killerup

    2015-01-01

    in [13C6]glucose/nonlabeled glucose (50:50, w/w) growth medium. The areas of both 12C6 and 13C6 fractions of ATP in the matrix were measured to be 2% of the sum of the areas of all ATP isotopes detected. The matrix allowed for spiking of both the nonlabeled and SIL-ISs and more straightforward validation...... of the redox compounds was challenging due to the oxidation of NADH and NADPH, when dissolved in water or tributylamine. The oxidation was reduced by dissolving them in ammonium acetate solution (pH 8.0)....

  16. A random-matrix theory of the number sense.

    Science.gov (United States)

    Hannagan, T; Nieder, A; Viswanathan, P; Dehaene, S

    2017-02-19

    Number sense, a spontaneous ability to process approximate numbers, has been documented in human adults, infants and newborns, and many other animals. Species as distant as monkeys and crows exhibit very similar neurons tuned to specific numerosities. How number sense can emerge in the absence of learning or fine tuning is currently unknown. We introduce a random-matrix theory of self-organized neural states where numbers are coded by vectors of activation across multiple units, and where the vector codes for successive integers are obtained through multiplication by a fixed but random matrix. This cortical implementation of the 'von Mises' algorithm explains many otherwise disconnected observations ranging from neural tuning curves in monkeys to looking times in neonates and cortical numerotopy in adults. The theory clarifies the origin of Weber-Fechner's Law and yields a novel and empirically validated prediction of multi-peak number neurons. Random matrices constitute a novel mechanism for the emergence of brain states coding for quantity. This article is part of a discussion meeting issue 'The origins of numerical abilities'. © 2017 The Author(s).
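The core coding scheme in this abstract (codes for successive integers obtained by repeated multiplication with a fixed random matrix) can be caricatured in a few lines. This is a toy illustration only, not the authors' full model; the matrix scale, start state, and similarity measure are assumptions:

```python
import random

# Toy sketch: the code for integer n+1 is v_{n+1} = M v_n, with M a fixed
# random matrix drawn once. Codes can then be compared by cosine similarity.
random.seed(0)
N = 50  # number of units in the population code

M = [[random.gauss(0, 1 / N ** 0.5) for _ in range(N)] for _ in range(N)]

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def code(n):
    """Population vector coding the integer n."""
    v = [1.0] * N  # arbitrary start state standing in for the code of zero
    for _ in range(n):
        v = matvec(M, v)
    return v

def similarity(u, v):
    """Cosine similarity between two population codes."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

print(similarity(code(3), code(4)), similarity(code(3), code(9)))
```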

  17. Data Set for Empirical Validation of Double Skin Facade Model

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per

    2008-01-01

    During the recent years the attention to the double skin facade (DSF) concept has greatly increased. Nevertheless, the application of the concept depends on whether a reliable model for simulation of the DSF performance will be developed or pointed out. This is, however, not possible to do, until...... the International Energy Agency (IEA) Task 34 Annex 43. This paper describes the full-scale outdoor experimental test facility ‘the Cube', where the experiments were conducted, the experimental set-up and the measurements procedure for the data sets. The empirical data is composed for the key-functioning modes...

  18. Tissue specificity of the hormonal response in sex accessory tissues is associated with nuclear matrix protein patterns.

    Science.gov (United States)

    Getzenberg, R H; Coffey, D S

    1990-09-01

    The DNA of interphase nuclei has very specific three-dimensional organizations that differ between cell types, and it is possible that this varying DNA organization is responsible for the tissue specificity of gene expression. The nuclear matrix organizes the three-dimensional structure of the DNA and is believed to be involved in the control of gene expression. This study compares the nuclear structural proteins between two sex accessory tissues in the same animal responding to the same androgen stimulation by the differential expression of major tissue-specific secretory proteins. We demonstrate here that the nuclear matrix is tissue specific in the rat ventral prostate and seminal vesicle, and undergoes characteristic alterations in its protein composition upon androgen withdrawal. Three types of nuclear matrix proteins were observed: 1) nuclear matrix proteins that are different and tissue specific in the rat ventral prostate and seminal vesicle, 2) a set of nuclear matrix proteins that either appear or disappear upon androgen withdrawal, and 3) a set of proteins that are common to both the ventral prostate and seminal vesicle and do not change with the hormonal state of the animal. Since the nuclear matrix is known to bind androgen receptors in a tissue- and steroid-specific manner, we propose that the tissue specificity of the nuclear matrix arranges the DNA in a unique conformation, which may be involved in the specific interaction of transcription factors with DNA sequences, resulting in tissue-specific patterns of secretory protein expression.

  19. Molecular dynamics simulations of matrix assisted laser desorption ionization: Matrix-analyte interactions

    International Nuclear Information System (INIS)

    Nangia, Shivangi; Garrison, Barbara J.

    2011-01-01

    There is synergy between matrix assisted laser desorption ionization (MALDI) experiments and molecular dynamics (MD) simulations. To understand analyte ejection from the matrix, MD simulations have been employed. Prior calculations show that the ejected analyte molecules remain solvated by the matrix molecules in the ablated plume. In contrast, the experimental data show free analyte ions. The main idea of this work is that analyte molecule ejection may depend on the microscopic details of analyte interaction with the matrix. Intermolecular matrix-analyte interactions have been studied by focusing on 2,5-dihydroxybenzoic acid (DHB; matrix) and amino acids (AA; analyte) using Chemistry at HARvard Molecular Mechanics (CHARMM) force field. A series of AA molecules have been studied to analyze the DHB-AA interaction. A relative scale of AA molecule affinity towards DHB has been developed.

  20. Higher genus correlators for the hermitian matrix model with multiple cuts

    International Nuclear Information System (INIS)

    Akemann, G.

    1996-01-01

    An iterative scheme is set up for solving the loop equation of the hermitian one-matrix model with a multi-cut structure. Explicit results are presented for genus one for an arbitrary but finite number of cuts. Due to the complicated form of the boundary conditions, the loop correlators now contain elliptic integrals. This demonstrates the existence of new universality classes for the hermitian matrix model. The two-cut solution is investigated in more detail, including the double scaling limit. It is shown that in special cases it differs from the known continuum solution with one cut. (orig.)

  1. Construct Validity of Medical Clinical Competence Measures: A Multitrait-Multimethod Matrix Study Using Confirmatory Factor Analysis.

    Science.gov (United States)

    Forsythe, George B.; And Others

    1986-01-01

    Construct validity was investigated for three tests of clinical competence in medicine: National Board of Medical Examiners examination (NBME), California Psychological Inventory (CPI), and Resident Evaluation Form (REF). Scores from 166 residents were analyzed. Results suggested low construct validity for CPI and REF scales, and moderate…

  2. System of multifunctional Jones matrix tomography of phase anisotropy in diagnostics of endometriosis

    Science.gov (United States)

    Ushenko, V. O.; Koval, G. D.; Ushenko, Yu. O.; Pidkamin, L. Y.; Sidor, M. I.; Vanchuliak, O.; Motrich, A. V.; Gorsky, M. P.; Meglinskiy, I.

    2017-09-01

    The paper presents the results of Jones-matrix mapping of uterine wall histological sections with second-degree and third-degree endometriosis. The technique of experimental measurement of coordinate distributions of the modulus and phase values of Jones matrix elements is suggested. Within the statistical and cross-correlation approaches the modulus and phase maps of Jones matrix images of optically thin biological layers of polycrystalline films of plasma and cerebrospinal fluid are analyzed. A set of objective parameters (statistical and generalized correlation moments), which are the most sensitive to changes in the phase of anisotropy, associated with the features of polycrystalline structure of uterine wall histological sections with second-degree and third-degree endometriosis are determined.

  3. Thermal and mechanical behavior of metal matrix and ceramic matrix composites

    Science.gov (United States)

    Kennedy, John M. (Editor); Moeller, Helen H. (Editor); Johnson, W. S. (Editor)

    1990-01-01

    The present conference discusses local stresses in metal-matrix composites (MMCs) subjected to thermal and mechanical loads, the computational simulation of high-temperature MMCs' cyclic behavior, an analysis of a ceramic-matrix composite (CMC) flexure specimen, and a plasticity analysis of fibrous composite laminates under thermomechanical loads. Also discussed are a comparison of methods for determining the fiber-matrix interface frictional stresses of CMCs, the monotonic and cyclic behavior of an SiC/calcium aluminosilicate CMC, the mechanical and thermal properties of an SiC particle-reinforced Al alloy MMC, the temperature-dependent tensile and shear response of a graphite-reinforced 6061 Al-alloy MMC, the fiber/matrix interface bonding strength of MMCs, and fatigue crack growth in an Al2O3 short fiber-reinforced Al-2Mg matrix MMC.

  4. Study of ionization process of matrix molecules in matrix-assisted laser desorption ionization

    Energy Technology Data Exchange (ETDEWEB)

    Murakami, Kazumasa; Sato, Asami; Hashimoto, Kenro; Fujino, Tatsuya, E-mail: fujino@tmu.ac.jp

    2013-06-20

    Highlights: ► Proton transfer and adduction reaction of matrix in MALDI were studied. ► Hydroxyl group forming intramolecular hydrogen bond was related to the ionization. ► Intramolecular proton transfer in the electronic excited state was the initial step. ► Non-volatile analytes stabilized protonated matrix in the ground state. ► A possible mechanism, “analyte support mechanism”, has been proposed. - Abstract: Proton transfer and adduction reaction of matrix molecules in matrix-assisted laser desorption ionization were studied. By using 2,4,6-trihydroxyacetophenone (THAP), 2,5-dihydroxybenzoic acid (DHBA), and their related compounds in which the position of a hydroxyl group is different, it was clarified that a hydroxyl group forming an intramolecular hydrogen bond is related to the ionization of matrix molecules. Intramolecular proton transfer in the electronic excited state of the matrix and subsequent proton adduction from a surrounding solvent to the charge-separated matrix are the initial steps for the ionization of matrix molecules. Nanosecond pump–probe NIR–UV mass spectrometry confirmed that the existence of analyte molecules having large dipole moment in their structures is necessary for the stabilization of [matrix + H]+ in the electronic ground state.

  5. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model

  6. Measurement of the top quark mass in the dilepton final state using the matrix element method

    Energy Technology Data Exchange (ETDEWEB)

    Grohsjean, Alexander [Ludwig Maximilian Univ., Munich (Germany)

    2008-12-15

    The top quark, discovered in 1995 by the CDF and D0 experiments at the Fermilab Tevatron Collider, is the heaviest known fundamental particle. The precise knowledge of its mass yields important constraints on the mass of the yet-unobserved Higgs boson and allows one to probe for physics beyond the Standard Model. The first measurement of the top quark mass in the dilepton channel with the Matrix Element method at the D0 experiment is presented. After a short description of the experimental environment and the reconstruction chain from hits in the detector to physical objects, a detailed review of the Matrix Element method is given. The Matrix Element method is based on the likelihood to observe a given event under the assumption of the quantity to be measured, e.g. the mass of the top quark. The method has undergone significant modifications and improvements compared to previous measurements in the lepton+jets channel: the two undetected neutrinos require a new reconstruction scheme for the four-momenta of the final state particles, the small event sample demands the modeling of additional jets in the signal likelihood, and a new likelihood is designed to account for the main source of background containing tauonic Z decay. The Matrix Element method is validated on Monte Carlo simulated events at the generator level. For the measurement, calibration curves are derived from events that are run through the full D0 detector simulation. The analysis makes use of the Run II data set recorded between April 2002 and May 2008, corresponding to an integrated luminosity of 2.8 fb^-1. A total of 107 ttbar candidate events with one electron and one muon in the final state are selected. Applying the Matrix Element method to this data set, the top quark mass is measured to be m_top(Run IIa) = 170.6 ± 6.1 (stat.) +2.1/-1.5 (syst.) GeV; m_top(Run IIb) = 174.1 ± 4.4 (stat.) +2.5/-1.8 (syst.) GeV; m

  7. Unified continuum damage model for matrix cracking in composite rotor blades

    Energy Technology Data Exchange (ETDEWEB)

    Pollayi, Hemaraju; Harursampath, Dineshkumar [Nonlinear Multifunctional Composites - Analysis and Design Lab (NMCAD Lab) Department of Aerospace Engineering Indian Institute of Science Bangalore - 560012, Karnataka (India)

    2015-03-10

    This paper deals with modeling of the first damage mode, matrix micro-cracking, in helicopter rotor/wind turbine blades and how this affects the overall cross-sectional stiffness. The helicopter/wind turbine rotor system operates in a highly dynamic and unsteady environment leading to severe vibratory loads present in the system. Repeated exposure to this loading condition can induce damage in the composite rotor blades. These rotor/turbine blades are generally made of fiber-reinforced laminated composites and exhibit various competing modes of damage such as matrix micro-cracking, delamination, and fiber breakage. There is a need to study the behavior of the composite rotor system under various key damage modes in composite materials for developing a Structural Health Monitoring (SHM) system. Each blade is modeled as a beam based on geometrically non-linear 3-D elasticity theory. Each blade thus splits into 2-D analyses of cross-sections and non-linear 1-D analyses along the beam reference curves. Two different tools are used here for complete 3-D analysis: VABS for 2-D cross-sectional analysis and GEBT for 1-D beam analysis. The physically-based failure models for matrix in compression and tension loading are used in the present work. Matrix cracking is detected using two failure criteria, Matrix Failure in Compression and Matrix Failure in Tension, which are based on the recovered field. A strain variable is set which drives the damage variable for matrix cracking, and this damage variable is used to estimate the reduced cross-sectional stiffness. The matrix micro-cracking is modeled using two different approaches: (i) element-wise, and (ii) node-wise. The procedure presented in this paper is implemented in VABS as a matrix micro-cracking modeling module. Three examples are presented to investigate the matrix failure model, illustrating the effect of matrix cracking on cross-sectional stiffness by varying the applied cyclic load.

  8. Unified continuum damage model for matrix cracking in composite rotor blades

    International Nuclear Information System (INIS)

    Pollayi, Hemaraju; Harursampath, Dineshkumar

    2015-01-01

    This paper deals with modeling of the first damage mode, matrix micro-cracking, in helicopter rotor/wind turbine blades and how this affects the overall cross-sectional stiffness. The helicopter/wind turbine rotor system operates in a highly dynamic and unsteady environment leading to severe vibratory loads present in the system. Repeated exposure to this loading condition can induce damage in the composite rotor blades. These rotor/turbine blades are generally made of fiber-reinforced laminated composites and exhibit various competing modes of damage such as matrix micro-cracking, delamination, and fiber breakage. There is a need to study the behavior of the composite rotor system under various key damage modes in composite materials for developing a Structural Health Monitoring (SHM) system. Each blade is modeled as a beam based on geometrically non-linear 3-D elasticity theory. Each blade thus splits into 2-D analyses of cross-sections and non-linear 1-D analyses along the beam reference curves. Two different tools are used here for complete 3-D analysis: VABS for 2-D cross-sectional analysis and GEBT for 1-D beam analysis. The physically-based failure models for matrix in compression and tension loading are used in the present work. Matrix cracking is detected using two failure criteria, Matrix Failure in Compression and Matrix Failure in Tension, which are based on the recovered field. A strain variable is set which drives the damage variable for matrix cracking, and this damage variable is used to estimate the reduced cross-sectional stiffness. The matrix micro-cracking is modeled using two different approaches: (i) element-wise, and (ii) node-wise. The procedure presented in this paper is implemented in VABS as a matrix micro-cracking modeling module. Three examples are presented to investigate the matrix failure model, illustrating the effect of matrix cracking on cross-sectional stiffness by varying the applied cyclic load.
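The stiffness knockdown described in these two records (a strain measure driving a damage variable, which in turn reduces the cross-sectional stiffness) can be caricatured in a few lines. This is not the VABS/GEBT implementation; the linear damage law and the onset/failure strains are hypothetical:

```python
# Sketch of a continuum-damage stiffness knockdown: a scalar damage
# variable d, driven by a matrix-cracking strain measure, degrades the
# cross-sectional stiffness as K_damaged = (1 - d) * K_0.

def damage_variable(strain, onset=0.002, failure=0.01):
    """0 below the cracking-onset strain, growing linearly to 1 at failure."""
    if strain <= onset:
        return 0.0
    return min(1.0, (strain - onset) / (failure - onset))

def degraded_stiffness(K0, strain):
    """Reduced stiffness after matrix micro-cracking at the given strain."""
    return (1.0 - damage_variable(strain)) * K0

print(degraded_stiffness(100.0, 0.006))  # halfway between onset and failure
```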

  9. Validation of the GROMOS force-field parameter set 45A3 against nuclear magnetic resonance data of hen egg lysozyme

    International Nuclear Information System (INIS)

    Soares, T. A.; Daura, X.; Oostenbrink, C.; Smith, L. J.; Gunsteren, W. F. van

    2004-01-01

    The quality of molecular dynamics (MD) simulations of proteins depends critically on the biomolecular force field that is used. Such force fields are defined by force-field parameter sets, which are generally determined and improved through calibration of properties of small molecules against experimental or theoretical data. By application to large molecules such as proteins, a new force-field parameter set can be validated. We report two 3.5 ns molecular dynamics simulations of hen egg white lysozyme in water applying the widely used GROMOS force-field parameter set 43A1 and a new set 45A3. The two MD ensembles are evaluated against NMR spectroscopic data: NOE atom-atom distance bounds, ³J(NHα) and ³J(αβ) coupling constants, and ¹⁵N relaxation data. It is shown that the two sets reproduce structural properties about equally well. The 45A3 ensemble fulfills the atom-atom distance bounds derived from NMR spectroscopy slightly less well than the 43A1 ensemble, with most of the NOE distance violations in both ensembles involving residues located in loops or flexible regions of the protein. Convergence patterns are very similar in both simulations: atom-positional root-mean-square differences (RMSD) with respect to the X-ray and NMR model structures and NOE inter-proton distances converge within 1.0-1.5 ns, while backbone ³J(HNα) coupling constants and ¹H-¹⁵N order parameters take slightly longer, 1.0-2.0 ns. As expected, side-chain ³J(αβ) coupling constants and ¹H-¹⁵N order parameters do not reach full convergence for all residues in the time period simulated. This is particularly noticeable for side chains which display rare structural transitions. When comparing each simulation trajectory with an older and a newer set of experimental NOE data on lysozyme, it is found that the newer, larger set of experimental data agrees as well with each of the simulations. In other words, the experimental data converged towards the theoretical result
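The backbone ³J(HNα) couplings compared against the simulations above are typically estimated from φ dihedral angles sampled along the MD trajectory via a Karplus relation; a sketch follows, with one common parameterization for ³J(HN,Hα) — the coefficients and the sample angles are assumptions, not values from this paper:

```python
import math

# Karplus relation sketch: 3J(theta) = A cos^2(theta) + B cos(theta) + C,
# with theta = phi - 60 degrees for the backbone 3J(HN,Halpha) coupling.
# Coefficients are one common literature parameterization (assumption).

def karplus(phi_deg, A=6.51, B=-1.76, C=1.60):
    """Estimated 3J(HN,Halpha) coupling in Hz for a backbone phi angle."""
    t = math.radians(phi_deg - 60.0)
    return A * math.cos(t) ** 2 + B * math.cos(t) + C

# Trajectory average over hypothetical sampled phi angles (helical region).
phis = [-65.0, -70.0, -60.0]
print(sum(karplus(p) for p in phis) / len(phis))
```

In a validation study the trajectory-averaged couplings would be compared against the measured NMR values residue by residue.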

  10. Wiener-Hopf factorization of piecewise meromorphic matrix-valued functions

    International Nuclear Information System (INIS)

    Adukov, Victor M

    2009-01-01

    Let D+ be a multiply connected domain bounded by a contour Γ, let D− be the complement of D+ ∪ Γ in C̄ = C ∪ {∞}, and let a(t) be a continuous invertible matrix-valued function on Γ that can be meromorphically extended into the open disconnected set D− (as a piecewise meromorphic matrix-valued function). An explicit solution of the Wiener-Hopf factorization problem for a(t) is obtained and the partial factorization indices of a(t) are calculated. Here an explicit solution of a factorization problem is meant in the sense of reducing it to the investigation of finitely many systems of linear algebraic equations with matrices expressed in closed form, that is, in quadratures. Bibliography: 15 titles.

  11. Evaporation measurement in the validation drift - part 1

    International Nuclear Information System (INIS)

    Watanabe, Kunio

    1991-01-01

    The evaporation rate distribution over the wall surface of the validation drift was mapped in detail using newly developed equipment. The evaporation measurement was carried out to clarify the spatial variability of the inflow rate of groundwater seeping toward the tunnel. Air in the tunnel was warmed by an electric heater during the measurement period to reduce the relative humidity of the air and to dry the wall surface. Evaporation rates from the rock matrix as well as from some major fractures were measured at about 500 points. Spatial distributions of evaporation rates over the tunnel wall were obtained under two different ventilation conditions. The average evaporation rates from the rock matrix of the wall were 0.29-0.35 mg/m²/s under these ventilation conditions. The average evaporation rate measured on some major fractures was about 1.3 mg/m²/s. The maximum evaporation rate measured was 12.8 mg/m²/s. Some spots of high evaporation rate were clearly found along some major fractures, and these spots seemed to be special seepage paths (channels) developed in those fractures. The fracture flow is relatively small compared with the matrix flow in the inner part of the drift. This measurement was performed about 1 month after the excavation of the validation drift. Groundwater flow around the tunnel might not have been in a steady state because the period between tunnel excavation and the measurement was short. The evaporation rate distribution under steady-state groundwater flow will be studied in 1991. (au)

  12. Phenotypic identification of Porphyromonas gingivalis validated with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry

    NARCIS (Netherlands)

    Rams, Thomas E; Sautter, Jacqueline D; Getreu, Adam; van Winkelhoff, Arie J

    OBJECTIVE: Porphyromonas gingivalis is a major bacterial pathogen in human periodontitis. This study used matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry to assess the accuracy of a rapid phenotypic identification scheme for detection of cultivable P.

  13. FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.

    Science.gov (United States)

    Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver

    2014-06-14

    Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy is hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software which needs to parse quickly and accurately large amounts of sequence data. For end-users FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualifies it for large data sets such as those commonly produced by massive parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
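The validation that FastaValidator performs can be illustrated with a deliberately minimal Python sketch (FastaValidator itself is a Java library with a far more complete grammar; the nucleotide alphabet below is the standard IUPAC code set plus the gap character):

```python
# Minimal FASTA validation sketch: a valid file starts with a '>' header,
# every header is followed by sequence lines, and sequence lines contain
# only recognized characters.

VALID_NUC = set("ACGTUNRYKMSWBDHV-")  # IUPAC nucleotide codes plus gap

def validate_fasta(text):
    """Return (ok, message) for a FASTA-formatted string."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines or not lines[0].startswith(">"):
        return False, "must start with a '>' header line"
    saw_seq = False
    for ln in lines:
        if ln.startswith(">"):
            saw_seq = False
        else:
            bad = set(ln.upper()) - VALID_NUC
            if bad:
                return False, f"invalid characters: {sorted(bad)}"
            saw_seq = True
    return (True, "ok") if saw_seq else (False, "header without sequence")

print(validate_fasta(">seq1\nACGTACGT\n>seq2\nGGTTAA"))  # -> (True, 'ok')
```

A production validator would also stream the input rather than hold it in memory, which is the scalability point the abstract makes about NGS-sized files.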

  14. Short-distance matrix elements for $D$-meson mixing for 2+1 lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chia Cheng [Univ. of Illinois, Champaign, IL (United States)

    2015-01-01

    We study the short-distance hadronic matrix elements for D-meson mixing with partially quenched Nf = 2+1 lattice QCD. We use a large set of the MIMD Lattice Computation (MILC) Collaboration's gauge configurations with a²-tadpole-improved staggered sea quarks and tadpole-improved Lüscher-Weisz gluons. We use the a²-tadpole-improved action for valence light quarks and the Sheikholeslami-Wohlert action with the Fermilab interpretation for the valence charm quark. Our calculation covers the complete set of five operators needed to constrain new physics models for D-meson mixing. We match our matrix elements to the MS-NDR scheme evaluated at 3 GeV. We report values for the Beneke-Buchalla-Greub-Lenz-Nierste choice of evanescent operators.

  15. Validation of the Self Reporting Questionnaire 20-Item (SRQ-20) for Use in a Low- and Middle-Income Country Emergency Centre Setting

    Science.gov (United States)

    Wyatt, Gail; Williams, John K.; Stein, Dan J.; Sorsdahl, Katherine

    2015-01-01

    Common mental disorders are highly prevalent in emergency centre (EC) patients, yet few brief screening tools have been validated for low- and middle-income country (LMIC) ECs. This study explored the psychometric properties of the SRQ-20 screening tool in South African ECs using the Mini Neuropsychiatric Interview (MINI) as the gold standard comparison tool. Patients (n=200) from two ECs in Cape Town, South Africa were interviewed using the SRQ-20 and the MINI. Internal consistency, screening properties and factorial validity were examined. The SRQ-20 was effective in identifying participants with major depression, anxiety disorders or suicidality and displayed good internal consistency. The optimal cutoff scores were 4/5 and 6/7 for men and women respectively. The factor structure differed by gender. The SRQ-20 is a useful tool for EC settings in South Africa and holds promise for task-shifted approaches to decreasing the LMIC burden of mental disorders. PMID:26957953
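Applying the gender-specific cutoffs reported above (4/5 for men, 6/7 for women, i.e. a positive screen above a score of 4 or 6 respectively) is mechanically simple; a small helper sketch, for illustration only and not a clinical instrument:

```python
# Sketch of SRQ-20 screening with the gender-specific cutoffs reported in
# the abstract: a respondent screens positive above 4 (men) or 6 (women).

CUTOFF = {"male": 4, "female": 6}  # scores strictly above flag as positive

def srq20_screen(item_responses, gender):
    """item_responses: 20 yes/no answers coded 1/0; returns (score, positive)."""
    assert len(item_responses) == 20, "SRQ-20 has exactly 20 items"
    score = sum(item_responses)
    return score, score > CUTOFF[gender]

score, flag = srq20_screen([1] * 7 + [0] * 13, "male")
print(score, flag)  # -> 7 True
```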

  16. Matrix albedo for discrete ordinates infinite-medium boundary condition

    International Nuclear Information System (INIS)

    Mathews, K.; Dishaw, J.

    2007-01-01

    Discrete ordinates problems with an infinite exterior medium (reflector) can be more efficiently computed by eliminating grid cells in the exterior medium and applying a matrix albedo boundary condition. The albedo matrix is a discretized bidirectional reflection distribution function (BRDF) that accounts for the angular quadrature set, spatial quadrature method, and spatial grid that would have been used to model a portion of the exterior medium. The method is exact in slab geometry, and could be used as an approximation in multiple dimensions or curvilinear coordinates. We present an adequate method for computing albedo matrices and demonstrate their use in verifying a discrete ordinates code in slab geometry by comparison with Ganapol's infinite medium semi-analytic TIEL benchmark. With sufficient resolution in the spatial and angular grids and iteration tolerance to yield solutions converged to 6 digits, the conventional (scalar) albedo boundary condition yielded 2-digit accuracy at the boundary, but the matrix albedo solution reproduced the benchmark scalar flux at the boundary to all 6 digits. (authors)
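The matrix albedo condition described above replaces the exterior-medium grid cells with a linear reflection operator: the incoming angular fluxes at the boundary are a BRDF-like combination of the outgoing ones, psi_in = A psi_out, rather than one scalar albedo per direction. A toy sketch with a hypothetical two-ordinate quadrature and made-up matrix entries:

```python
# Toy sketch of a matrix albedo boundary condition: incoming angular
# fluxes are a linear combination of outgoing ones, psi_in = A @ psi_out,
# instead of a single scalar albedo applied per direction.

def apply_matrix_albedo(A, psi_out):
    """Map outgoing ordinate fluxes to incoming ones via the albedo matrix."""
    return [sum(a_ij * p for a_ij, p in zip(row, psi_out)) for row in A]

# Hypothetical 2x2 albedo matrix coupling two outgoing ordinates.
A = [[0.55, 0.25],
     [0.20, 0.60]]
psi_out = [1.0, 0.5]

print(apply_matrix_albedo(A, psi_out))
```

The off-diagonal entries are what a scalar albedo cannot represent: reflection from one ordinate into another, which is why the matrix form reproduces the infinite-medium benchmark so much more accurately at the boundary.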

  17. General factorization relations and consistency conditions in the sudden approximation via infinite matrix inversion

    International Nuclear Information System (INIS)

    Chan, C.K.; Hoffman, D.K.; Evans, J.W.

    1985-01-01

    Local, i.e., multiplicative, operators satisfy well-known linear factorization relations wherein matrix elements (between states associated with a complete set of wave functions) can be obtained as a linear combination of those out of the ground state (the input data). Analytic derivation of factorization relations for general state input data results in singular integral expressions for the coefficients, which can, however, be regularized using consistency conditions between matrix elements out of a single (nonground) state. Similar results hold for suitable "symmetry class" averaged matrix elements where the symmetry class projection operators are "complete". In several cases where the wave functions or projection operators incorporate orthogonal polynomial dependence, we show that the ground state factorization relations have a simplified structure allowing an alternative derivation of the general factorization relations via an infinite matrix inversion procedure. This form is shown to have some advantages over previous versions. In addition, this matrix inversion procedure obtains all consistency conditions (which is not always the case from regularization of singular integrals)

  18. Generalized algebra-valued models of set theory

    NARCIS (Netherlands)

    Löwe, B.; Tarafder, S.

    2015-01-01

    We generalize the construction of lattice-valued models of set theory due to Takeuti, Titani, Kozawa and Ozawa to a wider class of algebras and show that this yields a model of a paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel set theory.

  19. Multi-threaded Sparse Matrix-Matrix Multiplication for Many-Core and GPU Architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    Sparse matrix-matrix multiplication is a key kernel with applications in several domains, such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high-performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
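
    The accumulator choice the paper benchmarks can be illustrated with the classic Gustavson row-by-row SpGEMM using a dense accumulator (a hash map is the usual alternative for very wide rows). This is a plain-Python sketch with made-up CSR matrices, not the kkSpGEMM implementation:

```python
# C = A @ B row by row: expand each row of A against rows of B into a dense
# accumulator, then gather the touched columns. CSR matrices are
# (indptr, indices, data) triples; the example values are made up.

def spgemm_csr(a, b, n_cols_b):
    """Multiply two CSR matrices, returning CSR (indptr, indices, data)."""
    a_ptr, a_idx, a_val = a
    b_ptr, b_idx, b_val = b
    c_ptr, c_idx, c_val = [0], [], []
    acc = [0.0] * n_cols_b            # dense accumulator, reused per row
    for i in range(len(a_ptr) - 1):
        touched = []
        for jj in range(a_ptr[i], a_ptr[i + 1]):
            j, aij = a_idx[jj], a_val[jj]
            for kk in range(b_ptr[j], b_ptr[j + 1]):
                k = b_idx[kk]
                if acc[k] == 0.0:     # production code uses a flag array so
                    touched.append(k) # exact cancellation to 0.0 is handled
                acc[k] += aij * b_val[kk]
        for k in sorted(touched):
            c_idx.append(k)
            c_val.append(acc[k])
            acc[k] = 0.0              # reset for the next row
        c_ptr.append(len(c_idx))
    return c_ptr, c_idx, c_val

# A = [[1, 0], [2, 3]], B = [[0, 4], [5, 0]] in CSR form.
A = ([0, 1, 3], [0, 0, 1], [1.0, 2.0, 3.0])
B = ([0, 1, 2], [1, 0], [4.0, 5.0])
print(spgemm_csr(A, B, 2))  # C = [[0, 4], [15, 8]]
```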

  20. The accuracy of SST retrievals from AATSR: An initial assessment through geophysical validation against in situ radiometers, buoys and other SST data sets

    Science.gov (United States)

    Corlett, G. K.; Barton, I. J.; Donlon, C. J.; Edwards, M. C.; Good, S. A.; Horrocks, L. A.; Llewellyn-Jones, D. T.; Merchant, C. J.; Minnett, P. J.; Nightingale, T. J.; Noyes, E. J.; O'Carroll, A. G.; Remedios, J. J.; Robinson, I. S.; Saunders, R. W.; Watts, J. G.

    The Advanced Along-Track Scanning Radiometer (AATSR) was launched on Envisat in March 2002. The AATSR instrument is designed to retrieve precise and accurate global sea surface temperature (SST) that, combined with the large data set collected from its predecessors, ATSR and ATSR-2, will provide a long-term record of SST data extending over more than 15 years. This record can be used for independent monitoring and detection of climate change. The AATSR validation programme has successfully completed its initial phase. The programme involves validation of the AATSR derived SST values using in situ radiometers, in situ buoys and global SST fields from other data sets. The results of the initial programme presented here demonstrate that the AATSR instrument is currently close to meeting its scientific objective of determining global SST to an accuracy of 0.3 K (one sigma). For night-time data, the analysis gives a warm bias of between +0.04 K (0.28 K) for buoys and +0.06 K (0.20 K) for radiometers, with slightly higher errors observed for daytime data, which shows warm biases of between +0.02 K (0.39 K) for buoys and +0.11 K (0.33 K) for radiometers. They show that the ATSR series of instruments continues to be the world leader in delivering accurate space-based observations of SST, which is a key climate parameter.
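
    The accuracy figures above follow the usual matchup convention: the bias is the mean satellite-minus-in-situ difference and the bracketed value is its standard deviation. A minimal sketch with invented matchups:

```python
# Matchup statistics as quoted in SST validation studies: mean difference
# (bias) and sample standard deviation. The matchup values are invented.

def matchup_stats(satellite_sst, in_situ_sst):
    diffs = [s - r for s, r in zip(satellite_sst, in_situ_sst)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, sd

sat  = [288.45, 290.12, 285.90, 293.31, 287.66]   # K, hypothetical matchups
buoy = [288.40, 290.05, 285.95, 293.20, 287.60]
bias, sd = matchup_stats(sat, buoy)
print(f"bias {bias:+.2f} K (sd {sd:.2f} K)")
```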

  1. The nuclear reaction matrix

    International Nuclear Information System (INIS)

    Krenciglowa, E.M.; Kung, C.L.; Kuo, T.T.S.; Osnes, E. (Department of Physics, State University of New York at Stony Brook, Stony Brook, New York 11794)

    1976-01-01

    Different definitions of the reaction matrix G appropriate to the calculation of nuclear structure are reviewed and discussed. Qualitative physical arguments are presented in support of a two-step calculation of the G-matrix for finite nuclei. In the first step the high-energy excitations are included using orthogonalized plane-wave intermediate states, and in the second step the low-energy excitations are added in, using harmonic oscillator intermediate states. Accurate calculations of G-matrix elements for nuclear structure calculations in the A ≈ 18 region are performed following this procedure and treating the Pauli exclusion operator Q_2p by the method of Tsai and Kuo. The treatment of Q_2p, the effect of the intermediate-state spectrum and the energy dependence of the reaction matrix are investigated in detail. The present matrix elements are compared with various matrix elements given in the literature. In particular, close agreement is obtained with the matrix elements calculated by Kuo and Brown using approximate methods

  2. Linear models in matrix form a hands-on approach for the behavioral sciences

    CERN Document Server

    Brown, Jonathon D

    2014-01-01

    This textbook is an approachable introduction to statistical analysis using matrix algebra. Prior knowledge of matrix algebra is not necessary. Advanced topics are easy to follow through analyses that were performed on an open-source spreadsheet using a few built-in functions. These topics include ordinary linear regression, as well as maximum likelihood estimation, matrix decompositions, nonparametric smoothers and penalized cubic splines. Each data set (1) contains a limited number of observations to encourage readers to do the calculations themselves, and (2) tells a coherent story based on statistical significance and confidence intervals. In this way, students will learn how the numbers were generated and how they can be used to make cogent arguments about everyday matters. This textbook is designed for use in upper level undergraduate courses or first year graduate courses. The first chapter introduces students to linear equations, then covers matrix algebra, focusing on three essential operations: sum ...
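
    The book's core theme can be illustrated with ordinary least squares done purely in matrix algebra: the coefficient vector solves the normal equations (X'X)b = X'y. The small data set below is invented for illustration.

```python
import numpy as np

# Ordinary least squares entirely in matrix form: build the design matrix,
# then solve the normal equations for the coefficient vector. Data made up.

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])      # roughly y = 2x

X = np.column_stack([np.ones_like(x), x])    # design matrix with intercept
beta = np.linalg.solve(X.T @ X, X.T @ y)     # solve (X'X) beta = X'y

y_hat = X @ beta                             # fitted values
residuals = y - y_hat
print(beta)                                  # [intercept, slope]
```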

  3. Matrix elasticity regulates the optimal cardiac myocyte shape for contractility

    Science.gov (United States)

    McCain, Megan L.; Yuan, Hongyan; Pasqualini, Francesco S.; Campbell, Patrick H.

    2014-01-01

    Concentric hypertrophy is characterized by ventricular wall thickening, fibrosis, and decreased myocyte length-to-width aspect ratio. Ventricular thickening is considered compensatory because it reduces wall stress, but the functional consequences of cell shape remodeling in this pathological setting are unknown. We hypothesized that decreases in myocyte aspect ratio allow myocytes to maximize contractility when the extracellular matrix becomes stiffer due to conditions such as fibrosis. To test this, we engineered neonatal rat ventricular myocytes into rectangles mimicking the 2-D profiles of healthy and hypertrophied myocytes on hydrogels with moderate (13 kPa) and high (90 kPa) elastic moduli. Actin alignment was unaffected by matrix elasticity, but sarcomere content was typically higher on stiff gels. Microtubule polymerization was higher on stiff gels, implying increased intracellular elastic modulus. On moderate gels, myocytes with moderate aspect ratios (∼7:1) generated the most peak systolic work compared with other cell shapes. However, on stiffer gels, low aspect ratios (∼2:1) generated the most peak systolic work. To compare the relative contributions of intracellular vs. extracellular elasticity to contractility, we developed an analytical model and used our experimental data to fit unknown parameters. Our model predicted that matrix elasticity dominates over intracellular elasticity, suggesting that the extracellular matrix may potentially be a more effective therapeutic target than microtubules. Our data and model suggest that myocytes with lower aspect ratios have a functional advantage when the elasticity of the extracellular matrix decreases due to conditions such as fibrosis, highlighting the role of the extracellular matrix in cardiac disease. PMID:24682394

  4. Concordance and predictive value of two adverse drug event data sets.

    Science.gov (United States)

    Cami, Aurel; Reis, Ben Y

    2014-08-22

    Accurate prediction of adverse drug events (ADEs) is an important means of controlling and reducing drug-related morbidity and mortality. Since no single "gold standard" ADE data set exists, a range of different drug safety data sets are currently used for developing ADE prediction models. There is a critical need to assess the degree of concordance between these various ADE data sets and to validate ADE prediction models against multiple reference standards. We systematically evaluated the concordance of two widely used ADE data sets - Lexi-comp from 2010 and SIDER from 2012. The strength of the association between ADE (drug) counts in Lexi-comp and SIDER was assessed using Spearman rank correlation, while the differences between the two data sets were characterized in terms of drug categories, ADE categories and ADE frequencies. We also performed a comparative validation of the Predictive Pharmacosafety Networks (PPN) model using both ADE data sets. The predictive power of PPN using each of the two validation sets was assessed using the area under Receiver Operating Characteristic curve (AUROC). The correlations between the counts of ADEs and drugs in the two data sets were 0.84 (95% CI: 0.82-0.86) and 0.92 (95% CI: 0.91-0.93), respectively. Relative to an earlier snapshot of Lexi-comp from 2005, Lexi-comp 2010 and SIDER 2012 introduced a mean of 1,973 and 4,810 new drug-ADE associations per year, respectively. The difference between these two data sets was most pronounced for Nervous System and Anti-infective drugs, Gastrointestinal and Nervous System ADEs, and postmarketing ADEs. A minor difference of 1.1% was found in the AUROC of PPN when SIDER 2012 was used for validation instead of Lexi-comp 2010. In conclusion, the ADE and drug counts in Lexi-comp and SIDER data sets were highly correlated and the choice of validation set did not greatly affect the overall prediction performance of PPN. Our results also suggest that it is important to be aware of the
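
    The concordance measure used here, Spearman rank correlation between per-drug ADE counts in the two data sets, can be computed from average ranks as sketched below; the counts are invented for illustration.

```python
# Spearman rank correlation from scratch: rank both count vectors (with
# average ranks for ties), then take the Pearson correlation of the ranks.

def ranks(values):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

lexicomp = [12, 45, 3, 30, 8, 22]   # hypothetical ADE counts per drug
sider    = [10, 50, 5, 28, 6, 25]
print(round(spearman(lexicomp, sider), 3))
```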

  5. Electron collision cross section sets of TMS and TEOS vapours

    Science.gov (United States)

    Kawaguchi, S.; Takahashi, K.; Satoh, K.; Itoh, H.

    2017-05-01

    Reliable and detailed sets of electron collision cross sections for tetramethylsilane [TMS, Si(CH3)4] and tetraethoxysilane [TEOS, Si(OC2H5)4] vapours are proposed. The cross section sets of TMS and TEOS vapours include 16 and 20 kinds of partial ionization cross sections, respectively. Electron transport coefficients, such as electron drift velocity, ionization coefficient, and longitudinal diffusion coefficient, in those vapours are calculated by Monte Carlo simulations using the proposed cross section sets, and the validity of the sets is confirmed by comparing the calculated values of those transport coefficients with measured data. Furthermore, the calculated values of the ionization coefficient in TEOS/O2 mixtures are compared with measured data to confirm the validity of the proposed cross section set.

  6. Studies Related to the Oregon State University High Temperature Test Facility: Scaling, the Validation Matrix, and Similarities to the Modular High Temperature Gas-Cooled Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Richard R. Schultz; Paul D. Bayless; Richard W. Johnson; William T. Taitano; James R. Wolf; Glenn E. McCreery

    2010-09-01

    The Oregon State University (OSU) High Temperature Test Facility (HTTF) is an integral experimental facility that will be constructed on the OSU campus in Corvallis, Oregon. The HTTF project was initiated by the U.S. Nuclear Regulatory Commission (NRC) on September 5, 2008 as Task 4 of the 5-year High Temperature Gas Reactor Cooperative Agreement via NRC Contract 04-08-138. Until August 2010, when a DOE contract was initiated to fund additional capabilities for the HTTF project, all of the funding support for the HTTF was provided by the NRC via their cooperative agreement. The U.S. Department of Energy (DOE) began its involvement with the HTTF project in late 2009 via the Next Generation Nuclear Plant (NGNP) project. Because the NRC interests in HTTF experiments were centered only on the depressurized conduction cooldown (DCC) scenario, NGNP involvement focused on expanding the experimental envelope of the HTTF to include steady-state operations and also the pressurized conduction cooldown (PCC). Since DOE has incorporated the HTTF as an ingredient in the NGNP thermal-fluids validation program, several important outcomes should be noted: 1. The reference prismatic reactor design that serves as the basis for scaling the HTTF became the modular high temperature gas-cooled reactor (MHTGR). The MHTGR has also been chosen as the reference design for all of the other NGNP thermal-fluid experiments. 2. The NGNP validation matrix is being planned using the same scaling strategy that has been implemented to design the HTTF, i.e., the hierarchical two-tiered scaling methodology developed by Zuber in 1991. Using this approach a preliminary validation matrix has been designed that integrates the HTTF experiments with the other experiments planned for the NGNP thermal-fluids verification and validation project. 3.
Initial analyses showed that the inherent power capability of the OSU infrastructure, which only allowed a total operational facility power capability of 0.6 MW, is

  7. Aging linear viscoelasticity of matrix-inclusion composite materials featuring ellipsoidal inclusions

    OpenAIRE

    LAVERGNE, Francis; SAB, Karam; SANAHUJA, Julien; BORNERT, Michel; TOULEMONDE, Charles

    2016-01-01

    A multi-scale homogenization scheme is proposed to estimate the time-dependent strains of fiber-reinforced concrete. This material is modeled as an aging linear viscoelastic composite material featuring ellipsoidal inclusions embedded in a viscoelastic cementitious matrix characterized by a time-dependent Poisson's ratio. To this end, the homogenization scheme proposed in Lavergne et al. [1] is adapted to the case of a time-dependent Poisson's ratio and it is successfully validated on a non-a...

  8. Excellent cross-cultural validity, intra-test reliability and construct validity of the Dutch Rivermead Mobility Index in patients after stroke undergoing rehabilitation

    NARCIS (Netherlands)

    Roorda, Leo D.; Green, John; De Kluis, Kiki R. A.; Molenaar, Ivo W.; Bagley, Pam; Smith, Jane; Geurts, Alexander C. H.

    2008-01-01

    Objective: To investigate the cross-cultural validity of international Dutch-English comparisons when using the Dutch Rivermead Mobility Index (RMI), and the intra-test reliability and construct validity of the Dutch RMI. Methods: Cross-cultural validity was studied in a combined data-set of Dutch

  9. Global unitary fixing and matrix-valued correlations in matrix models

    International Nuclear Information System (INIS)

    Adler, Stephen L.; Horwitz, Lawrence P.

    2003-01-01

    We consider the partition function for a matrix model with a global unitary invariant energy function. We show that the averages over the partition function of global unitary invariant trace polynomials of the matrix variables are the same when calculated with any choice of a global unitary fixing, while averages of such polynomials without a trace define matrix-valued correlation functions, that depend on the choice of unitary fixing. The unitary fixing is formulated within the standard Faddeev-Popov framework, in which the squared Vandermonde determinant emerges as a factor of the complete Faddeev-Popov determinant. We give the ghost representation for the FP determinant, and the corresponding BRST invariance of the unitary-fixed partition function. The formalism is relevant for deriving Ward identities obeyed by matrix-valued correlation functions

  10. Matrix Information Geometry

    CERN Document Server

    Bhatia, Rajendra

    2013-01-01

    This book is an outcome of the Indo-French Workshop on Matrix Information Geometries (MIG): Applications in Sensor and Cognitive Systems Engineering, which was held at Ecole Polytechnique and the Thales Research and Technology Center, Palaiseau, France, on February 23-25, 2011. The workshop was generously funded by the Indo-French Centre for the Promotion of Advanced Research (IFCPAR). During the event, 22 renowned invited French and Indian speakers gave lectures on their areas of expertise within the field of matrix analysis or processing. From these talks, a total of 17 original contributions or state-of-the-art chapters have been assembled in this volume. All articles were thoroughly peer-reviewed and improved according to the suggestions of the international referees. The 17 contributions presented are organized in three parts: (1) State-of-the-art surveys & original matrix theory work, (2) Advanced matrix theory for radar processing, and (3) Matrix-based signal processing applications.

  11. Correlation Matrix Renormalization Theory: Improving Accuracy with Two-Electron Density-Matrix Sum Rules.

    Science.gov (United States)

    Liu, C; Liu, J; Yao, Y X; Wu, P; Wang, C Z; Ho, K M

    2016-10-11

    We recently proposed the correlation matrix renormalization (CMR) theory to treat the electronic correlation effects [Phys. Rev. B 2014, 89, 045131 and Sci. Rep. 2015, 5, 13478] in ground state total energy calculations of molecular systems using the Gutzwiller variational wave function (GWF). By adopting a number of approximations, the computational effort of the CMR can be reduced to a level similar to Hartree-Fock calculations. This paper reports our recent progress in minimizing the error originating from some of these approximations. We introduce a novel sum-rule correction to obtain a more accurate description of the intersite electron correlation effects in total energy calculations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.

  12. Linear programming model for solution of matrix game with payoffs trapezoidal intuitionistic fuzzy number

    Directory of Open Access Journals (Sweden)

    Darunee Hunwisai

    2017-01-01

    In this work, we considered two-person zero-sum games with fuzzy payoffs and matrix games with payoffs of trapezoidal intuitionistic fuzzy numbers (TrIFNs). The concepts of TrIFNs and their arithmetic operations were used. The cut-set-based method for matrix games with payoffs of TrIFNs was also considered, and the interval-type value of the α-cut strategies was computed by the simplex method for linear programming. The proposed method is illustrated with a numerical example.
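
    For a crisp (non-fuzzy) payoff matrix, the game value the paper generalizes can be computed directly. The sketch below solves an ordinary 2x2 zero-sum game in closed form, the special case that a simplex/LP formulation extends to larger games; the payoff matrix is a made-up example, not one from the paper.

```python
# Value and optimal row strategy of a 2x2 zero-sum game. For an m x n game
# the same max-min problem is posed as a linear program and solved by the
# simplex method, as in the paper's fuzzy setting.

def solve_2x2_zero_sum(m):
    """Return (value, row_strategy) for payoff matrix m = [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    # Check for a saddle point (pure-strategy solution) first.
    row_mins = [min(a, b), min(c, d)]
    col_maxs = [max(a, c), max(b, d)]
    if max(row_mins) == min(col_maxs):
        v = max(row_mins)
        return v, [1.0, 0.0] if row_mins[0] == v else [0.0, 1.0]
    # Otherwise the mixed-strategy solution has a closed form.
    denom = a + d - b - c
    p = (d - c) / denom                   # probability of playing row 1
    value = (a * d - b * c) / denom
    return value, [p, 1 - p]

value, strategy = solve_2x2_zero_sum([[3, -1], [-2, 4]])
print(value, strategy)
```

With the mixed strategy [0.6, 0.4], the row player's expected payoff is 1.0 against either column, which is exactly the game value.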

  13. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    Science.gov (United States)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
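
    The key algebraic fact behind the alternating-directions approach can be sketched with a separable correlation model: the full 3-D correlation matrix is then the Kronecker product of small 1-D factors, so only the factors need to be decomposed. Grid sizes, length scales, and the Gaussian correlation form below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# If C = Cx (x) Cy (x) Cz (Kronecker products), then eigen-decomposing the
# small 1-D factors and taking Kronecker products of their eigenpairs
# reproduces the decomposition of the large matrix without ever forming it.

def corr_1d(n, length_scale):
    """Gaussian correlation matrix for n equally spaced grid points."""
    idx = np.arange(n)
    d = idx[:, None] - idx[None, :]
    return np.exp(-(d / length_scale) ** 2)

Cx, Cy, Cz = corr_1d(4, 2.0), corr_1d(3, 1.5), corr_1d(2, 1.0)
C = np.kron(np.kron(Cx, Cy), Cz)          # full 24 x 24 correlation matrix

wx, Vx = np.linalg.eigh(Cx)               # decompose only the small factors
wy, Vy = np.linalg.eigh(Cy)
wz, Vz = np.linalg.eigh(Cz)
w = np.kron(np.kron(wx, wy), wz)          # eigenvalues of C
V = np.kron(np.kron(Vx, Vy), Vz)          # eigenvectors of C
print(np.allclose(V @ np.diag(w) @ V.T, C))
```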

  14. Engineering a collagen matrix that replicates the biological properties of native extracellular matrix.

    Science.gov (United States)

    Nam, Kwangwoo; Sakai, Yuuki; Funamoto, Seiichi; Kimura, Tsuyoshi; Kishida, Akio

    2011-01-01

    In this study, we aimed to replicate the function of native tissues for use in tissue engineering and regenerative medicine. The key to such replication is the preparation of an artificial collagen matrix that possesses a structure resembling that of the extracellular matrix. We therefore prepared a collagen matrix by fibrillogenesis in a NaCl/Na(2)HPO(4) aqueous solution using a dialysis cassette and investigated its biological behavior in vitro and in vivo. In vitro cell adhesion and proliferation did not show any significant differences. The degradation rate in the living body could be controlled according to the preparation conditions: the collagen matrix with high water content (F-collagen matrix, >98%) showed fast degradation, whereas the collagen matrix with lower water content (T-collagen matrix, >80%) showed no degradation for 8 weeks. The degradation did not affect the inflammatory response at all, and a relatively faster wound-healing response was observed. Comparing this result with that of collagen gel and decellularized cornea, it can be concluded that the structural factor is very important and that no abnormal cell behavior would be observed for a quaternary-structured collagen matrix.

  15. Carbonate fuel cell matrix

    Science.gov (United States)

    Farooque, Mohammad; Yuh, Chao-Yi

    1996-01-01

    A carbonate fuel cell matrix comprising support particles and crack attenuator particles which are made platelet in shape to increase the resistance of the matrix to through cracking. Also disclosed is a matrix having porous crack attenuator particles and a matrix whose crack attenuator particles have a thermal coefficient of expansion which is significantly different from that of the support particles, and a method of making platelet-shaped crack attenuator particles.

  16. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

    We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with a highly irregular distribution of points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample irregular data sets in a near-optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
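
    The one-point-per-step idea can be illustrated with a Kaczmarz-style update, which uses a single random sample per iteration and no matrix factorizations; this is a sketch in the spirit of the SA method described above, not the authors' exact algorithm, and the basis, target, and step count are illustrative choices.

```python
import numpy as np

# Sequential, matrix-free function approximation: each iteration sees one
# sample of the target function and projects the current coefficient vector
# onto the hyperplane consistent with that sample (Kaczmarz step).

rng = np.random.default_rng(0)

def basis(x):
    """Polynomial basis [1, x, x^2] (an illustrative choice)."""
    return np.array([1.0, x, x * x])

target = lambda x: 2.0 + 3.0 * x - 1.0 * x * x   # lies in the span of basis

c = np.zeros(3)                                  # coefficients being learned
for _ in range(5000):
    x = rng.uniform(-1.0, 1.0)                   # one random data point
    phi = basis(x)
    r = target(x) - phi @ c                      # residual at this point
    c += r * phi / (phi @ phi)                   # projection step

print(np.round(c, 3))   # approaches [2, 3, -1]
```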

  17. Validation of MIMGO: a method to identify differentially expressed GO terms in a microarray dataset

    Directory of Open Access Journals (Sweden)

    Yamada Yoichi

    2012-12-01

    Full Text Available Abstract Background We previously proposed an algorithm for the identification of GO terms that commonly annotate genes whose expression is upregulated or downregulated in some microarray data compared with in other microarray data. We call these “differentially expressed GO terms” and have named the algorithm “matrix-assisted identification method of differentially expressed GO terms” (MIMGO. MIMGO can also identify microarray data in which genes annotated with a differentially expressed GO term are upregulated or downregulated. However, MIMGO has not yet been validated on a real microarray dataset using all available GO terms. Findings We combined Gene Set Enrichment Analysis (GSEA with MIMGO to identify differentially expressed GO terms in a yeast cell cycle microarray dataset. GSEA followed by MIMGO (GSEA + MIMGO correctly identified (p Conclusions MIMGO is a reliable method to identify differentially expressed GO terms comprehensively.

  18. Method of forming a ceramic matrix composite and a ceramic matrix component

    Science.gov (United States)

    de Diego, Peter; Zhang, James

    2017-05-30

    A method of forming a ceramic matrix composite component includes providing a formed ceramic member having a cavity, filling at least a portion of the cavity with a ceramic foam. The ceramic foam is deposited on a barrier layer covering at least one internal passage of the cavity. The method includes processing the formed ceramic member and ceramic foam to obtain a ceramic matrix composite component. Also provided is a method of forming a ceramic matrix composite blade and a ceramic matrix composite component.

  19. The sintered microsphere matrix for bone tissue engineering: in vitro osteoconductivity studies.

    Science.gov (United States)

    Borden, Mark; Attawia, Mohamed; Laurencin, Cato T

    2002-09-05

    A tissue engineering approach has been used to design three-dimensional synthetic matrices for bone repair. The osteoconductivity and degradation profile of a novel polymeric bone-graft substitute was evaluated in an in vitro setting. Using the copolymer poly(lactide-co-glycolide) [PLAGA], a sintering technique based on microsphere technology was used to fabricate three-dimensional porous scaffolds for bone regeneration. Osteoblasts and fibroblasts were seeded onto a 50:50 PLAGA scaffold. Morphologic evaluation through scanning electron microscopy demonstrated that both cell types attached and spread over the scaffold. Cells migrated through the matrix using cytoplasmic extensions to bridge the structure. Cross-sectional images indicated that cellular proliferation had penetrated into the matrix approximately 700 microm from the surface. Examination of the surfaces of cell/matrix constructs demonstrated that cellular proliferation had encompassed the pores of the matrix by 14 days of cell culture. With the aim of optimizing polymer composition and polymer molecular weight, a degradation study was conducted utilizing the matrix. The results demonstrate that degradation of the sintered matrix is dependent on molecular weight, copolymer ratio, and pore volume. From this data, it was determined that 75:25 PLAGA with an initial molecular weight of 100,000 has an optimal degradation profile. These studies show that the sintered microsphere matrix has an osteoconductive structure capable of functioning as a cellular scaffold with a degradation profile suitable for bone regeneration. Copyright 2002 Wiley Periodicals, Inc.

  20. Determination of arsenic and cadmium in shellfish samples by graphite furnace atomic absorption spectrometry using matrix modifier

    International Nuclear Information System (INIS)

    Cortez Diaz, Mirella del Carmen

    2002-01-01

    Heavy metals are a major source of environmental contamination and are also highly toxic to humans. Since shellfish are bio-accumulators of these metals, proper techniques for quantifying them should be available. This work aims to develop an analytical method for the quantitative determination of heavy metals in biological materials (shellfish), specifically arsenic and cadmium at the trace level, using graphite furnace atomic absorption spectrometry, for which nickel and phosphate solutions were used as matrix modifiers. Prior to the analysis, the sample was digested with nitric acid in a DAB II pressure digestion system in order to destroy the organic matter. The instrument conditions were initially set (wavelength, slit, integration peaks, graphite tube, etc.), then the working range was defined for each element and the most appropriate operational parameters were studied, such as temperature, ramp times, hold times and internal gas flow, in the different stages of the electrothermal treatment (drying, calcination, atomization) for the furnace program. Once the above-mentioned conditions were set, and since this was a biological sample, a matrix chemical modifier had to be used in order to make the elements that accompany the element being studied more volatile. In this way the chemical and spectral interferences decrease, together with the high background absorption of the matrix. Therefore, different matrix modifiers were studied for each analyte. The method validation was done using Certified Oyster Tissue Reference Material No. 1566a from the National Institute of Standards and Technology, applying different tests in order to eliminate outliers. Repeatability, uncertainty, sensitivity, linear range, working range, detection limit and quantification limit were evaluated for each element, and the results were compared with the values for the certified material. The Fisher and Student tests were the statistical tools used. The experimental values
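
    The Student's t comparison mentioned at the end can be sketched as a one-sample t-test of replicate measurements against a certified reference value; the replicate concentrations below are invented, not the study's data.

```python
import math

# Method-validation check: does the mean of replicate measurements differ
# significantly from the certified value? Compare |t| with the tabulated
# critical t for n - 1 degrees of freedom.

def one_sample_t(measurements, certified):
    n = len(measurements)
    mean = sum(measurements) / n
    s = math.sqrt(sum((m - mean) ** 2 for m in measurements) / (n - 1))
    t = (mean - certified) / (s / math.sqrt(n))
    return t, n - 1

replicates = [14.1, 13.8, 14.3, 14.0, 13.9]   # mg/kg, hypothetical values
certified  = 14.0                              # hypothetical certified value
t, dof = one_sample_t(replicates, certified)
print(round(t, 3), dof)
```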

  1. Hamiltonian formalism, quantization and S matrix for supergravity. [S matrix, canonical constraints

    Energy Technology Data Exchange (ETDEWEB)

    Fradkin, E S; Vasiliev, M A [AN SSSR, Moscow. Fizicheskij Inst.

    1977-12-05

    The canonical formalism for supergravity is constructed. The algebra of canonical constraints is found. The correct expression for the S matrix is obtained. The usual 'covariant methods' lead to an incorrect S matrix in supergravity, since a new four-particle interaction of ghost fields survives in the Lagrangian expression of the S matrix.

  2. Determination of the full polarimetric transition matrix of a magnetized plasma from measurements of phase only

    International Nuclear Information System (INIS)

    Segre, S.E.

    1996-09-01

    It is shown that, by using a convenient modulated input polarization, it is possible to determine the full plasma polarimetric transition matrix purely from phase measurements. These are advantageous compared to previously proposed amplitude measurements. Two alternative sets of configurations for the input polarization are considered. The elements of the transition matrix thus found can be used in the reconstruction of the MHD equilibrium.

  3. Exponential formula for the reachable sets of quantum stochastic differential inclusions

    International Nuclear Information System (INIS)

    Ayoola, E.O.

    2001-07-01

    We establish an exponential formula for the reachable sets of quantum stochastic differential inclusions (QSDI) which are locally Lipschitzian with convex values. Our main results partially rely on an auxiliary result concerning the density, in the topology of the locally convex space of solutions, of the set of trajectories whose matrix elements are continuously differentiable. By applying the exponential formula, we obtain results concerning convergence of the discrete approximations of the reachable set of the QSDI. This extends similar results of Wolenski for classical differential inclusions to the present noncommutative quantum setting. (author)

  4. Printing microstructures in a polymer matrix using a ferrofluid droplet

    International Nuclear Information System (INIS)

    Abdel Fattah, Abdel Rahman; Ghosh, Suvojit; Puri, Ishwar K.

    2016-01-01

    We print complex curvilinear microstructures in an elastomer matrix using a ferrofluid droplet as the print head. A magnetic field moves the droplet along a prescribed path in liquid polydimethylsiloxane (PDMS). The droplet sheds magnetic nanoparticle (MNP) clusters in its wake, forming printed features. The PDMS is subsequently heated so that it crosslinks, which preserves the printed features in the elastomer matrix. The competition between magnetic and drag forces experienced by the ferrofluid droplet and its trailing MNPs highlights design criteria for successful printing, which are experimentally confirmed. The method promises new applications, such as flexible 3D circuitry. - Highlights: • Magnetically guided miscible ferrofluid droplets print 3D patterns in a polymer. • Printing mechanism depends on the dynamics between the fluid and magnetic forces. • Droplet size influences the width of the printed trail. • The colloidal distribution of the ferrofluid is important for pattern integrity. • Particle trajectories and trails are simulated and validated through experiments.
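
    The competition between magnetic and drag forces described above can be sketched with a simple force balance. All symbols below are assumptions for illustration, not quantities taken from the paper:

    ```latex
    % Magnetophoretic force on the droplet/MNP clusters versus Stokes drag
    % in the viscous liquid PDMS (V_p: particle/cluster volume, M: magnetization,
    % H: applied field, eta: PDMS viscosity, R: radius, v: speed):
    F_{\mathrm{mag}} = \mu_0 V_p \,(\mathbf{M}\cdot\nabla)\mathbf{H},
    \qquad
    F_{\mathrm{drag}} = 6\pi \eta R v .
    % Trailing MNP clusters are shed where the drag on them exceeds the
    % local magnetic restoring force, i.e. F_drag > F_mag.
    ```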

  5. Printing microstructures in a polymer matrix using a ferrofluid droplet

    Energy Technology Data Exchange (ETDEWEB)

    Abdel Fattah, Abdel Rahman [Department of Mechanical Engineering, Hamilton, Ontario (Canada); Ghosh, Suvojit [Department of Engineering Physics, McMaster University, Hamilton, Ontario (Canada); Puri, Ishwar K. [Department of Mechanical Engineering, Hamilton, Ontario (Canada); Department of Engineering Physics, McMaster University, Hamilton, Ontario (Canada)

    2016-03-01

    We print complex curvilinear microstructures in an elastomer matrix using a ferrofluid droplet as the print head. A magnetic field moves the droplet along a prescribed path in liquid polydimethylsiloxane (PDMS). The droplet sheds magnetic nanoparticle (MNP) clusters in its wake, forming printed features. The PDMS is subsequently heated so that it crosslinks, which preserves the printed features in the elastomer matrix. The competition between magnetic and drag forces experienced by the ferrofluid droplet and its trailing MNPs highlights design criteria for successful printing, which are experimentally confirmed. The method promises new applications, such as flexible 3D circuitry. - Highlights: • Magnetically guided miscible ferrofluid droplets print 3D patterns in a polymer. • Printing mechanism depends on the dynamics between the fluid and magnetic forces. • Droplet size influences the width of the printed trail. • The colloidal distribution of the ferrofluid is important for pattern integrity. • Particle trajectories and trails are simulated and validated through experiments.

  6. Complete set of essential parameters of an effective theory

    Science.gov (United States)

    Ioffe, M. V.; Vereshagin, V. V.

    2018-04-01

    The present paper continues the series [V. V. Vereshagin, "True self-energy function and reducibility in effective scalar theories," Phys. Rev. D 89, 125022 (2014), 10.1103/PhysRevD.89.125022; A. Vereshagin and V. Vereshagin, "Resultant parameters of effective theory," Phys. Rev. D 69, 025002 (2004), 10.1103/PhysRevD.69.025002; K. Semenov-Tian-Shansky, A. Vereshagin, and V. Vereshagin, "S-matrix renormalization in effective theories," Phys. Rev. D 73, 025020 (2006), 10.1103/PhysRevD.73.025020] devoted to the systematic study of effective scattering theories. We consider matrix elements of the effective Lagrangian monomials (in the interaction picture) of arbitrary high dimension D and show that the full set of corresponding coupling constants contains parameters of both kinds: essential and redundant. Since it would be pointless to formulate renormalization prescriptions for redundant parameters, it is necessary to select the full set of the essential ones. This is done in the present paper for the case of the single scalar field.

  7. The Matrix Cookbook

    DEFF Research Database (Denmark)

    Petersen, Kaare Brandt; Pedersen, Michael Syskind

    Matrix identities, relations and approximations. A desktop reference for a quick overview of the mathematics of matrices.
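
    Two representative identities of the kind the Cookbook collects, reproduced here for illustration:

    ```latex
    % Derivative of the log-determinant, and the mixed-product
    % property of the Kronecker product:
    \frac{\partial}{\partial \mathbf{X}} \ln \det \mathbf{X}
      = \left(\mathbf{X}^{-1}\right)^{\top},
    \qquad
    (\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D})
      = (\mathbf{A}\mathbf{C}) \otimes (\mathbf{B}\mathbf{D}).
    ```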

  8. Development and Validation of a Multiresidue Method for the Determination of Pesticides in Dry Samples (Rice and Wheat Flour) Using Liquid Chromatography/Triple Quadrupole Tandem Mass Spectrometry.

    Science.gov (United States)

    Grande-Martínez, Ángel; Arrebola, Francisco Javier; Moreno, Laura Díaz; Vidal, José Luis Martínez; Frenich, Antonia Garrido

    2015-01-01

    A rapid and sensitive multiresidue method was developed and validated for the determination of around 100 pesticides in dry samples (rice and wheat flour) by ultra-performance LC coupled to a triple quadrupole mass analyzer working in tandem mode (UPLC/QqQ-MS/MS). The sample preparation step was optimized for both matrices. Pesticides were extracted from rice samples using aqueous ethyl acetate, while aqueous acetonitrile extraction [modified QuEChERS (quick, easy, cheap, effective, rugged, and safe) method] was used for wheat flour matrices. In both cases the extracts were then cleaned up by dispersive solid phase extraction with MgSO4 and primary secondary amine + C18 sorbents. A further cleanup step with Florisil was necessary to remove fat in wheat flour. The method was validated at two concentration levels (3.6 and 40 μg/kg for most compounds), obtaining recoveries ranging from 70 to 120%, intraday and interday precision values ≤20% expressed as RSDs, and expanded uncertainty values ≤50%. The LOQ values ranged between 3.6 and 20 μg/kg, although it was set at 3.6 μg/kg for the majority of the pesticides. The method was applied to the analysis of 20 real samples, and no pesticides were detected.
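
    The recovery and precision criteria quoted above can be computed as a short sketch. The replicate values and spike level below are hypothetical, chosen only to illustrate the acceptance check:

    ```python
    import statistics

    def recovery_percent(measured_mean, spiked_level):
        """Recovery (%) of a pesticide spiked at a known level."""
        return 100.0 * measured_mean / spiked_level

    def rsd_percent(replicates):
        """Relative standard deviation (%), the intraday/interday precision metric."""
        return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

    # Hypothetical replicate results for one pesticide spiked at 40 ug/kg
    replicates = [37.1, 38.4, 36.8, 39.0, 37.7]
    rec = recovery_percent(statistics.mean(replicates), 40.0)
    rsd = rsd_percent(replicates)
    # Acceptance per the abstract: 70% <= recovery <= 120% and RSD <= 20%
    ok = 70.0 <= rec <= 120.0 and rsd <= 20.0
    ```
    
    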

  9. Separation of soft and collinear infrared limits of QCD squared matrix elements

    CERN Document Server

    Nagy, Zoltan; Somogyi, Gabor; Trócsányi, Zoltán

    2007-01-01

    We present a simple way of separating the overlap between the soft and collinear factorization formulae of QCD squared matrix elements. We check its validity explicitly for single and double unresolved emissions of tree-level processes. The new method makes possible the definition of helicity-dependent subtraction terms for regularizing the real contributions in computing radiative corrections to QCD jet cross sections. This implies application of Monte Carlo helicity summation in computing higher order corrections.

  10. Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-01-01

    Sparse matrix-matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
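
    The hash-map accumulator compared in such SpGEMM kernels can be sketched in a few lines. This is a generic illustration (not the paper's kkSpGEMM implementation), using a dict-of-dicts sparse format for brevity:

    ```python
    def spgemm(A, B):
        """Sparse matrix-matrix product C = A @ B using a hash-map accumulator,
        one of the accumulator choices compared in SpGEMM kernels.
        Matrices are {row: {col: value}} dictionaries of nonzeros."""
        C = {}
        for i, row_a in A.items():
            acc = {}                              # hash accumulator for row i of C
            for k, a_ik in row_a.items():
                for j, b_kj in B.get(k, {}).items():
                    acc[j] = acc.get(j, 0.0) + a_ik * b_kj
            if acc:
                C[i] = acc
        return C

    A = {0: {0: 1.0, 2: 2.0}, 1: {1: 3.0}}
    B = {0: {1: 4.0}, 1: {0: 5.0}, 2: {1: 6.0}}
    C = spgemm(A, B)   # row 0: 1*4 + 2*6 in column 1; row 1: 3*5 in column 0
    ```

    A production kernel would instead use compressed formats (CSR) and per-thread dense or hash accumulators; the dict version above only shows the accumulation pattern.
    
    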

  11. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  12. A random matrix approach to the crossover of energy-level statistics from Wigner to Poisson

    International Nuclear Information System (INIS)

    Datta, Nilanjana; Kunz, Herve

    2004-01-01

    We analyze a class of parametrized random matrix models, introduced by Rosenzweig and Porter, which is expected to describe the energy level statistics of quantum systems whose classical dynamics varies from regular to chaotic as a function of a parameter. We compute the generating function for the correlations of energy levels, in the limit of infinite matrix size. The crossover between Poisson and Wigner statistics is measured by a renormalized coupling constant. The model is exactly solved in the sense that, in the limit of infinite matrix size, the energy-level correlation functions and their generating function are given in terms of a finite set of integrals
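
    The Poisson-to-Wigner crossover measured by a coupling constant can be illustrated with a toy 2×2 version of the Rosenzweig-Porter model. This is a pedagogical sketch, not the paper's infinite-size calculation; the spacing threshold 0.1 is an arbitrary choice:

    ```python
    import math
    import random

    def near_degeneracy_fraction(lam, n_samples=20000, seed=0):
        """Fraction of near-degenerate levels in a toy 2x2 Rosenzweig-Porter-like
        matrix H = [[a, lam*c], [lam*c, b]], with a, b, c standard normal.
        lam = 0 gives independent (Poisson-like) levels; lam > 0 turns on
        level repulsion, the hallmark of the Wigner regime."""
        rng = random.Random(seed)
        small = 0
        for _ in range(n_samples):
            a, b, c = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
            gap = math.sqrt((a - b) ** 2 + 4.0 * (lam * c) ** 2)  # eigenvalue gap
            if gap < 0.1:
                small += 1
        return small / n_samples

    # Level repulsion: near-degeneracies become rarer as the coupling grows
    p_poisson_like = near_degeneracy_fraction(0.0)
    p_wigner_like = near_degeneracy_fraction(1.0)
    ```
    
    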

  13. Effects of mistuning and matrix structure on the topology of frequency response curves

    Science.gov (United States)

    Afolabi, Dare

    1989-01-01

    The stability of a frequency response curve under mild perturbations of the system's matrix is investigated. Using recent developments in the theory of singularities of differentiable maps, it is shown that the stability of a response curve depends on the structure of the system's matrix. In particular, the frequency response curves of a cylic system are shown to be unstable. Consequently, slight parameter variations engendered by mistuning will induce a significant difference in the topology of the forced response curves, if the mistuning transformation crosses the bifurcation set.

  14. Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95

    Energy Technology Data Exchange (ETDEWEB)

    Roberson, G. Patrick [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Browne, Jolyon [Advanced Research & Applications Corporation, Sunnyvale, CA (United States)

    2018-01-22

    The development of an Image Matrix Processor (IMP) was proposed that would provide an economical means to perform rapid ray-tracing processes on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.

  15. The Exopolysaccharide Matrix

    Science.gov (United States)

    Koo, H.; Falsetta, M.L.; Klein, M.I.

    2013-01-01

    Many infectious diseases in humans are caused or exacerbated by biofilms. Dental caries is a prime example of a biofilm-dependent disease, resulting from interactions of microorganisms, host factors, and diet (sugars), which modulate the dynamic formation of biofilms on tooth surfaces. All biofilms have a microbial-derived extracellular matrix as an essential constituent. The exopolysaccharides formed through interactions between sucrose- (and starch-) and Streptococcus mutans-derived exoenzymes present in the pellicle and on microbial surfaces (including non-mutans) provide binding sites for cariogenic and other organisms. The polymers formed in situ enmesh the microorganisms while forming a matrix facilitating the assembly of three-dimensional (3D) multicellular structures that encompass a series of microenvironments and are firmly attached to teeth. The metabolic activity of microbes embedded in this exopolysaccharide-rich and diffusion-limiting matrix leads to acidification of the milieu and, eventually, acid-dissolution of enamel. Here, we discuss recent advances concerning spatio-temporal development of the exopolysaccharide matrix and its essential role in the pathogenesis of dental caries. We focus on how the matrix serves as a 3D scaffold for biofilm assembly while creating spatial heterogeneities and low-pH microenvironments/niches. Further understanding on how the matrix modulates microbial activity and virulence expression could lead to new approaches to control cariogenic biofilms. PMID:24045647

  16. Strong, Weak and Branching Bisimulation for Transition Systems and Markov Reward Chains: A Unifying Matrix Approach

    Directory of Open Access Journals (Sweden)

    Nikola Trčka

    2009-12-01

    We first study labeled transition systems with explicit successful termination. We establish the notions of strong, weak, and branching bisimulation in terms of boolean matrix theory, introducing thus a novel and powerful algebraic apparatus. Next we consider Markov reward chains which are standardly presented in real matrix theory. By interpreting the obtained matrix conditions for bisimulations in this setting, we automatically obtain the definitions of strong, weak, and branching bisimulation for Markov reward chains. The obtained strong and weak bisimulations are shown to coincide with some existing notions, while the obtained branching bisimulation is new, but its usefulness is questionable.

  17. Specific extracellular matrix remodeling signature of colon hepatic metastases.

    Directory of Open Access Journals (Sweden)

    Maguy Del Rio

    To identify genes implicated in metastatic colonization of the liver in colorectal cancer, we collected pairs of primary tumors and hepatic metastases before chemotherapy in 13 patients. We compared mRNA expression in the pairs of patients to identify genes deregulated during metastatic evolution. We then validated the identified genes using data obtained by different groups. The 33-gene signature was able to classify 87% of hepatic metastases, 98% of primary tumors, 97% of normal colon mucosa, and 95% of normal liver tissues in six datasets obtained using five different microarray platforms. The identified genes are specific to colon cancer and hepatic metastases since other metastatic locations and hepatic metastases originating from breast cancer were not classified by the signature. Gene Ontology term analysis showed that 50% of the genes are implicated in extracellular matrix remodeling, and more precisely in cell adhesion, extracellular matrix organization and angiogenesis. Because of the high efficiency of the signature to classify colon hepatic metastases, the identified genes represent promising targets to develop new therapies that will specifically affect hepatic metastasis microenvironment.

  18. N=2 Minimal Conformal Field Theories and Matrix Bifactorisations of x^d

    Science.gov (United States)

    Davydov, Alexei; Camacho, Ana Ros; Runkel, Ingo

    2018-01-01

    We establish an action of the representations of N = 2 superconformal symmetry on the category of matrix factorisations of the potentials x^d and x^d - y^d, for d odd. More precisely we prove a tensor equivalence between (a) the category of Neveu-Schwarz-type representations of the N = 2 minimal super vertex operator algebra at central charge 3 - 6/d, and (b) a full subcategory of graded matrix factorisations of the potential x^d - y^d. The subcategory in (b) is given by permutation-type matrix factorisations with consecutive index sets. The physical motivation for this result is the Landau-Ginzburg/conformal field theory correspondence, where it amounts to the equivalence of a subset of defects on both sides of the correspondence. Our work builds on results by Brunner and Roggenkamp [BR], where an isomorphism of fusion rules was established.
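
    The basic notion the abstract relies on can be written out in the standard form. A matrix factorisation of a potential W pairs two factors whose products both give W times the identity; for the one-variable potential W = x^d this is simply:

    ```latex
    % A rank-one matrix factorisation of W = x^d: pick any 0 < k < d and set
    Q \;=\; \begin{pmatrix} 0 & x^{k} \\ x^{d-k} & 0 \end{pmatrix},
    \qquad
    Q^{2} \;=\; x^{d}\,\mathbb{1}_{2\times 2}.
    % Permutation-type factorisations of x^d - y^d instead use factors
    % \prod_{i \in I}(x - \eta^{i} y) with \eta a primitive d-th root of
    % unity; "consecutive index sets" refers to the subsets I above.
    ```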

  19. Diagnostic PCR: validation and sample preparation are two sides of the same coin

    DEFF Research Database (Denmark)

    Hoorfar, Jeffrey; Wolffs, Petra; Radstrøm, Peter

    2004-01-01

    Increased use of powerful PCR technology for the routine detection of pathogens has focused attention on the need for international validation and preparation of official non-commercial guidelines. Bacteria of epidemiological importance should be the prime focus, although a "validation ... of quantitative reference DNA material and reagents, production of stringent protocols and tools for thermal cycler performance testing, uncomplicated sample preparation techniques, and extensive ring trials for assessment of the efficacy of selected matrix/pathogen detection protocols...

  20. Creep of plain weave polymer matrix composites

    Science.gov (United States)

    Gupta, Abhishek

    Polymer matrix composites are increasingly used in various industrial sectors to reduce structural weight and improve performance. Woven (also known as textile) composites are one class of polymer matrix composites with increasing market share, mostly due to their light weight, their flexibility to form into desired shapes, their mechanical properties and their toughness. Due to the viscoelasticity of the polymer matrix, time-dependent degradation in modulus (creep) and strength (creep rupture) are two of the major mechanical properties engineers require to design a structure reliably when using these materials. Unfortunately, creep and creep rupture of woven composites have received little attention from the research community and thus there is a dire need to generate additional knowledge and prediction models, given the increasing market share of woven composites in load-bearing structural applications. Currently available creep models are limited in scope and have not been validated for any loading orientation and time period beyond the experimental time window. In this thesis, an analytical creep model, namely the Modified Equivalent Laminate Model (MELM), was developed to predict tensile creep of plain weave composites for any orientation of the load with respect to the orientation of the fill and warp fibers, using creep of unidirectional composites. The ability of the model to predict creep for any orientation of the load is a "first" in this area. The model was validated using an extensive experimental program involving the tensile creep of plain weave composites under varying loading orientations and service conditions. Plain weave epoxy (F263)/carbon fiber (T300) composite, currently used in aerospace applications, was procured as fabrics from Hexcel Corporation. Creep tests were conducted under two loading conditions: on-axis loading (0°) and off-axis loading (45°). Constant load creep, in the temperature range of 80-240°C and stress range of 1-70% UTS of the
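
    Time-dependent creep strain in polymer matrix composites is commonly described by a Findley-type power law. The sketch below is a generic illustration of that form under assumed parameters, not the thesis's MELM model:

    ```python
    def findley_creep_strain(t_hours, e0, m, n):
        """Findley-type power law for viscoelastic creep of polymer-matrix
        composites: total strain = instantaneous elastic strain e0 plus a
        time-dependent term m * t**n (all parameters assumed/hypothetical)."""
        return e0 + m * t_hours ** n

    # Hypothetical parameters for an on-axis (0 deg) coupon
    e_1000h = findley_creep_strain(1000.0, e0=0.004, m=2.0e-4, n=0.25)
    ```
    
    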