WorldWideScience

Sample records for rigorous jackknife cross-validation

  1. Cross validation in LULOO

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Hansen, Lars Kai

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. Linear unlearning of examples has recently been suggested as an approach to approximative cross-validation. Here we briefly review...... the linear unlearning scheme, dubbed LULOO, and we illustrate it on a system identification example. Further, we address the possibility of extracting confidence information (error bars) from the LULOO ensemble....

  2. The Infinitesimal Jackknife with Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  3. Bias Correction with Jackknife, Bootstrap, and Taylor Series

    OpenAIRE

    Jiao, Jiantao; Han, Yanjun; Weissman, Tsachy

    2017-01-01

    We analyze the bias correction methods using jackknife, bootstrap, and Taylor series. We focus on the binomial model, and consider the problem of bias correction for estimating $f(p)$, where $f \\in C[0,1]$ is arbitrary. We characterize the supremum norm of the bias of general jackknife and bootstrap estimators for any continuous functions, and demonstrate that in the delete-$d$ jackknife, different values of $d$ may lead to drastically different behavior. We show that in the binomial ...
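
    For orientation, the delete-1 jackknife bias correction that the delete-$d$ analysis generalizes can be stated in one line (the standard textbook form, not a result specific to this paper):

    $$\hat{\theta}_{\mathrm{jack}} = n\,\hat{\theta} - \frac{n-1}{n}\sum_{i=1}^{n}\hat{\theta}_{(i)},$$

    where $\hat{\theta}_{(i)}$ denotes the estimate recomputed with observation $i$ deleted; this combination removes the $O(n^{-1})$ term of the bias while leaving higher-order terms.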

  4. A short note on jackknifing the concordance correlation coefficient.

    Science.gov (United States)

    Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir

    2014-02-10

    Lin's concordance correlation coefficient (CCC) is a very popular scaled index of agreement used in applied statistics. To obtain a confidence interval (CI) for the estimate of CCC, jackknifing was proposed and shown to perform well in simulation as well as in applications. However, a theoretical proof of the validity of the jackknife CI for the CCC has not been presented yet. In this note, we establish a sufficient condition for using the jackknife method to construct the CI for the CCC. Copyright © 2013 John Wiley & Sons, Ltd.
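
    The note is theoretical, but the jackknife interval it studies is easy to sketch. Below is a minimal numpy version using the common recipe of jackknife pseudo-values on the Fisher z-transformed CCC; the function names and the fixed 95% level are my choices, not the authors':

    ```python
    import numpy as np

    def ccc(x, y):
        """Lin's concordance correlation coefficient."""
        mx, my = x.mean(), y.mean()
        s_xy = ((x - mx) * (y - my)).mean()
        return 2 * s_xy / (x.var() + y.var() + (mx - my) ** 2)

    def jackknife_ci_ccc(x, y):
        """95% jackknife CI for the CCC via Fisher z-transformed pseudo-values."""
        n = len(x)
        z_full = np.arctanh(ccc(x, y))
        idx = np.arange(n)
        z_loo = np.array([np.arctanh(ccc(x[idx != i], y[idx != i]))
                          for i in range(n)])
        pseudo = n * z_full - (n - 1) * z_loo      # jackknife pseudo-values
        est = pseudo.mean()
        se = pseudo.std(ddof=1) / np.sqrt(n)
        return np.tanh(est - 1.96 * se), np.tanh(est + 1.96 * se)

    rng = np.random.default_rng(0)
    x = rng.normal(size=60)
    y = x + 0.5 * rng.normal(size=60)
    print(jackknife_ci_ccc(x, y))
    ```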

  5. Classification in hyperspectral images by independent component analysis, segmented cross-validation and uncertainty estimates

    Directory of Open Access Journals (Sweden)

    Beatriz Galindo-Prieto

    2018-02-01

    Independent component analysis combined with various strategies for cross-validation, uncertainty estimates by jack-knifing and critical Hotelling’s T2 limits estimation, proposed in this paper, is used for classification purposes in hyperspectral images. To the best of our knowledge, the combined approach of methods used in this paper has not been previously applied to hyperspectral imaging analysis for interpretation and classification in the literature. The data analysis performed here aims to distinguish between four different types of plastics, some of them containing brominated flame retardants, from their near infrared hyperspectral images. The results showed that the approach used here can be successfully used for unsupervised classification. Validation approaches, especially leave-one-out cross-validation and region-of-interest scheme validation, are also compared.

  6. Linear Unlearning for Cross-Validation

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Larsen, Jan

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. In this paper we suggest linear unlearning of examples as an approach to approximative cross-validation. Further, we discuss...... time series prediction benchmark demonstrate the potential of the linear unlearning technique...

  7. A theory of cross-validation error

    OpenAIRE

    Turney, Peter D.

    1994-01-01

    This paper presents a theory of error in cross-validation testing of algorithms for predicting real-valued attributes. The theory justifies the claim that predicting real-valued attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore, the theory indicates precisely how these conflicting demands must be balanced, in order to minimize cross-validation error. A general theory is presented, then it is developed in detail for linear regression and instance-bas...

  8. Online cross-validation-based ensemble learning.

    Science.gov (United States)

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
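
    As a toy sketch of online cross-validation only (the paper's estimator targets infinite-dimensional parameters and optimal ensembles, which this does not reproduce): score every candidate on each incoming observation before letting it train on that observation, and predict with the current loss minimizer. All names below are illustrative.

    ```python
    import numpy as np

    class RunningMean:
        """A trivial online learner: predict the running mean of past targets."""
        def __init__(self):
            self.n, self.mean = 0, 0.0
        def predict(self, x):
            return self.mean
        def update(self, x, y):
            self.n += 1
            self.mean += (y - self.mean) / self.n

    class LastValue:
        """Another candidate: predict the most recent target."""
        def __init__(self):
            self.last = 0.0
        def predict(self, x):
            return self.last
        def update(self, x, y):
            self.last = y

    class OnlineCVSelector:
        """Score each candidate on incoming data *before* it trains on them
        (online cross-validation) and predict with the current minimizer."""
        def __init__(self, learners):
            self.learners = learners
            self.cum_loss = np.zeros(len(learners))
        def predict(self, x):
            return self.learners[int(np.argmin(self.cum_loss))].predict(x)
        def update(self, x, y):
            for j, m in enumerate(self.learners):
                self.cum_loss[j] += (m.predict(x) - y) ** 2  # out-of-sample loss
                m.update(x, y)

    selector = OnlineCVSelector([RunningMean(), LastValue()])
    rng = np.random.default_rng(1)
    y_prev = 0.0
    for t in range(200):
        y = 0.9 * y_prev + rng.normal()   # autocorrelated stream favors LastValue
        selector.update(None, y)
        y_prev = y
    print(selector.cum_loss)
    ```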

  9. Comparison of Sparse and Jack-knife partial least squares regression methods for variable selection

    DEFF Research Database (Denmark)

    Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald

    2013-01-01

    The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PL...

  10. A cross-validation package driving Netica with python

    Science.gov (United States)

    Fienen, Michael N.; Plant, Nathaniel G.

    2014-01-01

    Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross-validation is a technique to avoid the overfitting that results from overly complex BNs; overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented, due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and to read, rebuild, and learn BNs from data. Insights gained from cross-validation, and its implications for prediction versus description, are illustrated with a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, and that overfitting incurs computational costs as well as a reduction in prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).

  11. How (not) to train a dependency parser: The curious case of jackknifing part-of-speech taggers

    DEFF Research Database (Denmark)

    Agic, Zeljko; Schluter, Natalie

    2017-01-01

    In dependency parsing, jackknifing taggers is indiscriminately used as a simple adaptation strategy. Here, we empirically evaluate when and how (not) to use jackknifing in parsing. On 26 languages, we reveal a preference that conflicts with, and surpasses, the ubiquitous ten-folding. We show no cl...

  12. Asymptotic theory of generalized estimating equations based on jack-knife pseudo-observations

    DEFF Research Database (Denmark)

    Overgaard, Morten; Parner, Erik Thorlund; Pedersen, Jan

    2017-01-01

    A general asymptotic theory of estimates from estimating functions based on jack-knife pseudo-observations is established by requiring that the underlying estimator can be expressed as a smooth functional of the empirical distribution. Using results in p-variation norms, the theory is applied...

  13. Is the lateral jack-knife position responsible for cases of transient neurapraxia?

    Science.gov (United States)

    Molinares, Diana Margarita; Davis, Timothy T; Fung, Daniel A; Liu, John Chung-Liang; Clark, Stephen; Daily, David; Mok, James M

    2016-01-01

    The lateral jack-knife position is often used during transpsoas surgery to improve access to the spine. Postoperative neurological signs and symptoms are very common after such procedures, and the mechanism is not adequately understood. The objective of this study is to assess if the lateral jack-knife position alone can cause neurapraxia. This study compares neurological status at baseline and after positioning in the 25° right lateral jack-knife (RLJK) and the right lateral decubitus (RLD) position. Fifty healthy volunteers, ages 21 to 35, were randomly assigned to one of 2 groups: Group A (RLD) and Group B (RLJK). Motor and sensory testing was performed prior to positioning. Subjects were placed in the RLD or RLJK position, according to group assignment, for 60 minutes. Motor testing was performed immediately after this 60-minute period and again 60 minutes thereafter. Sensory testing was performed immediately after the 60-minute period and every 15 minutes thereafter, for a total of 5 times. Motor testing was performed by a physical therapist who was blinded to group assignment. A follow-up call was made 7 days after the positioning sessions. Motor deficits were observed in the nondependent lower limb in 100% of the subjects in Group B, and no motor deficits were seen in Group A. Statistically significant differences were found between the two groups. Right lateral jack-knife positioning for 60 minutes results in neurapraxia of the nondependent lower extremity. Our results support the hypothesis that jack-knife positioning alone can cause postoperative neurological symptoms.

  14. Locomotive fuel tank structural safety testing program : passenger locomotive fuel tank jackknife derailment load test.

    Science.gov (United States)

    2010-08-01

    This report presents the results of a passenger locomotive fuel tank load test simulating jackknife derailment (JD) load. The test is based on FRA requirements for locomotive fuel tanks in the Title 49, Code of Federal Regulations (CFR), Part 238, Ap...

  15. Benchmarking protein classification algorithms via supervised cross-validation

    NARCIS (Netherlands)

    Kertész-Farkas, A.; Dhir, S.; Sonego, P.; Pacurar, M.; Netoteia, S.; Nijveen, H.; Kuzniar, A.; Leunissen, J.A.M.; Kocsor, A.; Pongor, S.

    2008-01-01

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold,

  16. Scientific rigor through videogames.

    Science.gov (United States)

    Treuille, Adrien; Das, Rhiju

    2014-11-01

    Hypothesis-driven experimentation - the scientific method - can be subverted by fraud, irreproducibility, and lack of rigorous predictive tests. A robust solution to these problems may be the 'massive open laboratory' model, recently embodied in the internet-scale videogame EteRNA. Deploying similar platforms throughout biology could enforce the scientific method more broadly. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Statistical mechanics rigorous results

    CERN Document Server

    Ruelle, David

    1999-01-01

    This classic book marks the beginning of an era of vigorous mathematical progress in equilibrium statistical mechanics. Its treatment of the infinite system limit has not been superseded, and the discussion of thermodynamic functions and states remains basic for more recent work. The conceptual foundation provided by the Rigorous Results remains invaluable for the study of the spectacular developments of statistical mechanics in the second half of the 20th century.

  18. Cross-Validation of Aerobic Capacity Prediction Models in Adolescents.

    Science.gov (United States)

    Burns, Ryan Donald; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Saint-Maurice, Pedro F; Welk, Greg J; Mahar, Matthew T

    2015-08-01

    Cardiorespiratory endurance is a component of health-related fitness. FITNESSGRAM recommends the Progressive Aerobic Cardiovascular Endurance Run (PACER) or One Mile Run/Walk (1MRW) to assess cardiorespiratory endurance by estimating VO2 Peak. No research has cross-validated prediction models from both PACER and 1MRW, including the New PACER Model and PACER-Mile Equivalent (PACER-MEQ), using current standards. The purpose of this study was to cross-validate prediction models from PACER and 1MRW against measured VO2 Peak in adolescents. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years (Mean = 14.7 ± 1.3 years; 32 girls, 52 boys) who completed the PACER and 1MRW in addition to a laboratory maximal treadmill test to measure VO2 Peak. Multiple correlations among various models with measured VO2 Peak were considered moderately strong (R = .74-.78), and prediction error (RMSE) ranged from 5.95 to 8.27 ml·kg⁻¹·min⁻¹. Criterion-referenced agreement into FITNESSGRAM's Healthy Fitness Zones was considered fair-to-good among models (Kappa = 0.31-0.62; Agreement = 75.5-89.9%; F = 0.08-0.65). In conclusion, prediction models demonstrated moderately strong linear relationships with measured VO2 Peak, fair prediction error, and fair-to-good criterion-referenced agreement with measured VO2 Peak into FITNESSGRAM's Healthy Fitness Zones.

  19. Cross-validated detection of crack initiation in aerospace materials

    Science.gov (United States)

    Vanniamparambil, Prashanth A.; Cuadra, Jefferson; Guclu, Utku; Bartoli, Ivan; Kontsos, Antonios

    2014-03-01

    A cross-validated nondestructive evaluation approach was employed to detect in situ the onset of damage in an aluminum alloy compact tension specimen. The approach consisted of the coordinated use of acoustic emission, combined with infrared thermography and digital image correlation methods. Tensile loads were applied and the specimen was continuously monitored using the nondestructive approach. Crack initiation was witnessed visually and was confirmed by the characteristic load drop accompanying the ductile fracture process. The full field deformation map provided by the nondestructive approach validated the formation of a pronounced plasticity zone near the crack tip. At the time of crack initiation, a burst in the temperature field ahead of the crack tip as well as a sudden increase of the acoustic recordings were observed. Although such experiments have been attempted and reported before in the literature, the presented approach provides for the first time a cross-validated nondestructive dataset that can be used for quantitative analyses of the crack initiation information content. It further allows future development of automated procedures for real-time identification of damage precursors including the rarely explored crack incubation stage in fatigue conditions.

  20. The efficiency of modified jackknife and ridge type regression estimators: a comparison

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2008-09-01

    A common problem in multiple regression models is multicollinearity, which produces undesirable effects on the least squares estimator. To circumvent this problem, two well known estimation procedures are often suggested in the literature. They are Generalized Ridge Regression (GRR) estimation suggested by Hoerl and Kennard and Jackknifed Ridge Regression (JRR) estimation suggested by Singh et al. The GRR estimation leads to a reduction in the sampling variance, whereas JRR leads to a reduction in the bias. In this paper, we propose a new estimator, namely the Modified Jackknife Ridge Regression Estimator (MJR). It is based on a criterion that combines the ideas underlying both the GRR and JRR estimators. We have investigated standard properties of this new estimator. From a simulation study, we find that the new estimator often outperforms the LASSO, and it is superior to both GRR and JRR estimators, using the mean squared error criterion. The conditions under which the MJR estimator is better than the other two competing estimators have been investigated.

  1. Cross-validating a bidimensional mathematics anxiety scale.

    Science.gov (United States)

    Haiyan Bai

    2011-03-01

    The psychometric properties of a 14-item bidimensional Mathematics Anxiety Scale-Revised (MAS-R) were empirically cross-validated with two independent samples consisting of 647 secondary school students. An exploratory factor analysis on the scale yielded strong construct validity with a clear two-factor structure. The results from a confirmatory factor analysis indicated an excellent model-fit (χ² = 98.32, df = 62; normed fit index = .92, comparative fit index = .97; root mean square error of approximation = .04). The internal consistency (.85), test-retest reliability (.71), and interfactor correlation (.26) supported the bidimensional structure of math anxiety. Math anxiety, as measured by MAS-R, correlated negatively with student achievement scores (r = -.38), suggesting that MAS-R may be a useful tool for classroom teachers and other educational personnel tasked with identifying students at risk of reduced math achievement because of anxiety.

  2. The efficiency of different search strategies in estimating parsimony jackknife, bootstrap, and Bremer support

    Directory of Open Access Journals (Sweden)

    Müller Kai F

    2005-10-01

    Background: For parsimony analyses, the most common way to estimate confidence is by resampling plans (nonparametric bootstrap, jackknife), and Bremer support (decay indices). The recent literature reveals that parameter settings that are quite commonly employed are not those that are recommended by theoretical considerations and by previous empirical studies. The optimal search strategy to be applied during resampling was previously addressed solely via standard search strategies available in PAUP*. The question of a compromise between search extensiveness and improved support accuracy for Bremer support received even less attention. A set of experiments was conducted on different datasets to find an empirical cut-off point at which increased search extensiveness does not significantly change Bremer support and jackknife or bootstrap proportions any more. Results: For the number of replicates needed for accurate estimates of support in resampling plans, a diagram is provided that helps to address the question whether apparently different support values really differ significantly. It is shown that the use of random addition cycles and parsimony ratchet iterations during bootstrapping does not translate into higher support, nor does any extension of the search extensiveness beyond the rather moderate effort of TBR (tree bisection and reconnection) branch swapping plus saving one tree per replicate. Instead, in case of very large matrices, saving more than one shortest tree per iteration and using a strict consensus tree of these yields decreased support compared to saving only one tree. This can be interpreted as a small risk of overestimating support but should be more than compensated by other factors that counteract an enhanced type I error. With regard to Bremer support, a rule of thumb can be derived stating that not much is gained relative to the surplus computational effort when searches are extended beyond 20 ratchet iterations per

  3. An Estimator of Heavy Tail Index through the Generalized Jackknife Methodology

    Directory of Open Access Journals (Sweden)

    Weiqi Liu

    2014-01-01

    In practice, sometimes the data can be divided into several blocks but only a few of the largest observations within each block are available to estimate the heavy tail index. To address this problem, we propose a new class of estimators through the Generalized Jackknife methodology based on Qi’s estimator (2010). These estimators are proved to be asymptotically normal under suitable conditions. Compared to Hill’s estimator and Qi’s estimator, our new estimator has better asymptotic efficiency in terms of the minimum mean squared error, for a wide range of the second order shape parameters. For finite samples, our new estimator still compares favorably to Hill’s estimator and Qi’s estimator, providing stable sample paths as a function of the number of blocks, smaller estimation bias, and smaller MSE.
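
    This record builds on Qi's (2010) estimator for block data, which the sketch below does not reproduce. Purely to illustrate the generalized-jackknife idea, the following numpy sketch combines two classical Hill estimators; the affine combination cancels the leading bias only under the loudly stated simplifying assumption that the second-order parameter is ρ = −1 (so the bias of hill(k) is roughly proportional to k/n):

    ```python
    import numpy as np

    def hill(x, k):
        """Hill estimator of the tail index from the k largest observations."""
        xs = np.sort(x)[::-1]                   # descending order statistics
        return np.mean(np.log(xs[:k])) - np.log(xs[k])

    def gj_hill(x, k):
        """Generalized-jackknife combination of two Hill estimators.
        Assuming rho = -1, bias(hill(k)) ~ c*k/n, so bias halves when k
        halves and 2*hill(k//2) - hill(k) cancels the leading term."""
        return 2.0 * hill(x, k // 2) - hill(x, k)

    rng = np.random.default_rng(0)
    x = np.abs(rng.standard_t(2.0, size=5000))  # |t_2|: tail index 1/2, rho = -1
    print(hill(x, 500), gj_hill(x, 500))        # GJ version sits closer to 0.5
    ```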

  4. The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.

    Science.gov (United States)

    Rodgers, J L

    1999-10-01

    A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
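
    The taxonomy's two axes (with vs. without replacement; whole sample vs. subset) can be made concrete in a few lines of numpy; this is my rendering, not the article's code:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(loc=0.5, size=30)

    # Bootstrap: n-out-of-n sampling WITH replacement (whole-sample size).
    boot = np.array([rng.choice(x, size=x.size, replace=True).mean()
                     for _ in range(2000)])
    se_boot = boot.std(ddof=1)

    # Jackknife: delete-1 subsets, i.e. sampling n-1 WITHOUT replacement.
    jack = np.array([np.delete(x, i).mean() for i in range(x.size)])
    se_jack = np.sqrt((x.size - 1) * np.mean((jack - jack.mean()) ** 2))

    # Randomization test: permute the pooled sample WITHOUT replacement to
    # build the null distribution of a two-sample difference in means.
    y = rng.normal(loc=0.0, size=30)
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    null = np.array([(lambda p: p[:30].mean() - p[30:].mean())(rng.permutation(pooled))
                     for _ in range(2000)])
    p_value = np.mean(np.abs(null) >= abs(observed))
    print(se_boot, se_jack, p_value)
    ```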

  5. Jack-knife stretching promotes flexibility of tight hamstrings after 4 weeks: a pilot study.

    Science.gov (United States)

    Sairyo, Koichi; Kawamura, Takeshi; Mase, Yasuyoshi; Hada, Yasushi; Sakai, Toshinori; Hasebe, Kiyotaka; Dezawa, Akira

    2013-08-01

    Tight hamstrings are reported to be one of the causes of low back pain. However, there have been few reports on effective stretching procedures for tight hamstrings. The so-called jack-knife stretch, an active-static type of stretching, can efficiently increase the flexibility of tight hamstrings. The aim was to evaluate hamstring tightness before and after a 4-week stretching protocol in healthy adult volunteers and in patients aged under 18 years with low back pain. To assess hamstring tightness, we measured two parameters: (1) finger-to-floor distance (FFD) and (2) pelvis forward inclination angle (PFIA). Eight healthy adult volunteers who had no lumbar or hip problems participated in this study (mean age: 26.8 years). All lacked flexibility and their FFD were positive before the experiment. Subjects performed 2 sets of the jack-knife stretch every day for 4 weeks. One set consisted of 5 repetitions, each held for 5 s. Before and during the 4-week experiment, the FFD and PFIA of toe-touching tests were measured weekly. For the 17 sports players aged under 18, only FFD was measured. In adult volunteers, FFD was 14.1 ± 6.1 cm before the experiment and decreased to -8.1 ± 3.7 cm by the end of week 4, indicating a gain in flexibility of 22.2 cm. PFIA was 50.6 ± 8.2 degrees before the experiment and 83.8 ± 5.8 degrees after. The differences before and after the experiment were significant, indicating that the jack-knife stretch effectively improves the flexibility of tight hamstrings.

  6. A Phylogeny of the Monocots, as Inferred from rbcL and atpA Sequence Variation, and a Comparison of Methods for Calculating Jackknife and Bootstrap Values

    DEFF Research Database (Denmark)

    Davis, Jerrold I.; Stevenson, Dennis W.; Petersen, Gitte

    2004-01-01

    elements of Xyridaceae. A comparison was conducted of jackknife and bootstrap values, as computed using strict-consensus (SC) and frequency-within-replicates (FWR) approaches. Jackknife values tend to be higher than bootstrap values, and for each of these methods support values obtained with the FWR...

  7. Cross validation for the classical model of structured expert judgment

    International Nuclear Information System (INIS)

    Colson, Abigail R.; Cooke, Roger M.

    2017-01-01

    We update the 2008 TU Delft structured expert judgment database with data from 33 professionally contracted Classical Model studies conducted between 2006 and March 2015 to evaluate its performance relative to other expert aggregation models. We briefly review alternative mathematical aggregation schemes, including harmonic weighting, before focusing on linear pooling of expert judgments with equal weights and performance-based weights. Performance weighting outperforms equal weighting in all but 1 of the 33 studies in-sample. True out-of-sample validation is rarely possible for Classical Model studies, and cross validation techniques that split calibration questions into a training and test set are used instead. Performance weighting incurs an “out-of-sample penalty” and its statistical accuracy out-of-sample is lower than that of equal weighting. However, as a function of training set size, the statistical accuracy of performance-based combinations reaches 75% of the equal weight value when the training set includes 80% of calibration variables. At this point the training set is sufficiently powerful to resolve differences in individual expert performance. The information of performance-based combinations is double that of equal weighting when the training set is at least 50% of the set of calibration variables. Previous out-of-sample validation work used a Total Out-of-Sample Validity Index based on all splits of the calibration questions into training and test subsets, which is expensive to compute and includes small training sets of dubious value. As an alternative, we propose an Out-of-Sample Validity Index based on averaging the product of statistical accuracy and information over all training sets sized at 80% of the calibration set. Performance weighting outperforms equal weighting on this Out-of-Sample Validity Index in 26 of the 33 post-2006 studies; the probability of 26 or more successes on 33 trials if there were no difference between performance

  8. Cross-validation pitfalls when selecting and assessing regression and classification models.

    Science.gov (United States)

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
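
    Not the authors' cloud pipeline; just a minimal numpy sketch, under my own naming, of the core loop they argue for: repeat the V-fold split several times and average, so the tuning score does not hinge on one particular split.

    ```python
    import numpy as np

    def repeated_vfold_mse(fit, predict, X, y, v=5, repeats=10, seed=0):
        """Average test MSE over several random V-fold partitions."""
        rng = np.random.default_rng(seed)
        scores = []
        for _ in range(repeats):
            idx = rng.permutation(len(y))
            for test in np.array_split(idx, v):
                train = np.setdiff1d(idx, test)
                theta = fit(X[train], y[train])
                scores.append(np.mean((y[test] - predict(theta, X[test])) ** 2))
        return float(np.mean(scores))

    # Repeated grid search over a ridge penalty (an illustrative stand-in
    # for the paper's QSAR models).
    def ridge_fit(lam):
        return lambda X, y: np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                                            X.T @ y)

    ridge_predict = lambda theta, X: X @ theta

    rng = np.random.default_rng(3)
    X = rng.normal(size=(120, 10))
    beta = np.r_[np.ones(3), np.zeros(7)]
    y = X @ beta + rng.normal(size=120)
    grid = [0.01, 0.1, 1.0, 10.0]
    best = min(grid, key=lambda lam: repeated_vfold_mse(ridge_fit(lam),
                                                        ridge_predict, X, y))
    print("selected penalty:", best)
    ```

    As the paper stresses, a penalty selected this way should itself be assessed in an outer, repeated nested cross-validation loop rather than by reusing these same scores.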

  9. CVThresh: R Package for Level-Dependent Cross-Validation Thresholding

    Directory of Open Access Journals (Sweden)

    Donghoh Kim

    2006-04-01

    The core of the wavelet approach to nonparametric regression is thresholding of wavelet coefficients. This paper reviews a cross-validation method for the selection of the thresholding value in wavelet shrinkage of Oh, Kim, and Lee (2006), and introduces the R package CVThresh implementing details of the calculations for the procedures. This procedure is implemented by coupling a conventional cross-validation with a fast imputation method, so that it overcomes the restriction that the data length be a power of 2. It can be easily applied to the classical leave-one-out cross-validation and K-fold cross-validation. Since the procedure is computationally fast, a level-dependent cross-validation can be developed for wavelet shrinkage of data with various sparseness according to levels.

  10. Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach

    Directory of Open Access Journals (Sweden)

    Petrović Predrag

    2014-01-01

    This research aims to model Serbian current account determinants for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants, using the Jackknife Model Averaging approach, 48 different models have been estimated, where 1254 equations needed to be estimated and averaged for each of the models. The results of selected representative models indicate moderate persistence of the CA and positive influence of: fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rates, where we should emphasise: (i) a rather strong influence of relative income, (ii) the fact that the worsening of oil trade balance results in worsening of other components (probably non-oil trade balance) of CA and (iii) that the positive influence of terms of trade reveals functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, negative influence is evident in case of: relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth that, most likely, reveals high citizens' future income growth expectations, which has negative impact on the CA.

  11. Cross validation of bioelectrical impedance equations for men

    Directory of Open Access Journals (Sweden)

    Maria Fátima Glaner

    2005-06-01

    The purpose of this study was to analyze the concurrent validity of bioelectrical impedance (BIA) equations for estimating fat-free mass (FFM) in 44 men, with mean age of 24.98 ± 3.40 years and relative body fat (%BF) of 17.15 ± 6.41%. Dual energy x-ray absorptiometry was used as the reference method for %BF and FFM, and total body resistance was assessed with a Biodynamics analyzer (Model 310). The equations analyzed in this study were: two equations (Eq. 1 and 2) developed by Carvalho and Pires Neto (1998); one equation (Eq. 3) developed by Rising et al. (1991); one equation (Eq. 4) developed by Oppliger et al. (1991); and two equations (Eq. 5, for %BF < 20%, and Eq. 6, for %BF ≥ 20%) developed by Segal et al. (1988). The validation criteria adopted were those proposed by Lohman (1991). All correlations were high and significant, ranging from 0.906 (Eq. 2) to 0.981 (Eq. 6). Equations 1 to 5 significantly overestimated FFM (p < 0.001), with constant errors ranging from 1.32 kg (Eq. 5) to 5.90 kg (Eq. 4). Equation 6 met all validation criteria, with correlation = 0.981, constant error = -0.38 kg, and total error = 1.10 kg. This equation of Segal et al. (1988) for men with relative body fat ≥ 20% (Eq. 6) was the only one that showed concurrent validity, estimating…

  12. Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models

    DEFF Research Database (Denmark)

    Vehtari, Aki; Mononen, Tommi; Tolvanen, Ville

    2016-01-01

    The future predictive performance of a Bayesian model can be estimated using Bayesian cross-validation. In this article, we consider Gaussian latent variable models where the integration over the latent values is approximated using the Laplace method or expectation propagation (EP). We study...... the properties of several Bayesian leave-one-out (LOO) cross-validation approximations that in most cases can be computed with a small additional cost after forming the posterior approximation given the full data. Our main objective is to assess the accuracy of the approximative LOO cross-validation estimators...

  13. Putrefactive rigor: apparent rigor mortis due to gas distension.

    Science.gov (United States)

    Gill, James R; Landi, Kristen

    2011-09-01

    Artifacts due to decomposition may cause confusion for the initial death investigator, leading to an incorrect suspicion of foul play. Putrefaction is a microorganism-driven process that results in foul odor, skin discoloration, purge, and bloating. Various decompositional gases including methane, hydrogen sulfide, carbon dioxide, and hydrogen will cause the body to bloat. We describe 3 instances of putrefactive gas distension (bloating) that produced the appearance of inappropriate rigor, so-called putrefactive rigor. These gases may distend the body to an extent that the extremities extend and lose contact with their underlying support surface. The medicolegal investigator must recognize that this is not true rigor mortis and the body was not necessarily moved after death for this gravity-defying position to occur.

  14. Ensemble Kalman filter regularization using leave-one-out data cross-validation

    KAUST Repository

    Rayo Schiappacasse, Lautaro Jerónimo; Hoteit, Ibrahim

    2012-01-01

    In this work, the classical leave-one-out cross-validation method for selecting a regularization parameter for the Tikhonov problem is implemented within the EnKF framework. Following the original concept, the regularization parameter is selected such that it minimizes the predictive error.

  15. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    Science.gov (United States)

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
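
    The reported variation is easy to reproduce with the plain Lasso (a non-oracle method the authors also mention; scikit-learn ships neither SCAD nor the Adaptive Lasso), using illustrative sizes and effect strengths of my choosing:

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(4)
    n, p = 200, 100
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:5] = 0.2                      # sparse truth with weak, SNP-like signals
    y = X @ beta + rng.normal(size=n)

    # Re-run 10-fold CV with different random splits and count the
    # number of variables the tuned Lasso selects each time.
    counts = [int(np.sum(LassoCV(cv=KFold(10, shuffle=True, random_state=s))
                         .fit(X, y).coef_ != 0))
              for s in range(20)]
    print(min(counts), max(counts))     # the count can swing considerably
    ```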

  16. Mathematical Rigor in Introductory Physics

    Science.gov (United States)

    Vandyke, Michael; Bassichis, William

    2011-10-01

    Calculus-based introductory physics courses intended for future engineers and physicists are often designed and taught in the same fashion as those intended for students of other disciplines. A more mathematically rigorous curriculum should be more appropriate and, ultimately, more beneficial for the student in his or her future coursework. This work investigates the effects of mathematical rigor on student understanding of introductory mechanics. Using a series of diagnostic tools in conjunction with individual student course performance, a statistical analysis will be performed to examine student learning of introductory mechanics and its relation to student understanding of the underlying calculus.

  17. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.

  18. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    Science.gov (United States)

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
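
    A toy reproduction of the article's point (my construction, not the authors' code): cross-validate a simple classifier on pure noise, then build the null by re-running the entire cross-validation on permuted labels, which preserves the dependence between folds that a binomial test ignores.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def cv_accuracy(X, y, folds=5):
        """V-fold CV accuracy of a nearest-centroid classifier."""
        idx = rng.permutation(len(y))
        hits = 0
        for test in np.array_split(idx, folds):
            train = np.setdiff1d(idx, test)
            mu0 = X[train][y[train] == 0].mean(axis=0)
            mu1 = X[train][y[train] == 1].mean(axis=0)
            d0 = np.linalg.norm(X[test] - mu0, axis=1)
            d1 = np.linalg.norm(X[test] - mu1, axis=1)
            hits += int(np.sum((d1 < d0).astype(int) == y[test]))
        return hits / len(y)

    X = rng.normal(size=(40, 10))            # pure noise features
    y = np.repeat([0, 1], 20)                # labels unrelated to X
    acc = cv_accuracy(X, y)

    # Permutation test: re-run the whole CV for each shuffled labeling.
    null = np.array([cv_accuracy(X, rng.permutation(y)) for _ in range(500)])
    print(acc, np.mean(null >= acc))         # permutation p-value
    ```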

  19. A case of instantaneous rigor?

    Science.gov (United States)

    Pirch, J; Schulz, Y; Klintschar, M

    2013-09-01

    The question of whether instantaneous rigor mortis (IR), the hypothetical sudden onset of muscle stiffening upon death, actually exists has been controversially debated over the last 150 years. While modern German forensic literature rejects this concept, the contemporary British literature is more willing to embrace it. We present the case of a young woman who suffered from diabetes and who was found dead in an upright standing position with back and shoulders leaning against a punchbag and a cupboard. Rigor mortis was fully established, and livor mortis was pronounced and consistent with the position in which the body was found. After autopsy and toxicological analysis, it was concluded that death most probably occurred due to a ketoacidotic coma, with markedly increased values of glucose and lactate in the cerebrospinal fluid as well as acetone in blood and urine. Although the position of the body is most unusual, a detailed analysis revealed that it is a stable position even without rigor mortis. Therefore, this case does not further support the controversial concept of IR.

  1. [Low-dose hypobaric spinal anesthesia for anorectal surgery in jackknife position: levobupivacaine-fentanyl compared to lidocaine-fentanyl].

    Science.gov (United States)

    de Santiago, J; Santos-Yglesias, J; Girón, J; Jiménez, A; Errando, C L

    2010-11-01

    To compare the percentage of patients who were able to bypass the postoperative intensive care recovery unit after selective spinal anesthesia with lidocaine-fentanyl versus levobupivacaine-fentanyl for anorectal surgery in jackknife position. Randomized double-blind clinical trial comparing 2 groups of 30 patients classified ASA 1-2. One group received 18 mg of 0.6% lidocaine plus 10 microg of fentanyl while the other group received 3 mg of 0.1% levobupivacaine plus 10 microg of fentanyl. Intraoperative variables were time of start of surgery, maximum extension of sensory blockade, requirement for rescue analgesics, and hemodynamic events. The level of sensory blockade was recorded at 5, 10, and 15 minutes after the start of surgery and at the end of the procedure. The degrees of postoperative motor blockade and proprioception were recorded, as were the results of the Romberg test and whether or not the patient was able to bypass the postoperative recovery unit. Also noted were times of start of ambulation and discharge, complications, and postoperative satisfaction. Intraoperative variables did not differ significantly between groups, and all patients in both groups bypassed the postoperative recovery unit. Times until walking and discharge home, complications, and overall satisfaction after surgery were similar in the 2 groups. Both spinal anesthetic solutions provide effective, selective anesthesia and are associated with similar rates of recovery care unit bypass after anorectal surgery in jackknife position.

  2. Accelerating cross-validation with total variation and its application to super-resolution imaging.

    Directory of Open Access Journals (Sweden)

    Tomoyuki Obuchi

    We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ1-norm and total variation terms, which is based on a perturbative expansion utilizing the largeness of both the data dimensionality and the model. The developed formula allows us to reduce the necessary computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.

  3. Ensemble Kalman filter regularization using leave-one-out data cross-validation

    KAUST Repository

    Rayo Schiappacasse, Lautaro Jerónimo

    2012-09-19

    In this work, the classical leave-one-out cross-validation method for selecting a regularization parameter for the Tikhonov problem is implemented within the EnKF framework. Following the original concept, the regularization parameter is selected such that it minimizes the predictive error. Some ideas about the implementation, suitability and conceptual interest of the method are discussed. Finally, what will be called the data cross-validation regularized EnKF (dCVr-EnKF) is implemented in a 2D 2-phase synthetic oil reservoir experiment and the results analyzed.

  4. A Cross-Validation Study of Police Recruit Performance as Predicted by the IPI and MMPI.

    Science.gov (United States)

    Shusman, Elizabeth J.; And Others

    Validation and cross-validation studies were conducted using the Minnesota Multiphasic Personality Inventory (MMPI) and Inwald Personality Inventory (IPI) to predict job performance for 698 urban male police officers who completed a six-month training academy. Job performance criteria evaluated included absence, lateness, derelictions, negative…

  5. Efficient approximate k-fold and leave-one-out cross-validation for ridge regression

    NARCIS (Netherlands)

    Meijer, R.J.; Goeman, J.J.

    2013-01-01

    In model building and model evaluation, cross-validation is a frequently used resampling method. Unfortunately, this method can be quite time consuming. In this article, we discuss an approximation method that is much faster and can be used in generalized linear models and Cox' proportional hazards models.
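
    The article's approximation covers generalized linear models and Cox models; for plain ridge regression, though, the underlying leave-one-out shortcut is exact and fits in a few lines (a sketch assuming all coefficients are penalized and there is no intercept):

    ```python
    import numpy as np

    def ridge_loo_mse(X, y, lam):
        """Exact leave-one-out MSE for ridge regression from a single fit:
        loo_residual_i = residual_i / (1 - h_ii), where the hat matrix is
        H = X (X'X + lam*I)^{-1} X'."""
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
        resid = y - H @ y
        loo = resid / (1.0 - np.diag(H))
        return float(np.mean(loo ** 2))

    rng = np.random.default_rng(6)
    X = rng.normal(size=(50, 8))
    y = X[:, 0] + rng.normal(size=50)
    grid = [0.01, 0.1, 1.0, 10.0]
    print(min(grid, key=lambda lam: ridge_loo_mse(X, y, lam)))
    ```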

  6. Cross-validation of theoretically quantified fiber continuum generation and absolute pulse measurement by MIIPS for a broadband coherently controlled optical source

    DEFF Research Database (Denmark)

    Tu, H.; Liu, Y.; Lægsgaard, Jesper

    2012-01-01

    The predicted spectral phase of a fiber continuum pulsed source rigorously quantified by the scalar generalized nonlinear Schrödinger equation is found to be in excellent agreement with that measured by multiphoton intrapulse interference phase scan (MIIPS) with background subtraction. This cross-validation confirms the absolute pulse measurement by MIIPS and the transform-limited compression of the fiber continuum pulses by the pulse shaper performing the MIIPS measurement, and permits the subsequent coherent control on the fiber continuum pulses by this pulse shaper. The combination of the fiber continuum source with the MIIPS-integrated pulse shaper produces compressed transform-limited 9.6 fs (FWHM) pulses or arbitrarily shaped pulses at a central wavelength of 1020 nm, an average power over 100 mW, and a repetition rate of 76 MHz. In comparison to the 229-fs pump laser pulses that generate the fiber......

  7. On the use of the observation-wise k-fold operation in PCA cross-validation

    NARCIS (Netherlands)

    Saccenti, E.; Camacho, J.

    2015-01-01

    Cross-validation (CV) is a common approach for determining the optimal number of components in a principal component analysis model. To guarantee the independence between model testing and calibration, the observation-wise k-fold operation is commonly implemented in each cross-validation step.

  8. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    Science.gov (United States)

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.

  9. Realizing rigor in the mathematics classroom

    CERN Document Server

    Hull, Ted H (Henry); Balka, Don S

    2014-01-01

    Rigor put within reach! Rigor: the Common Core has made it policy, and this first-of-its-kind guide takes math teachers and leaders through the process of making it reality. Using the Proficiency Matrix as a framework, the authors offer proven strategies and practical tools for successful implementation of the CCSS mathematical practices, with rigor as a central objective. You'll learn how to define rigor in the context of each mathematical practice, and how to identify and overcome potential issues, including differentiating instruction and using data.

  10. Classroom Talk for Rigorous Reading Comprehension Instruction

    Science.gov (United States)

    Wolf, Mikyung Kim; Crosson, Amy C.; Resnick, Lauren B.

    2004-01-01

    This study examined the quality of classroom talk and its relation to academic rigor in reading-comprehension lessons. Additionally, the study aimed to characterize effective questions to support rigorous reading comprehension lessons. The data for this study included 21 reading-comprehension lessons in several elementary and middle schools from…

  11. Raman fiber-optical method for colon cancer detection: Cross-validation and outlier identification approach

    Science.gov (United States)

    Petersen, D.; Naveed, P.; Ragheb, A.; Niedieker, D.; El-Mashtoly, S. F.; Brechmann, T.; Kötting, C.; Schmiegel, W. H.; Freier, E.; Pox, C.; Gerwert, K.

    2017-06-01

    Endoscopy plays a major role in early recognition of cancer which is not externally accessible, and therewith in increasing the survival rate. Raman spectroscopic fiber-optical approaches can help to decrease the impact on the patient, increase objectivity in tissue characterization, reduce expenses and provide a significant time advantage in endoscopy. In gastroenterology, an early recognition of malign and precursor lesions is relevant. Instantaneous and precise differentiation between adenomas as precursor lesions for cancer and hyperplastic polyps on the one hand, and between high- and low-risk alterations on the other hand, is important. Raman fiber-optical measurements of colon biopsy samples taken during colonoscopy were carried out during a clinical study, and samples of adenocarcinoma (22), tubular adenomas (141), hyperplastic polyps (79) and normal tissue (101) from 151 patients were analyzed. This allows us to focus on the bioinformatic analysis and to set the stage for Raman endoscopic measurements. Since spectral differences between normal and cancerous biopsy samples are small, special care has to be taken in data analysis. Using a leave-one-patient-out cross-validation scheme, three different outlier identification methods were investigated to decrease the influence of systematic errors, such as a residual risk of misplacement of the sample and spectral dilution of marker bands (especially in cancerous tissue), and therewith optimize the experimental design. Furthermore, other validation methods such as leave-one-sample-out and leave-one-spectrum-out cross-validation schemes were compared with leave-one-patient-out cross-validation. High-risk lesions were differentiated from low-risk lesions with a sensitivity of 79%, specificity of 74% and an accuracy of 77%, and cancer from normal tissue with a sensitivity of 79%, specificity of 83% and an accuracy of 81%. Additionally applied outlier identification enabled us to improve the recognition of neoplastic biopsy samples.
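
    The leave-one-patient-out scheme used here, in which all spectra from one patient are held out together so that within-patient correlation cannot leak into the test fold, is generic enough to sketch (the names are mine, not the study's code):

    ```python
    import numpy as np

    def leave_one_patient_out(patient_ids):
        """Yield (train, test) index arrays, holding out one patient at a time."""
        patient_ids = np.asarray(patient_ids)
        for pid in np.unique(patient_ids):
            test = np.where(patient_ids == pid)[0]
            train = np.where(patient_ids != pid)[0]
            yield train, test

    # Usage with any classifier: every spectrum of the held-out patient is
    # excluded from training, unlike leave-one-spectrum-out validation.
    ids = [1, 1, 1, 2, 2, 3, 3, 3, 3]
    for train, test in leave_one_patient_out(ids):
        print(test)
    ```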

  13. A statistical method (cross-validation) for bone loss region detection after spaceflight

    Science.gov (United States)

    Zhao, Qian; Li, Wenjun; Li, Caixia; Chu, Philip W.; Kornak, John; Lang, Thomas F.

    2010-01-01

    Astronauts experience bone loss after long spaceflight missions. Identifying the specific regions that undergo the greatest losses (e.g., the proximal femur) could reveal information about the processes of bone loss in disuse and disease. Methods for detecting such regions, however, remain an open problem. This paper focuses on statistical methods to detect such regions. We perform statistical parametric mapping to obtain t-maps of changes in images, and propose a new cross-validation method to select an optimal suprathreshold for forming clusters of pixels. Once these candidate clusters are formed, we use permutation testing of longitudinal labels to identify significant changes. PMID:20632144

  14. The development and cross-validation of an MMPI typology of murderers.

    Science.gov (United States)

    Holcomb, W R; Adams, N A; Ponder, H M

    1985-06-01

    A sample of 80 male offenders charged with premeditated murder was divided into five personality types using MMPI scores. A hierarchical clustering procedure was used, with a subsequent internal cross-validation analysis using a second sample of 80 premeditated murderers. A discriminant analysis resulted in a 96.25% correct classification of subjects from the second sample into the five types. Clinical data from a mental status interview schedule supported the external validity of these types. There were significant differences among the five types in hallucinations, disorientation, hostility, depression, and paranoid thinking. Both similarities and differences of the present typology with respect to prior research were discussed, and additional research questions were suggested.

  15. Cross-validation of an employee safety climate model in Malaysia.

    Science.gov (United States)

    Bahari, Siti Fatimah; Clarke, Sharon

    2013-06-01

    Whilst substantial research has investigated the nature of safety climate, and its importance as a leading indicator of organisational safety, much of this research has been conducted with Western industrial samples. The current study focuses on the cross-validation of a safety climate model in the non-Western industrial context of Malaysian manufacturing. The first-order factorial validity of Cheyne et al.'s (1998) [Cheyne, A., Cox, S., Oliver, A., Tomas, J.M., 1998. Modelling safety climate in the prediction of levels of safety activity. Work and Stress, 12(3), 255-271] model was tested, using confirmatory factor analysis, in a Malaysian sample. Results showed that the model fit indices were below accepted levels, indicating that the original Cheyne et al. (1998) safety climate model was not supported. An alternative three-factor model was developed using exploratory factor analysis. Although these findings are not consistent with previously reported cross-validation studies, we argue that previous studies have focused on validation across Western samples, and that the current study demonstrates the need to take account of cultural factors in the development of safety climate models intended for use in non-Western contexts. The results have important implications for the transferability of existing safety climate models across cultures (for example, in global organisations) and highlight the need for future research to examine cross-cultural issues in relation to safety climate. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.

  16. Rigorous Science: a How-To Guide

    Directory of Open Access Journals (Sweden)

    Arturo Casadevall

    2016-11-01

    Proposals to improve the reproducibility of biomedical research have emphasized scientific rigor. Although the word “rigor” is widely used, there has been little specific discussion as to what it means and how it can be achieved. We suggest that scientific rigor combines elements of mathematics, logic, philosophy, and ethics. We propose a framework for rigor that includes redundant experimental design, sound statistical analysis, recognition of error, avoidance of logical fallacies, and intellectual honesty. These elements lead to five actionable recommendations for research education.

  17. Asymptotic optimality and efficient computation of the leave-subject-out cross-validation

    KAUST Repository

    Xu, Ganggang

    2012-12-01

    Although the leave-subject-out cross-validation (CV) has been widely used in practice for tuning parameter selection for various nonparametric and semiparametric models of longitudinal data, its theoretical property is unknown and solving the associated optimization problem is computationally expensive, especially when there are multiple tuning parameters. In this paper, by focusing on the penalized spline method, we show that the leave-subject-out CV is optimal in the sense that it is asymptotically equivalent to the empirical squared error loss function minimization. An efficient Newton-type algorithm is developed to compute the penalty parameters that optimize the CV criterion. Simulated and real data are used to demonstrate the effectiveness of the leave-subject-out CV in selecting both the penalty parameters and the working correlation matrix. © 2012 Institute of Mathematical Statistics.

  18. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    Science.gov (United States)

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problem from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.

  20. Evaluation of Analysis by Cross-Validation, Part II: Diagnostic and Optimization of Analysis Error Covariance

    Directory of Open Access Journals (Sweden)

    Richard Ménard

    2018-02-01

    We present a general theory of estimation of analysis error covariances based on cross-validation, as well as a geometric interpretation of the method. In particular, we use the variance of passive observation-minus-analysis residuals and show that the true analysis error variance can be estimated without relying on the optimality assumption. This approach is used to obtain near-optimal analyses that are then used to evaluate the air quality analysis error using several different methods at active and passive observation sites. We compare the estimates obtained with the method of Hollingsworth-Lönnberg, that of Desroziers et al., a new diagnostic we developed, and the perceived analysis error computed from the analysis scheme, and conclude that, as long as the analysis is near optimal, all estimates agree within a certain error margin.

  1. Diversity shrinkage: Cross-validating pareto-optimal weights to enhance diversity via hiring practices.

    Science.gov (United States)

    Song, Q Chelsea; Wee, Serena; Newman, Daniel A

    2017-12-01

    To reduce adverse impact potential and improve diversity outcomes from personnel selection, one promising technique is De Corte, Lievens, and Sackett's (2007) Pareto-optimal weighting strategy. De Corte et al.'s strategy has been demonstrated on (a) a composite of cognitive and noncognitive (e.g., personality) tests (De Corte, Lievens, & Sackett, 2008) and (b) a composite of specific cognitive ability subtests (Wee, Newman, & Joseph, 2014). Both studies illustrated how Pareto weighting (in contrast to unit weighting) could lead to substantial improvement in diversity outcomes (i.e., diversity improvement), sometimes more than doubling the number of job offers for minority applicants. The current work addresses a key limitation of the technique: the possibility of shrinkage, especially diversity shrinkage, in the Pareto-optimal solutions. Using Monte Carlo simulations, sample size and predictor combinations were varied and cross-validated Pareto-optimal solutions were obtained. Although diversity shrinkage was sizable for a composite of cognitive and noncognitive predictors when sample size was at or below 500, diversity shrinkage was typically negligible for a composite of specific cognitive subtest predictors when sample size was at least 100. Diversity shrinkage was larger when the Pareto-optimal solution suggested substantial diversity improvement. When sample size was at least 100, cross-validated Pareto-optimal weights typically outperformed unit weights, suggesting that diversity improvement is often possible despite diversity shrinkage. Implications for Pareto-optimal weighting, adverse impact, sample size of validation studies, and optimizing the diversity-job performance tradeoff are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. SU-E-T-231: Cross-Validation of 3D Gamma Comparison Tools

    International Nuclear Information System (INIS)

    Alexander, KM; Jechel, C; Pinter, C; Lasso, A; Fichtinger, G; Salomons, G; Schreiner, LJ

    2015-01-01

    Purpose: Moving the computational analysis for 3D gel dosimetry into the 3D Slicer (www.slicer.org) environment has made gel dosimetry more clinically accessible. To ensure accuracy, we cross-validate the 3D gamma comparison module in 3D Slicer with an independently developed algorithm, using simulated and measured dose distributions. Methods: Two reference dose distributions were generated using the Varian Eclipse treatment planning system. The first distribution consisted of a four-field box irradiation delivered to a plastic water phantom and the second, a VMAT plan delivered to a gel dosimeter phantom. The first reference distribution was modified within Eclipse to create an evaluated dose distribution by spatially shifting one field by 3 mm, increasing the monitor units of the second field, applying a dynamic wedge for the third field, and leaving the fourth field unchanged. The VMAT plan was delivered to a gel dosimeter, and the evaluated dose in the gel was calculated from optical CT measurements. Results from the gamma comparison tool built into the SlicerRT toolbox were compared to results from our in-house gamma algorithm implemented in Matlab (via MatlabBridge in 3D Slicer). The effects of noise, resolution and the exchange of reference and evaluated designations on the gamma comparison were also examined. Results: Perfect agreement was found between the gamma results obtained using the SlicerRT tool and our Matlab implementation for both the four-field box and gel datasets. The behaviour of the SlicerRT comparison with respect to changes in noise, resolution and the role of the reference and evaluated dose distributions was consistent with previous findings. Conclusion: Two independently developed gamma comparison tools have been cross-validated and found to be identical. As we transition our gel dosimetry analysis from Matlab to 3D Slicer, this validation serves as an important test towards ensuring the consistency of dose comparisons using the 3D Slicer environment.

  3. Cross-validation and Peeling Strategies for Survival Bump Hunting using Recursive Peeling Methods

    Science.gov (United States)

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a framework to build a survival/risk bump hunting model with a censored time-to-event response. Our Survival Bump Hunting (SBH) method is based on a recursive peeling procedure that uses a specific survival peeling criterion derived from non/semi-parametric statistics such as the hazard ratio, the log-rank test or the Nelson-Aalen estimator. To optimize the tuning parameter of the model and validate it, we introduce an objective function based on survival or prediction-error statistics, such as the log-rank test and the concordance error rate. We also describe two alternative cross-validation techniques adapted to the joint task of decision-rule making by recursive peeling and survival estimation. Numerical analyses show the importance of replicated cross-validation and the differences between criteria and techniques in both low- and high-dimensional settings. Although several non-parametric survival models exist, none addresses the problem of directly identifying local extrema. We show how SBH efficiently estimates extreme survival/risk subgroups, unlike other models. This provides an insight into the behavior of commonly used models and suggests alternatives to be adopted in practice. Finally, our SBH framework was applied to a clinical dataset. In it, we identified subsets of patients characterized by clinical and demographic covariates with a distinct extreme survival outcome, for which tailored medical interventions could be made. An R package, PRIMsrc (Patient Rule Induction Method in Survival, Regression and Classification settings), is available on CRAN (Comprehensive R Archive Network) and GitHub. PMID:27034730

  4. Experimental evaluation of rigor mortis. V. Effect of various temperatures on the evolution of rigor mortis.

    Science.gov (United States)

    Krompecher, T

    1981-01-01

    Objective measurements were carried out to study the evolution of rigor mortis in rats at various temperatures. Our experiments showed that: (1) at 6 degrees C rigor mortis reaches full development between 48 and 60 hours post mortem, and is resolved at 168 hours post mortem; (2) at 24 degrees C rigor mortis reaches full development at 5 hours post mortem, and is resolved at 16 hours post mortem; (3) at 37 degrees C rigor mortis reaches full development at 3 hours post mortem, and is resolved at 6 hours post mortem; (4) the intensity of rigor mortis grows with increasing temperature (difference between values obtained at 24 degrees C and 37 degrees C); and (5) at 6 degrees C a "cold rigidity" was found, in addition to and independent of rigor mortis.

  5. Demographic analysis, a comparison of the jackknife and bootstrap methods, and predation projection: a case study of Chrysopa pallens (Neuroptera: Chrysopidae).

    Science.gov (United States)

    Yu, Ling-Yuan; Chen, Zhen-Zhen; Zheng, Fang-Qiang; Shi, Ai-Ju; Guo, Ting-Ting; Yeh, Bao-Hua; Chi, Hsin; Xu, Yong-Yu

    2013-02-01

    The life table of the green lacewing, Chrysopa pallens (Rambur), was studied at 22 degrees C, a photoperiod of 15:9 (L:D) h, and 80% relative humidity in the laboratory. The raw data were analyzed using the age-stage, two-sex life table. The intrinsic rate of increase (r), the finite rate of increase (lambda), the net reproduction rate (R0), and the mean generation time (T) of Ch. pallens were 0.1258 d(-1), 1.1340 d(-1), 241.4 offspring and 43.6 d, respectively. For the estimation of the means, variances, and SEs of the population parameters, we compared the jackknife and bootstrap techniques. Although similar values of the means and SEs were obtained with both techniques, significant differences were observed in the frequency distribution and variances of all parameters. The jackknife technique will result in a zero net reproductive rate upon the omission of a male, an immature death, or a nonreproductive female. This result represents, however, a contradiction because an intrinsic rate of increase exists in this situation. Therefore, we suggest that the jackknife technique should not be used for the estimation of population parameters. In predator-prey interactions, the nonpredatory egg and pupal stages of the predator are time refuges for the prey, and the pest population can grow during these times. In this study, a population projection based on the age-stage, two-sex life table is used to determine the optimal interval between releases to fill the predation gaps and maintain the predatory capacity of the control agent.
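
    The jackknife/bootstrap contrast drawn in this record can be probed on a toy derived statistic. In the sketch below, the mean offspring count stands in for the net reproductive rate, and the fabricated zeros mimic males and immature deaths; everything here is an illustrative assumption, not the authors' life-table computation.

      # Toy comparison of jackknife and bootstrap estimates of a population
      # parameter (here simply the mean offspring count, standing in for R0).
      # Data are fabricated; the zeros mimic males and immature deaths.
      import numpy as np

      rng = np.random.default_rng(1)
      offspring = np.concatenate([rng.poisson(240, 40), np.zeros(20)])
      theta = offspring.mean()

      # Delete-one jackknife: n replications, each omitting one individual.
      n = len(offspring)
      jack = np.array([np.delete(offspring, i).mean() for i in range(n)])
      jack_se = np.sqrt((n - 1) / n * ((jack - jack.mean()) ** 2).sum())

      # Bootstrap: resample individuals with replacement.
      boot = np.array([rng.choice(offspring, n, replace=True).mean()
                       for _ in range(2000)])

      print("estimate %.1f, jackknife SE %.2f, bootstrap SE %.2f"
            % (theta, jack_se, boot.std(ddof=1)))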

  6. A jackknife approach to quantifying single-trial correlation between covariance-based metrics undefined on a single-trial basis.

    Science.gov (United States)

    Richter, Craig G; Thompson, William H; Bosman, Conrado A; Fries, Pascal

    2015-07-01

    The quantification of covariance between neuronal activities (functional connectivity) requires the observation of correlated changes and therefore multiple observations. The strength of such neuronal correlations may itself undergo moment-by-moment fluctuations, which might e.g. lead to fluctuations in single-trial metrics such as reaction time (RT), or may co-fluctuate with the correlation between activity in other brain areas. Yet, quantifying the relation between moment-by-moment co-fluctuations in neuronal correlations is precluded by the fact that neuronal correlations are not defined per single observation. The proposed solution quantifies this relation by first calculating neuronal correlations for all leave-one-out subsamples (i.e. the jackknife replications of all observations) and then correlating these values. Because the correlation is calculated between jackknife replications, we address this approach as jackknife correlation (JC). First, we demonstrate the equivalence of JC to conventional correlation for simulated paired data that are defined per observation and therefore allow the calculation of conventional correlation. While the JC recovers the conventional correlation precisely, alternative approaches, like sorting-and-binning, result in detrimental effects of the analysis parameters. We then explore the case of relating two spectral correlation metrics, like coherence, that require multiple observation epochs, where the only viable alternative analysis approaches are based on some form of epoch subdivision, which results in reduced spectral resolution and poor spectral estimators. We show that JC outperforms these approaches, particularly for short epoch lengths, without sacrificing any spectral resolution. Finally, we note that the JC can be applied to relate fluctuations in any smooth metric that is not defined on single observations. Copyright © 2015. Published by Elsevier Inc.
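
    The JC recipe itself is compact: compute the metric on every leave-one-out subsample and then correlate the replications. A minimal sketch for the simplest case, where the per-observation metric is a mean and the JC provably reproduces the conventional Pearson correlation, might look as follows (all variable names invented):

      # Sketch of jackknife correlation (JC): correlate leave-one-out
      # replications of two statistics. For per-observation data, the JC of
      # the leave-one-out means equals the conventional Pearson correlation.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 200
      x = rng.normal(size=n)
      y = 0.6 * x + 0.8 * rng.normal(size=n)   # paired observations

      def loo_means(v):
          # leave-one-out replications of the mean: (sum - v_i) / (n - 1)
          return (v.sum() - v) / (len(v) - 1)

      jc = np.corrcoef(loo_means(x), loo_means(y))[0, 1]
      conventional = np.corrcoef(x, y)[0, 1]
      print("JC %.4f vs conventional %.4f" % (jc, conventional))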

  7. "Rigor mortis" in a live patient.

    Science.gov (United States)

    Chakravarthy, Murali

    2010-03-01

    Rigor mortis is conventionally a postmortem change. Its occurrence suggests that death occurred at least a few hours earlier. The authors report a case of "rigor mortis" in a live patient after cardiac surgery. The factors likely to have predisposed to such premortem muscle stiffening in the reported patient are an intense low-cardiac-output state, the use of unusually high doses of inotropic and vasopressor agents, and probable sepsis. Such an event may be of importance when determining the time of death in individuals such as the one described in this report. It also suggests that patients with muscle stiffening require careful examination prior to the declaration of death. This report is being published to point out the controversies that might arise from muscle stiffening, which should not always be termed rigor mortis and/or postmortem.

  8. [Rigor mortis -- a definite sign of death?].

    Science.gov (United States)

    Heller, A R; Müller, M P; Frank, M D; Dressler, J

    2005-04-01

    In past years an ongoing controversial debate has existed in Germany regarding the quality of the coroner's inquest and the declaration of death by physicians. We report the case of a 90-year-old female who was found after an unknown interval following a suicide attempt with benzodiazepines. The examination of the patient showed livores (mortis?) on the left forearm and left lower leg. Moreover, rigor (mortis?) of the left arm was apparent, which prevented arm flexion and extension. The hypothermic patient with insufficient respiration was intubated and mechanically ventilated. Chest compressions were not performed, because central pulses were (hardly) palpable and a sinus bradycardia of 45/min (AV block 2nd degree and sole premature ventricular complexes) was present. After placement of an intravenous line (17 G, external jugular vein), the hemodynamic situation was stabilized with intermittent boli of epinephrine and with sodium bicarbonate. With improved circulation, livores and rigor disappeared. In the present case a minimal central circulation was noted, which could be stabilized despite the presence of supposedly certain signs of death (livores and rigor mortis). Considering the finding of an abrogated peripheral perfusion (livores), we postulate a centripetal collapse of glycogen and ATP supply in the patient's left arm (rigor), which was restored after resuscitation and reperfusion. Thus, it appears that livores and rigor are not sensitive enough to exclude a vita minima, in particular in hypothermic patients with intoxications. Consequently, a careful ABC check should be performed even in the presence of apparently certain signs of death, to avoid underdiagnosing a vita minima. Additional ECG monitoring is required to reduce the rate of false-positive declarations of death. To what extent basic life support by paramedics should commence when rigor and livores are present, pending a physician's DNR order, deserves further discussion.

  9. Cross-validation and hypothesis testing in neuroimaging: An irenic comment on the exchange between Friston and Lindquist et al.

    Science.gov (United States)

    Reiss, Philip T

    2015-08-01

    The "ten ironic rules for statistical reviewers" presented by Friston (2012) prompted a rebuttal by Lindquist et al. (2013), which was followed by a rejoinder by Friston (2013). A key issue left unresolved in this discussion is the use of cross-validation to test the significance of predictive analyses. This note discusses the role that cross-validation-based and related hypothesis tests have come to play in modern data analyses, in neuroimaging and other fields. It is shown that such tests need not be suboptimal and can fill otherwise-unmet inferential needs. Copyright © 2015 Elsevier Inc. All rights reserved.
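
    One concrete form of the cross-validation-based hypothesis tests discussed here is a permutation test of the cross-validated score, in which refitting under label permutations yields a null distribution. The sketch below uses scikit-learn's permutation_test_score on synthetic data; the classifier and data model are assumptions for illustration.

      # Sketch of a permutation test for a cross-validated accuracy score,
      # one common way to attach a p-value to a predictive analysis.
      import numpy as np
      from sklearn.model_selection import permutation_test_score, StratifiedKFold
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      X = rng.normal(size=(100, 10))
      y = (X[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(int)  # weak signal

      score, perm_scores, pvalue = permutation_test_score(
          LogisticRegression(max_iter=1000), X, y,
          cv=StratifiedKFold(5), n_permutations=200, random_state=0)
      print("CV accuracy %.3f, permutation p-value %.3f" % (score, pvalue))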

  10. An ultramicroscopic study on rigor mortis.

    Science.gov (United States)

    Suzuki, T

    1976-01-01

    Gastrocnemius muscles taken from decapitated mice at various intervals after death, and from mice killed by 2,4-dinitrophenol or mono-iodoacetic acid injection to induce rigor mortis soon after death, were observed by electron microscopy. The prominent appearance of many fine cross-striations in the myofibrils (occurring about every 400 Å) was considered to be characteristic of rigor mortis. These striations were caused by minute granules studded along the surfaces of both thick and thin filaments; they appeared to be the bridges connecting the two kinds of filaments and accounted for the hardness and rigidity of the muscle.

  11. The Rigor Mortis of Education: Rigor Is Required in a Dying Educational System

    Science.gov (United States)

    Mixon, Jason; Stuart, Jerry

    2009-01-01

    In an effort to answer the "Educational Call to Arms", our national public schools have turned to Advanced Placement (AP) courses as the predominant vehicle used to address the lack of academic rigor in our public high schools. Advanced Placement is believed by many to provide students with the rigor and work ethic necessary to…

  12. Cross Validation of Rain Drop Size Distribution between GPM and Ground Based Polarmetric radar

    Science.gov (United States)

    Chandra, C. V.; Biswas, S.; Le, M.; Chen, H.

    2017-12-01

    Dual-frequency precipitation radar (DPR) on board the Global Precipitation Measurement (GPM) core satellite has reflectivity measurements at two independent frequencies, Ku- and Ka-band. Dual-frequency retrieval algorithms have traditionally been developed through forward, backward, and recursive approaches. However, these algorithms suffer from a "dual-value" problem when they retrieve the medium volume diameter from the dual-frequency ratio (DFR) in the rain region. To this end, a hybrid method has been proposed to perform raindrop size distribution (DSD) retrieval for GPM using a linear constraint of the DSD along the rain profile to avoid the "dual-value" problem (Le and Chandrasekar, 2015). In the current GPM level 2 algorithm (Iguchi et al., 2017, Algorithm Theoretical Basis Document), the Solver module retrieves a vertical profile of the drop size distribution from dual-frequency observations and path-integrated attenuations; the algorithm details can be found in Seto et al. (2013). On the other hand, ground-based polarimetric radars have long been used to estimate drop size distributions (e.g., Gorgucci et al., 2002). In addition, coincident GPM and ground-based observations have been cross-validated using careful overpass analysis. In this paper, we perform cross-validation on raindrop size distribution retrievals from three sources, namely the hybrid method, the standard products from the Solver module, and DSD retrievals from ground polarimetric radars. The results are presented from two NEXRAD radars located in Dallas-Fort Worth, Texas (the KFWS radar) and Melbourne, Florida (the KMLB radar). The results demonstrate the ability of DPR observations to produce DSD estimates, which can be used subsequently to generate global DSD maps. References: Seto, S., T. Iguchi, T. Oki, 2013: The basic performance of a precipitation retrieval algorithm for the Global Precipitation Measurement mission's single/dual-frequency radar measurements. IEEE Transactions on Geoscience and Remote Sensing.

  13. Trends: Rigor Mortis in the Arts.

    Science.gov (United States)

    Blodget, Alden S.

    1991-01-01

    Outlines how past art education provided a refuge for students from the rigors of other academic subjects. Observes that in recent years art education has become "discipline based." Argues that art educators need to reaffirm their commitment to a humanistic way of knowing. (KM)

  14. Photoconductivity of amorphous silicon-rigorous modelling

    International Nuclear Information System (INIS)

    Brada, P.; Schauer, F.

    1991-01-01

    It is our great pleasure to express our gratitude to Prof. Grigorovici, the pioneer of the exciting field of the amorphous state, with our modest contribution to this area. In this paper we present an outline of a rigorous modelling program for the steady-state photoconductivity in amorphous silicon and related materials. (Author)

  15. Test-retest reliability and cross validation of the functioning everyday with a wheelchair instrument.

    Science.gov (United States)

    Mills, Tamara L; Holm, Margo B; Schmeler, Mark

    2007-01-01

    The purpose of this study was to establish the test-retest reliability and content validity of an outcomes tool designed to measure the effectiveness of seating-mobility interventions on the functional performance of individuals who use wheelchairs or scooters as their primary seating-mobility device. The instrument, Functioning Everyday With a Wheelchair (FEW), is a questionnaire designed to measure perceived user function related to wheelchair/scooter use. Using consumer-generated items, FEW Beta Version 1.0 was developed and its test-retest reliability was established. Cross-validation of FEW Beta Version 1.0 was then carried out with five samples of seating-mobility users to establish content validity. Based on the content validity study, FEW Version 2.0 was developed and administered to seating-mobility consumers to examine its test-retest reliability. FEW Beta Version 1.0 yielded an intraclass correlation coefficient ICC(3,k) of .92, and the content validity results revealed that FEW Beta Version 1.0 captured 55% of the seating-mobility goals reported by consumers across the five samples. FEW Version 2.0 yielded ICC(3,k) = .86, and the content validity of FEW Version 2.0 was confirmed. FEW Beta Version 1.0 and FEW Version 2.0 were highly stable in their measurement of participants' seating-mobility goals over a 1-week interval.

  16. Sound quality indicators for urban places in Paris cross-validated by Milan data.

    Science.gov (United States)

    Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre

    2015-10-01

    A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources, and the presence time ratio of sources that do not emerge from the background. The sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross-validation of the quality models extracted from the Paris data was carried out by conducting the same survey in Milan. The proposed general sound quality model is correlated with the actual perceived sound quality (72%). Another model, without visual amenity and familiarity, is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed with Kohonen's artificial neural network algorithm, and seven class-specific models were developed. These specific models attribute more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments as assessed by Italian participants.

  17. Application of Monte Carlo cross-validation to identify pathway cross-talk in neonatal sepsis.

    Science.gov (United States)

    Zhang, Yuxia; Liu, Cui; Wang, Jingna; Li, Xingxia

    2018-03-01

    To explore genetic pathway cross-talk in neonates with sepsis, an integrated approach was used in this paper. Genetic profiling and biological signaling pathway information were first integrated to explore the potential relationships between pathways and the genes differentially expressed between normal uninfected neonates and neonates with sepsis. For each pathway, a score was obtained from the genetic expression data by quantitatively analyzing the pathway cross-talk. The paired pathways with high cross-talk were identified by random forest classification. The purpose of the work was to find the pairs of pathways best able to discriminate sepsis samples from normal samples. The results identified 10 pairs of pathways that were probably able to discriminate neonates with sepsis from normal uninfected neonates. Among them, the best two paired pathways were identified through analysis of the extensive literature. Impact statement: To find the pairs of pathways best able to discriminate sepsis samples from normal samples, a random forest classifier, the discriminating scores obtained from the differentially expressed genes of significantly associated paired pathways, and Monte Carlo cross-validation were applied in this paper. Ten pairs of pathways were probably able to discriminate neonates with sepsis from normal uninfected neonates. Among them, the best two paired pathways ((7) IL-6 Signaling and Phospholipase C (PLC) Signaling; (8) Glucocorticoid Receptor (GR) Signaling and Dendritic Cell Maturation) were identified through analysis of the extensive literature.
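
    Monte Carlo cross-validation, as used in this study, repeatedly draws random train/test splits rather than a fixed fold partition. A minimal sketch with scikit-learn's ShuffleSplit and a random-forest classifier follows; the sample sizes and features are invented stand-ins for the pathway scores.

      # Sketch of Monte Carlo cross-validation: many random train/test splits,
      # with the classification score averaged across repetitions.
      import numpy as np
      from sklearn.model_selection import ShuffleSplit, cross_val_score
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(4)
      X = rng.normal(size=(60, 20))            # toy pathway cross-talk scores
      y = np.repeat([0, 1], 30)                # sepsis vs. normal (toy labels)

      mccv = ShuffleSplit(n_splits=100, test_size=0.3, random_state=0)
      scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=mccv)
      print("MCCV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))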

  18. Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Vane, Zachary Phillips; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2017-07-01

    Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of a supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find ADMM to demonstrate empirical advantages through consistently lower errors and faster computational times.
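
    The cross-validated choice of the regularization constant described here can be sketched with an off-the-shelf LASSO solver on a random underdetermined system; the named solvers and the polynomial chaos structure of the paper are not reproduced.

      # Sketch: choose the LASSO regularization constant by cross-validation
      # on an underdetermined linear system, as in sparse recovery.
      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(5)
      A = rng.normal(size=(80, 400))           # underdetermined measurement matrix
      x_true = np.zeros(400)
      x_true[:8] = rng.normal(size=8)          # sparse ground-truth coefficients
      b = A @ x_true + 0.01 * rng.normal(size=80)

      model = LassoCV(cv=5, n_alphas=50, random_state=0).fit(A, b)
      print("selected alpha %.4g, recovered nonzeros %d"
            % (model.alpha_, (np.abs(model.coef_) > 1e-6).sum()))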

  19. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    Science.gov (United States)

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite-sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), Bayesian information criterion (BIC) and Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and a smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
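
    Tuning a sparse logistic regression by maximizing the cross-validated AUC can be sketched as below. Note that scikit-learn does not provide the MCP penalty, so an L1 penalty stands in for it here, and the data are synthetic.

      # Sketch: select the penalty strength of a sparse logistic regression by
      # maximizing cross-validated AUC (L1 stands in for the MCP penalty).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score, StratifiedKFold

      rng = np.random.default_rng(6)
      X = rng.normal(size=(200, 100))
      y = (X[:, :3].sum(axis=1) + rng.normal(size=200) > 0).astype(int)

      best = None
      for C in np.logspace(-2, 2, 9):
          clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
          auc = cross_val_score(clf, X, y, cv=StratifiedKFold(5),
                                scoring="roc_auc").mean()
          if best is None or auc > best[1]:
              best = (C, auc)
      print("best C %.3g with CV-AUC %.3f" % best)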

  20. Assessing behavioural changes in ALS: cross-validation of ALS-specific measures.

    Science.gov (United States)

    Pinto-Grau, Marta; Costello, Emmet; O'Connor, Sarah; Elamin, Marwa; Burke, Tom; Heverin, Mark; Pender, Niall; Hardiman, Orla

    2017-07-01

    The Beaumont Behavioural Inventory (BBI) is a behavioural proxy report for the assessment of behavioural changes in ALS. This tool has been validated against the FrSBe, a non-ALS-specific behavioural assessment, and further comparison of the BBI against a disease-specific tool was considered. This study cross-validates the BBI against the ALS-FTD-Q. Sixty ALS patients, 8% also meeting criteria for FTD, were recruited. All patients were evaluated using the BBI and the ALS-FTD-Q, completed by a carer. Correlational analysis was performed to assess construct validity. Precision, sensitivity, specificity, and overall accuracy of the BBI when compared to the ALS-FTD-Q were obtained. The mean score of the whole sample on the BBI was 11.45 ± 13.06. ALS-FTD patients scored significantly higher than non-demented ALS patients (31.6 ± 14.64 vs. 9.62 ± 11.38; p < 0.001). A strong correlation between the BBI and the ALS-FTD-Q was observed (r = 0.807, p < 0.001), supporting the validity of the BBI against the ALS-FTD-Q. Good construct validity has been further confirmed when the BBI is compared to an ALS-specific tool. Furthermore, the BBI is a more comprehensive behavioural assessment for ALS, as it measures the whole behavioural spectrum in this condition.

  1. A Cross-Validation Study of the Kirton Adaption-Innovation Inventory in Three Research and Development Organizations.

    Science.gov (United States)

    Keller, Robert T.; Holland, Winford E.

    1979-01-01

    A cross-validation study of the Kirton Adaption-Innovation Inventory (KAI) was conducted with 256 professional employees from three applied research and development organizations. The KAI correlated well with both direct and indirect measures of innovativeness in all three organizations. (Author/MH)

  2. Development and Cross-Validation of Equation for Estimating Percent Body Fat of Korean Adults According to Body Mass Index

    Directory of Open Access Journals (Sweden)

    Hoyong Sung

    2017-06-01

    Background: Using BMI as an independent variable is the easiest way to estimate percent body fat. Thus far, few studies have investigated the development and cross-validation of an equation for estimating the percent body fat of Korean adults according to the BMI. The goals of this study were the development and cross-validation of an equation for estimating the percent fat of representative Korean adults using the BMI. Methods: Samples were obtained from the Korea National Health and Nutrition Examination Survey between 2008 and 2011. The samples from 2008-2009 and 2010-2011 were labeled as the validation group (n=10,624) and the cross-validation group (n=8,291), respectively. The percent fat was measured using dual-energy X-ray absorptiometry, and the body mass index, gender, and age were included as independent variables to estimate the measured percent fat. The coefficient of determination (R²), standard error of estimation (SEE), and total error (TE) were calculated to examine the accuracy of the developed equation. Results: The cross-validated R² was 0.731 for Model 1 and 0.735 for Model 2. The SEE was 3.978 for Model 1 and 3.951 for Model 2. The equations developed in this study are more accurate for estimating the percent fat of the cross-validation group than those previously published by other researchers. Conclusion: The newly developed equations are comparatively accurate for the estimation of the percent fat of Korean adults.

  3. Accelerating Biomedical Discoveries through Rigor and Transparency.

    Science.gov (United States)

    Hewitt, Judith A; Brown, Liliana L; Murphy, Stephanie J; Grieder, Franziska; Silberberg, Shai D

    2017-07-01

    Difficulties in reproducing published research findings have garnered a lot of press in recent years. As a funder of biomedical research, the National Institutes of Health (NIH) has taken measures to address underlying causes of low reproducibility. Extensive deliberations resulted in a policy, released in 2015, to enhance reproducibility through rigor and transparency. We briefly explain what led to the policy, describe its elements, provide examples and resources for the biomedical research community, and discuss the potential impact of the policy on translatability with a focus on research using animal models. Importantly, while increased attention to rigor and transparency may lead to an increase in the number of laboratory animals used in the near term, it will lead to more efficient and productive use of such resources in the long run. The translational value of animal studies will be improved through more rigorous assessment of experimental variables and data, leading to better assessments of the translational potential of animal models, for the benefit of the research community and society. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  4. Cross-validity of a portable glucose capillary monitors in relation to enzymatic spectrophotometer methods

    Directory of Open Access Journals (Sweden)

    William Alves Lima

    2006-09-01

    Glucose is an important substrate utilized during exercise, and accurate measurement of glucose is vital to obtain trustworthy results. Enzymatic spectrophotometric methods are generally considered the "gold standard" laboratory procedure for measuring glucose (GEnz), but they are time consuming, costly, and inappropriate for large-scale field testing. Compact and portable glucose monitors (GAccu) are quick and easy methods to assess glucose in large numbers of subjects. This study therefore aimed to test the cross-validity of GAccu. The sample was composed of 107 men (age = 35.4±10.7 years; stature = 168.4±6.9 cm; body mass = 73.4±11.2 kg; %fat = 20.9±8.3%, by dual-energy X-ray absorptiometry). Blood for measuring fasting glucose was taken from the basilar vein (GEnz, Bioplus: Bio-2000) and from the ring finger (GAccu: Accu-Chek© Advantage©) after a 12-hour overnight fast. GEnz was used as the criterion for cross-validity. Paired t-tests showed differences (p < 0.05) between the two methods.

  5. Vascular Adaptation: Pattern Formation and Cross Validation between an Agent Based Model and a Dynamical System.

    Science.gov (United States)

    Garbey, Marc; Casarin, Stefano; Berceli, Scott A

    2017-09-21

    Myocardial infarction is the global leading cause of mortality (Go et al., 2014). Coronary artery occlusion is its main etiology, and it is commonly treated by Coronary Artery Bypass Graft (CABG) surgery (Wilson et al., 2007). The long-term outcome remains unsatisfactory (Benedetto, 2016), as the graft faces the phenomenon of restenosis after surgery, which consists of re-occlusion of the lumen and usually requires secondary intervention even within one year after the initial surgery (Harskamp, 2013). In this work, we propose an extensive study of the restenosis phenomenon by implementing two mathematical models previously developed by our group: a heuristic Dynamical System (DS) (Garbey and Berceli, 2013) and a stochastic Agent Based Model (ABM) (Garbey et al., 2015). With extensive use of the ABM, we retrieved the pattern formations of the cellular events that mainly drive the restenosis, focusing especially on mitosis in the intima, caused by alteration in shear stress, and mitosis in the media, fostered by alteration in wall tension. A deep understanding of the elements at the base of restenosis is indeed crucial in order to improve the final outcome of vein graft bypass. We also brought the ABM closer to physiological reality by abating its original assumption of circumferential symmetry. This allowed us to finely replicate the trigger event of restenosis, i.e., the loss of the endothelium in the early stage of post-surgical follow-up (Roubos et al., 1995), and to simulate the encroachment of the lumen in a fashion aligned with histological evidence (Owens et al., 2015). Finally, we cross-validated the two models by creating an accurate matching procedure. In this way we added the degree of accuracy given by the ABM to a simplified model (DS) that can serve as a powerful predictive tool for the clinic. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Development and Cross-Validation of the Short Form of the Cultural Competence Scale for Nurses

    Directory of Open Access Journals (Sweden)

    Duckhee Chae, PhD, RN

    2018-03-01

    Purpose: To develop and validate a short form of the Korean adaptation of the Cultural Competence Scale for Nurses. Methods: To shorten the 33-item Cultural Competence Scale for Nurses, an expert panel (N = 6) evaluated its content validity. The revised items were pilot tested using a sample of nine nurses, and clarity was assessed through cognitive interviews with respondents. The original instrument was shortened and validated through item analysis, exploratory factor analysis, convergent validity, and reliability, using data from 277 hospital nurses. The 14-item final version was cross-validated through confirmatory factor analysis, convergent validity, discriminant validity, known-group comparisons, and reliability, using data from 365 nurses belonging to 19 hospitals. Results: A 4-factor, 14-item model demonstrated satisfactory fit with significant factor loadings. The convergent validity between the developed tool and transcultural self-efficacy was significant (r = .55, p < .001). The convergent validity evaluated using the Average Variance Extracted and the discriminant validity were acceptable. Known-group comparisons revealed significant differences in the mean scores of the groups who had spent more than one month abroad (p = .002), were able to communicate in a foreign language (p < .001), and had received education on caring for foreign patients (p = .039). Cronbach's α was .89, and the reliability of the subscales ranged from .74 to .91. Conclusion: The Cultural Competence Scale for Nurses-Short Form demonstrated good reliability and validity. It is a short and appropriate instrument for use in clinical and research settings to assess nurses' cultural competence. Keywords: cultural competence, psychometric properties, nurse

  7. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    Science.gov (United States)

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross-validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results show not only that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than a recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also that CV-SES uses less running time.
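
    As a baseline for the approach this paper improves on, the CV error surface over the two cost parameters can be tabulated by a plain grid; the paper's bi-parameter space partition computes the surface far more efficiently. The class weights and data below are illustrative assumptions, not the authors' benchmark.

      # Baseline sketch: tabulate a CV error surface over the two cost
      # parameters of a cost-sensitive SVM via a plain grid search.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)
      X = rng.normal(size=(200, 5))
      y = (X[:, 0] > 0.8).astype(int)          # imbalanced toy labels

      c_pos = np.logspace(-1, 2, 6)            # misclassification cost, class 1
      c_neg = np.logspace(-1, 2, 6)            # misclassification cost, class 0
      surface = np.empty((len(c_pos), len(c_neg)))
      for i, cp in enumerate(c_pos):
          for j, cn in enumerate(c_neg):
              clf = SVC(kernel="linear", C=1.0, class_weight={1: cp, 0: cn})
              surface[i, j] = 1 - cross_val_score(clf, X, y, cv=5).mean()
      i, j = np.unravel_index(surface.argmin(), surface.shape)
      print("min CV error %.3f at c+=%.2f, c-=%.2f"
            % (surface[i, j], c_pos[i], c_neg[j]))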

  8. Estimating misclassification error: a closer look at cross-validation based methods

    Directory of Open Access Journals (Sweden)

    Ounpraseuth Songthip

    2012-11-01

    Full Text Available Abstract Background To estimate a classifier’s error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV methods based on sampling without replacement. Monte Carlo (MC simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV to k-fold CV for estimating clasification error. Findings For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier’s generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions We recommend k-fold CV over the new BCV method for estimating a classifier’s generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.

  9. Evidence-based cross validation for acoustic power transmission for a novel treatment system.

    Science.gov (United States)

    Mihcin, Senay; Strehlow, Jan; Demedts, Daniel; Schwenke, Michael; Levy, Yoav; Melzer, Andreas

    2017-06-01

    The novel Trans-Fusimo Treatment System (TTS) is designed to control Magnetic Resonance guided Focused Ultrasound (MRgFUS) therapy to ablate liver tumours under respiratory motion. It is crucial to deliver the acoustic power within tolerance limits for effective liver tumour treatment via MRgFUS. Before application in a clinical setting, evidence of reproducibility and reliability is a must for safe practice. The TTS software delivers the acoustic power via the ExAblate-2100 Conformal Bone System (CBS) transducer. A built-in quality assurance application was developed to measure the force values, using a novel protocol to measure the efficiency for electrical power values of 100 and 150 W over 6 s of sonication. This procedure was repeated 30 times by two independent users and compared against the clinically approved ExAblate-2100 CBS for cross-validation. Both systems proved to deliver the power within the accepted efficiency levels (70-90%). Two-sample t-tests were used to assess the differences in force values between the ExAblate-2100 CBS and the TTS (p > 0.05). Bland-Altman plots were used to demonstrate the limits of agreement between the two systems, which fell within the 10% limits of agreement. Two-sample t-tests indicated that the TTS does not have user dependency (p > 0.05). The TTS software proved to deliver the acoustic power without exceeding the safety levels. The results provide evidence as part of ISO 13485 regulations for CE-marking purposes. The developed methodology could be utilised as part of a quality assurance system in clinical settings when the TTS is used in clinical practice.

  10. Comparison of the Effects of Cross-validation Methods on Determining Performances of Classifiers Used in Diagnosing Congestive Heart Failure

    Directory of Open Access Journals (Sweden)

    Isler Yalcin

    2015-08-01

    Congestive heart failure (CHF) occurs when the heart is unable to provide sufficient pump action to maintain blood flow to meet the needs of the body. Early diagnosis is important, since the mortality rate of patients with CHF is very high. There are different validation methods for measuring the performance of classifier algorithms designed for this purpose. In this study, k-fold and leave-one-out cross-validation methods were tested for the performance measures of five distinct classifiers in the diagnosis of patients with CHF. Each algorithm was run 100 times, and the average and the standard deviation of classifier performances were recorded. As a result, it was observed that the average performance was enhanced and the variability of performances decreased when the number of data sections used in the cross-validation method was increased.
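
    The reported effect, a steadier performance estimate as the number of folds grows, can be reproduced by rerunning k-fold CV at several values of k and recording the spread across runs. The classifier and data below are placeholders, not the study's five classifiers.

      # Sketch: rerun k-fold CV many times at several k and record the spread
      # of the accuracy estimate; more folds -> less variable estimates.
      import numpy as np
      from sklearn.model_selection import cross_val_score, KFold
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(9)
      X = rng.normal(size=(120, 8))
      y = (X[:, 0] + rng.normal(scale=0.8, size=120) > 0).astype(int)

      for k in (2, 5, 10):
          means = [cross_val_score(DecisionTreeClassifier(random_state=0), X, y,
                                   cv=KFold(k, shuffle=True, random_state=r)).mean()
                   for r in range(100)]
          print("k=%2d: mean acc %.3f, SD across runs %.3f"
                % (k, np.mean(means), np.std(means)))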

  11. Cross-validation analysis for genetic evaluation models for ranking in endurance horses.

    Science.gov (United States)

    García-Ballesteros, S; Varona, L; Valera, M; Gutiérrez, J P; Cervantes, I

    2018-01-01

    The ranking trait is used as a selection criterion for competition horses to estimate racing performance. In the literature, the most common approaches to estimate breeding values are linear or threshold statistical models. However, recent studies have shown that a Thurstonian approach is able to fit the race effect (the competitive level of the horses that participate in the same race), thus suggesting a better prediction accuracy of breeding values for the ranking trait. The aim of this study was to compare the predictability of linear, threshold and Thurstonian approaches for the genetic evaluation of ranking in endurance horses. For this purpose, eight genetic models were used for each approach, with different combinations of random effects: rider, rider-horse interaction and environmental permanent effect. All genetic models included gender, age and race as systematic effects. The database contained 4065 ranking records from 966 horses, and the pedigree contained 8733 animals (47% Arabian horses), with an estimated heritability of around 0.10 for the ranking trait. The prediction ability of the models for racing performance was evaluated using a cross-validation approach. The average correlation between real and predicted performances across genetic models was around 0.25 for the threshold, 0.58 for the linear and 0.60 for the Thurstonian approach. Although no significant differences were found between models within approaches, the best genetic model included the rider and rider-horse random effects for the threshold approach, only the rider and environmental permanent effects for the linear approach, and all random effects for the Thurstonian approach. The absolute correlations of predicted breeding values among models were highest between the threshold and Thurstonian approaches: 0.90, 0.91 and 0.88 for all animals, the top 20% and the top 5% best animals, respectively. For rank correlations these figures were 0.85, 0.84 and 0.86. The lowest values were those between the linear and threshold approaches (0.65, 0.62 and 0.51).

  12. Software metrics a rigorous and practical approach

    CERN Document Server

    Fenton, Norman

    2014-01-01

    A Framework for Managing, Measuring, and Predicting Attributes of Software Development Products and Processes. Reflecting the immense progress in the development and use of software metrics in the past decades, Software Metrics: A Rigorous and Practical Approach, Third Edition provides an up-to-date, accessible, and comprehensive introduction to software metrics. Like its popular predecessors, this third edition discusses important issues, explains essential concepts, and offers new approaches for tackling long-standing problems. New to the Third Edition: This edition contains new material relevant

  13. Cultural Orientations Framework (COF) Assessment Questionnaire in Cross-Cultural Coaching: A Cross-Validation with Wave Focus Styles

    OpenAIRE

    Rojon, C; McDowall, A

    2010-01-01

    This paper outlines a cross-validation of the Cultural Orientations Framework assessment questionnaire (COF, Rosinski, 2007; a new tool designed for cross-cultural coaching) with the Saville Consulting Wave Focus Styles questionnaire (Saville Consulting, 2006; an existing validated measure of occupational personality), using data from UK and German participants (N = 222). The convergent and divergent validity of the questionnaire was adequate. Contrary to previous findings which u...

  14. Screening for postdeployment conditions: development and cross-validation of an embedded validity scale in the neurobehavioral symptom inventory.

    Science.gov (United States)

    Vanderploeg, Rodney D; Cooper, Douglas B; Belanger, Heather G; Donnell, Alison J; Kennedy, Jan E; Hopewell, Clifford A; Scott, Steven G

    2014-01-01

    To develop and cross-validate internal validity scales for the Neurobehavioral Symptom Inventory (NSI). Four existing data sets were used: (1) an outpatient clinical traumatic brain injury (TBI)/neurorehabilitation database from a military site (n = 403), (2) the national Department of Veterans Affairs TBI evaluation database (n = 48 175), (3) the Florida National Guard nonclinical TBI survey database (n = 3098), and (4) a cross-validation outpatient clinical TBI/neurorehabilitation database combined across 2 military medical centers (n = 206). Secondary analysis of existing cohort data was used to develop (study 1) and cross-validate (study 2) internal validity scales for the NSI. Outcome measures were the NSI, the Mild Brain Injury Atypical Symptoms scale, and Personality Assessment Inventory scores. Study 1: Three NSI validity scales were developed, composed of 5 unusual items (Negative Impression Management [NIM5]), 6 low-frequency items (LOW6), and the combination of 10 nonoverlapping items (Validity-10). Cut scores maximizing sensitivity and specificity on these measures were determined, using a Mild Brain Injury Atypical Symptoms score of 8 or more as the criterion for invalidity. Study 2: The same validity scale cut scores again resulted in the highest classification accuracy and optimal balance between sensitivity and specificity in the cross-validation sample, using a Personality Assessment Inventory Negative Impression Management scale T score of 75 or higher as the criterion for invalidity. The NSI is widely used in the Department of Defense and Veterans Affairs as a symptom-severity assessment following TBI, but it is subject to symptom overreporting or exaggeration. This study developed embedded NSI validity scales to facilitate the detection of invalid response styles. The NSI Validity-10 scale appears to hold considerable promise for validity assessment when the NSI is used as a population-screening tool.
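
    A sketch of the cut-score selection step described above: for each candidate cut score on a validity scale, sensitivity and specificity are computed against a binary invalidity criterion (the study used a Mild Brain Injury Atypical Symptoms score of 8 or more). The simulated scores below are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated invalidity criterion and validity-scale scores.
invalid = rng.random(500) < 0.2
scale = np.where(invalid, rng.normal(12, 4, 500), rng.normal(5, 3, 500))

for cut in range(4, 16):
    flagged = scale >= cut
    sens = (flagged & invalid).sum() / invalid.sum()
    spec = (~flagged & ~invalid).sum() / (~invalid).sum()
    print(f"cut >= {cut:2d}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```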

  15. Development of rigor mortis is not affected by muscle volume.

    Science.gov (United States)

    Kobayashi, M; Ikegaya, H; Takase, I; Hatanaka, K; Sakurada, K; Iwase, H

    2001-04-01

    There is a hypothesis suggesting that rigor mortis progresses more rapidly in small muscles than in large muscles. We measured rigor mortis as tension determined isometrically in rat musculus erector spinae that had been cut into muscle bundles of various volumes. The muscle volume did not influence either the progress or the resolution of rigor mortis, which contradicts the hypothesis. Differences in pre-rigor load on the muscles influenced the onset and resolution of rigor mortis in a few pairs of samples, but did not influence the time taken for rigor mortis to reach its full extent after death. Moreover, the progress of rigor mortis in this muscle was biphasic; this may reflect the early rigor of red muscle fibres and the late rigor of white muscle fibres.

  16. Rigor in Qualitative Supply Chain Management Research

    DEFF Research Database (Denmark)

    Goffin, Keith; Raja, Jawwad; Claes, Björn

    2012-01-01

    Purpose – The purpose of this paper is to share the authors' experiences of using the repertory grid technique in two supply chain management studies. The paper aims to demonstrate how the two studies provided insights into how qualitative techniques such as the repertory grid can be made more rigorous than in the past, and how results can be generated that are inaccessible using quantitative methods. Design/methodology/approach – This paper presents two studies undertaken using the repertory grid technique to illustrate its application in supply chain management research. Findings – The paper ... reliability, and theoretical saturation. Originality/value – It is the authors' contention that the addition of the repertory grid technique to the toolset of methods used by logistics and supply chain management researchers can only enhance insights and the building of robust theories. Qualitative studies ...

  17. Statistics for mathematicians a rigorous first course

    CERN Document Server

    Panaretos, Victor M

    2016-01-01

    This textbook provides a coherent introduction to the main concepts and methods of one-parameter statistical inference. Intended for students of Mathematics taking their first course in Statistics, the focus is on Statistics for Mathematicians rather than on Mathematical Statistics. The goal is not to focus on the mathematical/theoretical aspects of the subject, but rather to provide an introduction to the subject tailored to the mindset and tastes of Mathematics students, who are sometimes turned off by the informal nature of Statistics courses. This book can be used as the basis for an elementary semester-long first course on Statistics with a firm sense of direction that does not sacrifice rigor. The deeper goal of the text is to attract the attention of promising Mathematics students.

  18. Exact Cross-Validation for kNN and applications to passive and active learning in classification

    OpenAIRE

    Célisse, Alain; Mary-Huard, Tristan

    2011-01-01

    In the binary classification framework, a closed form expression of the cross-validation Leave-p-Out (LpO) risk estimator for the k Nearest Neighbor algorithm (kNN) is derived. It is first used to study the LpO risk minimization strategy for choosing k in the passive learning setting. The impact of p on the choice of k and the LpO estimation of the risk are inferred. In the active learning setting, a procedure is proposed that selects new examples using a LpO committee of kNN classifiers. The...
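
    The paper's contribution is a closed-form LpO estimator; the sketch below shows only the brute-force baseline it avoids, for p = 1: the leave-one-out risk of kNN computed by refitting, used to choose k. scikit-learn and the synthetic data are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=150, n_features=10, random_state=0)

# LOO risk (misclassification rate) for each candidate k; pick the minimizer.
risks = {}
for k in (1, 3, 5, 7, 9):
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y,
                          cv=LeaveOneOut()).mean()
    risks[k] = 1.0 - acc
    print(f"k={k}: LOO risk = {risks[k]:.3f}")

print("selected k:", min(risks, key=risks.get))
```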

  19. Rigorous theory of molecular orientational nonlinear optics

    International Nuclear Information System (INIS)

    Kwak, Chong Hoon; Kim, Gun Yeup

    2015-01-01

    Classical statistical mechanics of the molecular optics theory proposed by Buckingham [A. D. Buckingham and J. A. Pople, Proc. Phys. Soc. A 68, 905 (1955)] has been extended to describe the field induced molecular orientational polarization effects on nonlinear optics. In this paper, we present the generalized molecular orientational nonlinear optical processes (MONLO) through the calculation of the classical orientational averaging using the Boltzmann type time-averaged orientational interaction energy in the randomly oriented molecular system under the influence of applied electric fields. The focal points of the calculation are (1) the derivation of rigorous tensorial components of the effective molecular hyperpolarizabilities, (2) the molecular orientational polarizations and the electronic polarizations including the well-known third-order dc polarization, dc electric field induced Kerr effect (dc Kerr effect), optical Kerr effect (OKE), dc electric field induced second harmonic generation (EFISH), degenerate four wave mixing (DFWM) and third harmonic generation (THG). We also present some of the new predictive MONLO processes. For second-order MONLO, second-order optical rectification (SOR), Pockels effect and difference frequency generation (DFG) are described in terms of the anisotropic coefficients of first hyperpolarizability. And, for third-order MONLO, third-order optical rectification (TOR), dc electric field induced difference frequency generation (EFIDFG) and pump-probe transmission are presented

  20. Rigor or mortis: best practices for preclinical research in neuroscience.

    Science.gov (United States)

    Steward, Oswald; Balice-Gordon, Rita

    2014-11-05

    Numerous recent reports document a lack of reproducibility of preclinical studies, raising concerns about potential lack of rigor. Examples of lack of rigor have been extensively documented and proposals for practices to improve rigor are appearing. Here, we discuss some of the details and implications of previously proposed best practices and consider some new ones, focusing on preclinical studies relevant to human neurological and psychiatric disorders. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. [Experimental study of restiffening of the rigor mortis].

    Science.gov (United States)

    Wang, X; Li, M; Liao, Z G; Yi, X F; Peng, X M

    2001-11-01

    To observe changes in sarcomere length in the rat during restiffening of rigor mortis, we measured the sarcomere length of the quadriceps in 40 rats under different conditions by scanning electron microscopy. The sarcomere length in undisturbed rigor mortis is clearly shorter than that after restiffening. Sarcomere length is negatively correlated with the intensity of rigor mortis. Measuring sarcomere length can therefore determine the intensity of rigor mortis and provide evidence for estimating the time since death.

  2. Long persistence of rigor mortis at constant low temperature.

    Science.gov (United States)

    Varetto, Lorenzo; Curto, Ombretta

    2005-01-06

    We studied the persistence of rigor mortis by using physical manipulation, testing the mobility of the knee on 146 corpses kept under refrigeration at Torino's city mortuary at a constant temperature of +4 degrees C. We found a persistence of complete rigor lasting for 10 days in all the cadavers we kept under observation; in one case, rigor lasted for 16 days. Between the 11th and the 17th days, a progressively increasing number of corpses showed a change from complete into partial rigor (characterized by partial bending of the articulation). After the 17th day, all the remaining corpses showed partial rigor, and in the two cadavers that were kept under observation "à outrance" we found that absolute resolution of rigor mortis occurred on the 28th day. Our results prove that it is possible to find a persistence of rigor mortis much longer than expected when environmental conditions resemble average outdoor winter temperatures in temperate zones. This must therefore be considered when a corpse is found in such conditions, so that the long persistence of rigor mortis does not mislead the estimation of the time of death.

  3. Rigorous solution to Bargmann-Wigner equation for integer spin

    CERN Document Server

    Huang Shi Zhong; Wu Ning; Zheng Zhi Peng

    2002-01-01

    A rigorous method is developed to solve the Bargmann-Wigner equation for arbitrary integer spin in the coordinate representation in a step-by-step way. The Bargmann-Wigner equation is first transformed to a form easier to solve; the new equations are then solved rigorously in the coordinate representation, and the wave functions are derived in a closed form.

  4. Using grounded theory as a method for rigorously reviewing literature

    NARCIS (Netherlands)

    Wolfswinkel, J.; Furtmueller-Ettinger, Elfriede; Wilderom, Celeste P.M.

    2013-01-01

    This paper offers guidance for conducting a rigorous literature review. We present this in the form of a five-stage process in which we use Grounded Theory as a method. We first probe the guidelines explicated by Webster and Watson, and then we show the added value of Grounded Theory for rigorously reviewing literature.

  5. Evaluating Rigor in Qualitative Methodology and Research Dissemination

    Science.gov (United States)

    Trainor, Audrey A.; Graue, Elizabeth

    2014-01-01

    Despite previous and successful attempts to outline general criteria for rigor, researchers in special education have debated the application of rigor criteria, the significance or importance of small n research, the purpose of interpretivist approaches, and the generalizability of qualitative empirical results. Adding to these complications, the…

  6. Experimental evaluation of rigor mortis. VI. Effect of various causes of death on the evolution of rigor mortis.

    Science.gov (United States)

    Krompecher, T; Bergerioux, C; Brandt-Casadevall, C; Gujer, H R

    1983-07-01

    The evolution of rigor mortis was studied in cases of nitrogen asphyxia, drowning and strangulation, as well as in fatal intoxications due to strychnine, carbon monoxide and curariform drugs, using a modified method of measurement. Our experiments demonstrated that: (1) Strychnine intoxication hastens the onset and passing of rigor mortis. (2) CO intoxication delays the resolution of rigor mortis. (3) The intensity of rigor may vary depending upon the cause of death. (4) If the stage of rigidity is to be used to estimate the time of death, it is necessary: (a) to perform a succession of objective measurements of rigor mortis intensity; and (b) to verify the eventual presence of factors that could play a role in the modification of its development.

  7. Experimental evaluation of rigor mortis. VII. Effect of ante- and post-mortem electrocution on the evolution of rigor mortis.

    Science.gov (United States)

    Krompecher, T; Bergerioux, C

    1988-01-01

    The influence of electrocution on the evolution of rigor mortis was studied on rats. Our experiments showed that: (1) Electrocution hastens the onset of rigor mortis. After an electrocution of 90 s, a complete rigor develops already 1 h post-mortem (p.m.) compared to 5 h p.m. for the controls. (2) Electrocution hastens the passing of rigor mortis. After an electrocution of 90 s, the first significant decrease occurs at 3 h p.m. (8 h p.m. in the controls). (3) These modifications in rigor mortis evolution are less pronounced in the limbs not directly touched by the electric current. (4) In case of post-mortem electrocution, the changes are slightly less pronounced, the resistance is higher and the absorbed energy is lower as compared with the ante-mortem electrocution cases. The results are completed by two practical observations on human electrocution cases.

  8. Monitoring muscle optical scattering properties during rigor mortis

    Science.gov (United States)

    Xia, J.; Ranasinghesagara, J.; Ku, C. W.; Yao, G.

    2007-09-01

    Sarcomere is the fundamental functional unit in skeletal muscle for force generation. In addition, sarcomere structure is also an important factor that affects the eating quality of muscle food, the meat. The sarcomere structure is altered significantly during rigor mortis, which is the critical stage involved in transforming muscle to meat. In this paper, we investigated optical scattering changes during the rigor process in Sternomandibularis muscles. The measured optical scattering parameters were analyzed along with the simultaneously measured passive tension, pH value, and histology analysis. We found that the temporal changes of optical scattering, passive tension, pH value and fiber microstructures were closely correlated during the rigor process. These results suggested that sarcomere structure changes during rigor mortis can be monitored and characterized by optical scattering, which may find practical applications in predicting meat quality.

  9. Recent Development in Rigorous Computational Methods in Dynamical Systems

    OpenAIRE

    Arai, Zin; Kokubu, Hiroshi; Pilarczyk, Paweł

    2009-01-01

    We highlight selected results of recent development in the area of rigorous computations which use interval arithmetic to analyse dynamical systems. We describe general ideas and selected details of different ways of approach and we provide specific sample applications to illustrate the effectiveness of these methods. The emphasis is put on a topological approach, which combined with rigorous calculations provides a broad range of new methods that yield mathematically rel...

  10. A Procedure for Identification of Appropriate State Space and ARIMA Models Based on Time-Series Cross-Validation

    Directory of Open Access Journals (Sweden)

    Patrícia Ramos

    2016-11-01

    In this work, a cross-validation procedure is used to identify an appropriate Autoregressive Integrated Moving Average (ARIMA) model and an appropriate state space model for a time series. A minimum size for the training set is specified. The procedure is based on one-step forecasts and uses different training sets, each containing one more observation than the previous one. All possible state space models, and all ARIMA models whose orders range over a reasonable grid, are fitted, considering raw data and log-transformed data with regular differencing (up to second-order differences) and, if the time series is seasonal, seasonal differencing (up to first-order differences). The root mean squared error for each model is calculated by averaging over the one-step forecasts obtained. The model that has the lowest root mean squared error and passes the Ljung–Box test using all of the available data at a reasonable significance level is selected from among all the ARIMA and state space models considered. The procedure is exemplified in this paper with a case study of retail sales of different categories of women's footwear from a Portuguese retailer, and its accuracy is compared with three reliable forecasting approaches. The results show that our procedure consistently forecasts more accurately than the other approaches and that the improvements in accuracy are significant.
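
    A sketch of the expanding-window, one-step-ahead cross-validation described above, assuming statsmodels: each training set contains one more observation than the previous one, the model is refit, and the RMSE is averaged over the one-step forecasts. A single ARIMA(1,1,1) on a synthetic series stands in for the full grid of ARIMA and state space models.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = 50 + np.cumsum(rng.normal(size=120))   # synthetic "retail sales" series
min_train = 60                             # assumed minimum training-set size

errors = []
for t in range(min_train, len(y)):
    fit = ARIMA(y[:t], order=(1, 1, 1)).fit()   # refit on the expanding window
    errors.append(y[t] - fit.forecast(1)[0])    # one-step-ahead forecast error

print("one-step RMSE:", round(float(np.sqrt(np.mean(np.square(errors)))), 3))
```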

  11. Latent structure and reliability analysis of the measure of body apperception: cross-validation for head and neck cancer patients.

    Science.gov (United States)

    Jean-Pierre, Pascal; Fundakowski, Christopher; Perez, Enrique; Jean-Pierre, Shadae E; Jean-Pierre, Ashley R; Melillo, Angelica B; Libby, Rachel; Sargi, Zoukaa

    2013-02-01

    Cancer and its treatments are associated with psychological distress that can negatively impact self-perception, psychosocial functioning, and quality of life. Patients with head and neck cancers (HNC) are particularly susceptible to psychological distress. This study involved a cross-validation of the Measure of Body Apperception (MBA) for HNC patients. One hundred and twenty-two English-fluent HNC patients between 20 and 88 years of age completed the MBA on a Likert scale ranging from "1 = disagree" to "4 = agree." We assessed the latent structure and internal consistency reliability of the MBA using Principal Components Analysis (PCA) and Cronbach's coefficient alpha (α), respectively. We determined convergent and divergent validities of the MBA using correlations with the Hospital Anxiety and Depression Scale (HADS), observer disfigurement rating, and patients' clinical and demographic variables. The PCA revealed a coherent set of items that explained 38% of the variance. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.73 and Bartlett's test of sphericity was statistically significant (χ²(28) = 253.64; p < 0.05). The MBA is a valid and reliable screening measure of body apperception for HNC patients.
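
    A sketch of the internal-consistency step: Cronbach's alpha computed from an item-score matrix (rows = respondents, columns = items). The 122 respondents and the 8 four-point items follow from the record above (a χ² with 28 degrees of freedom implies 8 items); the simulated responses themselves are assumptions.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
latent = rng.normal(size=(122, 1))   # shared trait drives all items
items = np.clip(np.round(2.5 + latent + rng.normal(scale=0.8, size=(122, 8))),
                1, 4)                # 8 items on a 1-4 Likert scale
print("Cronbach's alpha =", round(cronbach_alpha(items), 2))
```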

  12. A cross-validation Delphi method approach to the diagnosis and treatment of personality disorders in older adults.

    Science.gov (United States)

    Rosowsky, Erlene; Young, Alexander S; Malloy, Mary C; van Alphen, S P J; Ellison, James M

    2018-03-01

    The Delphi method is a consensus-building technique using expert opinion to formulate a shared framework for understanding a topic with limited empirical support. This cross-validation study replicates one completed in the Netherlands and Belgium, and explores US experts' views on the diagnosis and treatment of older adults with personality disorders (PD). Twenty-one geriatric PD experts participated in a Delphi survey addressing diagnosis and treatment of older adults with PD. The European survey was translated and administered electronically. First-round consensus was reached for 16 out of 18 items relevant to diagnosis and to specific mental health programs for personality disorders in older adults. Experts agreed on the usefulness of establishing criteria for specific types of treatments. The majority of psychologists did not initially agree on the usefulness of pharmacotherapy. Expert consensus was reached following two subsequent rounds after clarification addressing medication use. Study results suggest consensus among experts regarding psychosocial treatments. Limited acceptance among US psychologists of the suitability of pharmacotherapy for late-life PDs contrasted with the views expressed by experts surveyed in the Netherlands and Belgium studies.

  13. Cross-validation of generalised body composition equations with diverse young men and women: the Training Intervention and Genetics of Exercise Response (TIGER) Study

    Science.gov (United States)

    Generalised skinfold equations developed in the 1970s are commonly used to estimate laboratory-measured percentage fat (BF%). The equations were developed on predominantly white individuals using Siri's two-component percentage fat equation (BF%-GEN). We cross-validated the Jackson-Pollock (JP) gene...
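
    Siri's two-component equation, which the generalised skinfold equations feed into, converts whole-body density to percentage fat. A minimal implementation; the density value in the example is illustrative, not from the study.

```python
def siri_percent_fat(body_density: float) -> float:
    """Siri (1961) two-component model: %BF = (4.95 / Db - 4.50) * 100."""
    return (4.95 / body_density - 4.50) * 100.0

print(round(siri_percent_fat(1.0650), 1))  # illustrative density -> ~14.8 %BF
```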

  14. Cross-Validation of a Recently Published Equation Predicting Energy Expenditure to Run or Walk a Mile in Normal-Weight and Overweight Adults

    Science.gov (United States)

    Morris, Cody E.; Owens, Scott G.; Waddell, Dwight E.; Bass, Martha A.; Bentley, John P.; Loftin, Mark

    2014-01-01

    An equation published by Loftin, Waddell, Robinson, and Owens (2010) was cross-validated using ten normal-weight walkers, ten overweight walkers, and ten distance runners. Energy expenditure was measured at the preferred walking pace (normal-weight and overweight walkers) or running pace (distance runners) for 5 min and corrected to a mile. Energy…

  15. Rasch Validation and Cross-validation of the Health of Nation Outcome Scales (HoNOS) for Monitoring of Psychiatric Disability in Traumatized Refugees in Western Psychiatric Care

    DEFF Research Database (Denmark)

    Palic, Sabina; Kappel, Michelle Lind; Makransky, Guido

    2016-01-01

    group. A revised 10-item HoNOS fit the Rasch model at pre-treatment, and also showed excellent fit within the cross-validation data. Culture, gender, and need for translation did not exert serious bias on the measure’s performance. The results establish good monitoring properties of the 10-item Ho...

  16. Tenderness of pre- and post rigor lamb longissimus muscle.

    Science.gov (United States)

    Geesink, Geert; Sujang, Sadi; Koohmaraie, Mohammad

    2011-08-01

    Lamb longissimus muscle (n = 6) sections were cooked at different times post mortem (prerigor, at rigor, 1 day p.m., and 7 days p.m.) using two cooking methods. Using a boiling water bath, samples were either cooked to a core temperature of 70 °C or boiled for 3 h. The latter method was meant to reflect the traditional cooking method employed in countries where preparation of prerigor meat is practiced. The time post mortem at which the meat was prepared had a large effect on the tenderness (shear force) of the meat (P < 0.05). Cooking prerigor and at-rigor meat to 70 °C resulted in higher shear force values than their post rigor counterparts at 1 and 7 days p.m. (9.4 and 9.6 vs. 7.2 and 3.7 kg, respectively). The differences in tenderness between the treatment groups could be largely explained by a difference in the contraction status of the meat after cooking and by the effect of ageing on tenderness. Cooking pre- and at-rigor meat resulted in severe muscle contraction, as evidenced by the differences in sarcomere length of the cooked samples. Mean sarcomere lengths in the pre- and at-rigor samples ranged from 1.05 to 1.20 μm. The mean sarcomere length in the post rigor samples was 1.44 μm. Cooking for 3 h at 100 °C did improve the tenderness of pre- and at-rigor prepared meat compared with cooking to 70 °C, but not to the extent that ageing did. It is concluded that additional intervention methods are needed to improve the tenderness of prerigor cooked meat. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Cross-validation of the Student Perceptions of Team-Based Learning Scale in the United States

    Directory of Open Access Journals (Sweden)

    Donald H. Lein

    2017-06-01

    Purpose: The purpose of this study was to cross-validate the factor structure of the previously developed Student Perceptions of Team-Based Learning (TBL) Scale among students in an entry-level doctor of physical therapy (DPT) program in the United States. Methods: Toward the end of the semester in 2 patient/client management courses taught using TBL, 115 DPT students completed the Student Perceptions of TBL Scale, with a response rate of 87%. Principal component analysis (PCA) and confirmatory factor analysis (CFA) were conducted to replicate and confirm the underlying factor structure of the scale. Results: Based on the PCA for the validation sample, the original 2-factor structure (preference for TBL and preference for teamwork) of the Student Perceptions of TBL Scale was replicated. The overall goodness-of-fit indices from the CFA suggested that the original 2-factor structure for the 15 items of the scale demonstrated a good model fit (comparative fit index, 0.95; non-normed fit index/Tucker-Lewis index, 0.93; root mean square error of approximation, 0.06; and standardized root mean square residual, 0.07). The 2 factors demonstrated high internal consistency (alpha = 0.83 and 0.88, respectively). DPT students taught using TBL viewed the factor of preference for teamwork more favorably than preference for TBL. Conclusion: Our findings provide evidence supporting the replicability of the internal structure of the Student Perceptions of TBL Scale when assessing perceptions of TBL among DPT students in patient/client management courses.

  18. Estimation of the breaking of rigor mortis by myotonometry.

    Science.gov (United States)

    Vain, A; Kauppila, R; Vuori, E

    1996-05-31

    Myotonometry was used to detect breaking of rigor mortis. The myotonometer is a new instrument which measures the decaying oscillations of a muscle after a brief mechanical impact. The method gives two numerical parameters for rigor mortis, namely the period and the decrement of the oscillations, both of which depend on the time elapsed after death. When rigor mortis was broken by muscle lengthening, both the oscillation period and the decrement decreased, whereas shortening the muscle caused the opposite changes. Fourteen hours after breaking, the stiffness characteristics (oscillation periods) of the right and left m. biceps brachii had converged. However, the values for the decrement of the muscle, reflecting the dissipation of mechanical energy, maintained their differences.

  19. Physiological studies of muscle rigor mortis in the fowl

    International Nuclear Information System (INIS)

    Nakahira, S.; Kaneko, K.; Tanaka, K.

    1990-01-01

    A simple system was developed for continuous measurement of muscle contraction during rigor mortis. Longitudinal muscle strips dissected from the Peroneus Longus were suspended in a plastic tube containing liquid paraffin. Mechanical activity was transmitted to a strain-gauge transducer connected to a potentiometric pen-recorder. At the onset of measurement, a 1.2 g load was applied to the muscle strip. This model was used to study the muscle response to various treatments during rigor mortis. All measurements were carried out under anaerobic conditions at 17°C, except where otherwise stated. 1. The present system was found to be quite useful for continuous measurement of the course of muscle rigor. 2. Muscle contraction under the anaerobic condition at 17°C reached a peak about 2 hours after the onset of measurement and thereafter relaxed at a slow rate. In contrast, the aerobic condition under high humidity resulted in a strong rigor, about three times stronger than that in the anaerobic condition. 3. Ultrasonic treatment (37,000-47,000 Hz) at 25°C for 10 minutes resulted in a moderate muscle rigor. 4. Treatment of the muscle strip with 2 mM EGTA at 30°C for 30 minutes led to relaxation of the muscle. 5. Muscle from birds killed under anesthesia with pentobarbital sodium showed a slow rate of rigor, whereas birds killed one day after hypophysectomy showed a quick muscle rigor, as seen in intact controls. 6. A slight muscle rigor was observed when the muscle strip was placed in a refrigerator at 0°C for 18.5 hours and the temperature was thereafter kept at 17°C. (author)

  20. Cross validation of two partitioning-based sampling approaches in mesocosms containing PCB contaminated field sediment, biota, and activated carbon amendment

    DEFF Research Database (Denmark)

    Nørgaard Schmidt, Stine; Wang, Alice P.; Gidley, Philip T

    2017-01-01

    with multiple thicknesses of silicone and in situ pre-equilibrium sampling with low density polyethylene (LDPE) loaded with performance reference compounds were applied independently to measure polychlorinated biphenyls (PCBs) in mesocosms with (1) New Bedford Harbor sediment (MA, USA), (2) sediment and biota..., and (3) activated carbon amended sediment and biota. The aim was to cross-validate the two different sampling approaches. Around 100 PCB congeners were quantified in the two sampling polymers, and the results confirmed the good precision of both methods and were in overall good agreement with recently... published silicone to LDPE partition ratios. Further, the methods yielded Cfree in good agreement for all three experiments. The average ratio between Cfree determined by the two methods was a factor of 1.4 ± 0.3 (range: 0.6-2.0), and the results thus cross-validated the two sampling approaches. For future...

  1. Accuracies of genomic breeding values in American Angus beef cattle using K-means clustering for cross-validation

    Directory of Open Access Journals (Sweden)

    Saatchi Mahdi

    2011-11-01

    Background: Genomic selection is a recently developed technology that is beginning to revolutionize animal breeding. The objective of this study was to estimate marker effects to derive prediction equations for direct genomic values for 16 routinely recorded traits of American Angus beef cattle and quantify corresponding accuracies of prediction. Methods: Deregressed estimated breeding values were used as observations in a weighted analysis to derive direct genomic values for 3570 sires genotyped using the Illumina BovineSNP50 BeadChip. These bulls were clustered into five groups using K-means clustering on pedigree estimates of additive genetic relationships between animals, with the aim of increasing within-group and decreasing between-group relationships. All five combinations of four groups were used for model training, with cross-validation performed in the group not used in training. Bivariate animal models were used for each trait to estimate the genetic correlation between deregressed estimated breeding values and direct genomic values. Results: Accuracies of direct genomic values ranged from 0.22 to 0.69 for the studied traits, with an average of 0.44. Predictions were more accurate when animals within the validation group were more closely related to animals in the training set. When training and validation sets were formed by random allocation, the accuracies of direct genomic values ranged from 0.38 to 0.85, with an average of 0.65, reflecting the greater relationship between animals in training and validation. The accuracies of direct genomic values obtained from training on older animals and validating in younger animals were intermediate to the accuracies obtained from K-means clustering and random clustering for most traits. The genetic correlation between deregressed estimated breeding values and direct genomic values ranged from 0.15 to 0.80 for the traits studied. Conclusions: These results suggest that genomic estimates
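
    A hedged sketch of the validation design described above: K-means assigns animals to five clusters, and each cluster is held out in turn while the other four train the model; accuracy is the correlation between phenotypes and predictions in the held-out cluster. Random coordinates and a ridge model are stand-ins for the pedigree relationship matrix and the marker-effects model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
A = rng.normal(size=(300, 10))   # stand-in for relationship-derived coordinates
y = A @ rng.normal(size=10) + rng.normal(size=300)

# Five clusters; hold out each in turn (leave-one-cluster-out validation).
groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(A)
for g in range(5):
    train, test = groups != g, groups == g
    pred = Ridge(alpha=1.0).fit(A[train], y[train]).predict(A[test])
    r = np.corrcoef(y[test], pred)[0, 1]
    print(f"held-out cluster {g}: accuracy (r) = {r:.2f}")
```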

  2. Accuracies of genomic breeding values in American Angus beef cattle using K-means clustering for cross-validation.

    Science.gov (United States)

    Saatchi, Mahdi; McClure, Mathew C; McKay, Stephanie D; Rolf, Megan M; Kim, JaeWoo; Decker, Jared E; Taxis, Tasia M; Chapple, Richard H; Ramey, Holly R; Northcutt, Sally L; Bauck, Stewart; Woodward, Brent; Dekkers, Jack C M; Fernando, Rohan L; Schnabel, Robert D; Garrick, Dorian J; Taylor, Jeremy F

    2011-11-28

    Genomic selection is a recently developed technology that is beginning to revolutionize animal breeding. The objective of this study was to estimate marker effects to derive prediction equations for direct genomic values for 16 routinely recorded traits of American Angus beef cattle and quantify corresponding accuracies of prediction. Deregressed estimated breeding values were used as observations in a weighted analysis to derive direct genomic values for 3570 sires genotyped using the Illumina BovineSNP50 BeadChip. These bulls were clustered into five groups using K-means clustering on pedigree estimates of additive genetic relationships between animals, with the aim of increasing within-group and decreasing between-group relationships. All five combinations of four groups were used for model training, with cross-validation performed in the group not used in training. Bivariate animal models were used for each trait to estimate the genetic correlation between deregressed estimated breeding values and direct genomic values. Accuracies of direct genomic values ranged from 0.22 to 0.69 for the studied traits, with an average of 0.44. Predictions were more accurate when animals within the validation group were more closely related to animals in the training set. When training and validation sets were formed by random allocation, the accuracies of direct genomic values ranged from 0.38 to 0.85, with an average of 0.65, reflecting the greater relationship between animals in training and validation. The accuracies of direct genomic values obtained from training on older animals and validating in younger animals were intermediate to the accuracies obtained from K-means clustering and random clustering for most traits. The genetic correlation between deregressed estimated breeding values and direct genomic values ranged from 0.15 to 0.80 for the traits studied. These results suggest that genomic estimates of genetic merit can be produced in beef cattle at a young age but

  3. The Cross-Calibration of Spectral Radiances and Cross-Validation of CO2 Estimates from GOSAT and OCO-2

    Directory of Open Access Journals (Sweden)

    Fumie Kataoka

    2017-11-01

    The Greenhouse gases Observing SATellite (GOSAT), launched in January 2009, has provided radiance spectra with a Fourier Transform Spectrometer for more than eight years. The Orbiting Carbon Observatory 2 (OCO-2), launched in July 2014, collects radiance spectra using an imaging grating spectrometer. Both sensors observe sunlight reflected from Earth's surface and retrieve atmospheric carbon dioxide (CO2) concentrations, but use different spectrometer technologies, observing geometries, and ground track repeat cycles. To demonstrate the effectiveness of satellite remote sensing for CO2 monitoring, the GOSAT and OCO-2 teams have worked together pre- and post-launch to cross-calibrate the instruments and cross-validate their retrieval algorithms and products. In this work, we first compare observed radiance spectra within three narrow bands centered at 0.76, 1.60 and 2.06 µm, at temporally coincident and spatially collocated points from September 2014 to March 2017. We reconciled the differences in observation footprint size, viewing geometry and associated differences in surface bidirectional reflectance distribution function (BRDF). We conclude that the spectral radiances measured by the two instruments agree within 5% for all bands. Second, we estimated the mean bias and standard deviation of the column-averaged CO2 dry air mole fraction (XCO2) retrieved from GOSAT and OCO-2 from September 2014 to May 2016. GOSAT retrievals used Build 7.3 (V7.3) of the Atmospheric CO2 Observations from Space (ACOS) algorithm while OCO-2 retrievals used Version 7 of the OCO-2 retrieval algorithm. The mean biases and standard deviations are −0.57 ± 3.33 ppm over land with high gain, −0.17 ± 1.48 ppm over ocean with high gain and −0.19 ± 2.79 ppm over land with medium gain. Finally, our study is complemented with an analysis of error sources: retrieved surface pressure (Psurf), aerosol optical depth (AOD), BRDF and surface albedo inhomogeneity. We found no change in XCO2

  4. Robustness of two single-item self-esteem measures: cross-validation with a measure of stigma in a sample of psychiatric patients.

    Science.gov (United States)

    Bagley, Christopher

    2005-08-01

    Robins' Single-Item Self-Esteem Inventory was compared with a single item from the Coopersmith Self-Esteem Inventory. Although a new scoring format was used, there was good evidence of cross-validation in 83 current and former psychiatric patients who completed Harvey's adapted measure of stigma felt and experienced by users of mental health services. Scores on the two single-item self-esteem measures correlated .76 (p < .001), supporting their use as brief measures of self-esteem in users of mental health services.

  5. Attempted development and cross-validation of predictive models of individual-level and organizational-level turnover of nuclear power operators

    International Nuclear Information System (INIS)

    Vasa-Sideris, S.J.

    1989-01-01

    Nuclear power accounts for approximately 20% of the electric power generated in the U.S., produced by 107 nuclear plants which employ over 8,700 operators. Operator turnover is significant to utilities from the economic point of view, since it costs almost three hundred thousand dollars to train and qualify one operator, and because turnover affects plant operability and therefore plant safety. The study purpose was to develop and cross-validate individual-level and organizational-level models of turnover of nuclear power plant operators. Data were obtained by questionnaires and from published data for 1983 and 1984 on a number of individual, organizational, and environmental predictors. Plants had been in operation for two or more years. Questionnaires were returned by 29 out of 50 plants, covering over 1600 operators. The objectives were to examine the reliability of the turnover criterion, to determine the classification accuracy of the multivariate predictive models and of categories of predictors (individual, organizational, and environmental), and to determine whether a homology existed between the individual-level and organizational-level models. The method was to examine the shrinkage that occurred between the foldback design (in which the predictive models were reapplied to the data used to develop them) and cross-validation. The results did not support the hypotheses. Turnover data were accurate but not stable between the two years. No significant differences were detected between the low and high turnover groups at the organizational or individual level in cross-validation. Lack of stability in the criterion, restriction of range, and small sample size at the organizational level were serious limitations of this study. The results did support the methods. Considerable shrinkage occurred between foldback and cross-validation of the models

  6. Prediction of cognitive and motor development in preterm children using exhaustive feature selection and cross-validation of near-term white matter microstructure.

    Science.gov (United States)

    Schadl, Kornél; Vassar, Rachel; Cahill-Rowley, Katelyn; Yeom, Kristin W; Stevenson, David K; Rose, Jessica

    2018-01-01

    Advanced neuroimaging and computational methods offer opportunities for more accurate prognosis. We hypothesized that near-term regional white matter (WM) microstructure, assessed on diffusion tensor imaging (DTI) using exhaustive feature selection with cross-validation, would predict neurodevelopment in preterm children. Near-term MRI and DTI obtained at 36.6 ± 1.8 weeks postmenstrual age in 66 very-low-birth-weight preterm neonates were assessed. 60/66 had follow-up neurodevelopmental evaluation with the Bayley Scales of Infant-Toddler Development, 3rd edition (BSID-III) at 18-22 months. Linear models with exhaustive feature selection and leave-one-out cross-validation computed based on DTI identified sets of three brain regions most predictive of cognitive and motor function; logistic regression models were computed to classify high-risk infants scoring one standard deviation below the mean. Cognitive impairment was predicted (100% sensitivity, 100% specificity; AUC = 1) by near-term right middle-temporal gyrus MD, right cingulate-cingulum MD, and left caudate MD. Motor impairment was predicted (90% sensitivity, 86% specificity; AUC = 0.912) by left precuneus FA, right superior occipital gyrus MD, and right hippocampus FA. Cognitive score variance was explained (29.6%, cross-validated R² = 0.296) by left posterior-limb-of-internal-capsule MD, genu RD, and right fusiform gyrus AD. Motor score variance was explained (31.7%, cross-validated R² = 0.317) by left posterior-limb-of-internal-capsule MD, right parahippocampal gyrus AD, and right middle-temporal gyrus AD. Searching the large DTI feature space more accurately identified neonatal neuroimaging correlates of neurodevelopment.
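
    A sketch of the exhaustive feature selection with leave-one-out cross-validation used above: every 3-feature subset is scored by its cross-validated R², and the best triple is kept. Random data stand in for the regional FA/MD/AD/RD measures, and plain linear regression stands in for the study's models.

```python
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))   # 12 candidate regional DTI features (simulated)
y = X[:, [0, 3, 7]] @ np.array([1.0, -0.8, 0.6]) + rng.normal(scale=0.5, size=60)

best = None
for trio in itertools.combinations(range(X.shape[1]), 3):
    cols = list(trio)
    # Out-of-sample predictions from LOO CV, then cross-validated R^2.
    pred = cross_val_predict(LinearRegression(), X[:, cols], y, cv=LeaveOneOut())
    r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    if best is None or r2 > best[1]:
        best = (cols, r2)

print("best triple:", best[0], "cross-validated R^2 =", round(best[1], 3))
```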

  7. Reconciling the Rigor-Relevance Dilemma in Intellectual Capital Research

    Science.gov (United States)

    Andriessen, Daniel

    2004-01-01

    This paper raises the issue of research methodology for intellectual capital and other types of management research by focusing on the dilemma of rigour versus relevance. The more traditional explanatory approach to research often leads to rigorous results that are not of much help to solve practical problems. This paper describes an alternative…

  8. Paper 3: Content and Rigor of Algebra Credit Recovery Courses

    Science.gov (United States)

    Walters, Kirk; Stachel, Suzanne

    2014-01-01

    This paper describes the content, organization and rigor of the face-to-face (f2f) and online summer algebra courses that were delivered in summers 2011 and 2012. Examining the content of both types of courses is important because research suggests that algebra courses with certain features may be better than others in promoting success for struggling students.…

  9. A rigorous treatment of uncertainty quantification for Silicon damage metrics

    International Nuclear Information System (INIS)

    Griffin, P.

    2016-01-01

    This report summarizes the contributions made by Sandia National Laboratories in support of the International Atomic Energy Agency (IAEA) Nuclear Data Section (NDS) Technical Meeting (TM) on Nuclear Reaction Data and Uncertainties for Radiation Damage. The work focused on a rigorous treatment of the uncertainties affecting the characterization of displacement damage in silicon semiconductors. (author)

  10. Effects of post mortem temperature on rigor tension, shortening and ...

    African Journals Online (AJOL)

    Fully developed rigor mortis in muscle is characterised by maximum loss of extensibility. The course of post mortem changes in ostrich muscle was studied by following isometric tension, shortening and change in pH during the first 24 h post mortem within muscle strips from the muscularis gastrocnemius, pars interna at ...

  11. Characterization of rigor mortis of longissimus dorsi and triceps ...

    African Journals Online (AJOL)

    24 h) of the longissimus dorsi (LD) and triceps brachii (TB) muscles, as well as the shear force (meat tenderness) and colour, were evaluated, aiming at characterizing rigor mortis in the meat during industrial processing. Statistical treatment of the data demonstrated that carcass temperature and pH decreased gradually during ...

  12. Rigor, vigor, and the study of health disparities.

    Science.gov (United States)

    Adler, Nancy; Bush, Nicole R; Pantell, Matthew S

    2012-10-16

    Health disparities research spans multiple fields and methods and documents strong links between social disadvantage and poor health. Associations between socioeconomic status (SES) and health are often taken as evidence for the causal impact of SES on health, but alternative explanations, including the impact of health on SES, are plausible. Studies showing the influence of parents' SES on their children's health provide evidence for a causal pathway from SES to health, but have limitations. Health disparities researchers face tradeoffs between "rigor" and "vigor" in designing studies that demonstrate how social disadvantage becomes biologically embedded and results in poorer health. Rigorous designs aim to maximize precision in the measurement of SES and health outcomes through methods that provide the greatest control over temporal ordering and causal direction. To achieve precision, many studies use a single SES predictor and single disease. However, doing so oversimplifies the multifaceted, entwined nature of social disadvantage and may overestimate the impact of that one variable and underestimate the true impact of social disadvantage on health. In addition, SES effects on overall health and functioning are likely to be greater than effects on any one disease. Vigorous designs aim to capture this complexity and maximize ecological validity through more complete assessment of social disadvantage and health status, but may provide less-compelling evidence of causality. Newer approaches to both measurement and analysis may enable enhanced vigor as well as rigor. Incorporating both rigor and vigor into studies will provide a fuller understanding of the causes of health disparities.

  13. A rigorous proof for the Landauer-Büttiker formula

    DEFF Research Database (Denmark)

    Cornean, Horia Decebal; Jensen, Arne; Moldoveanu, V.

    Recently, Avron et al. shed new light on the question of quantum transport in mesoscopic samples coupled to particle reservoirs by semi-infinite leads. They rigorously treat the case when the sample undergoes an adiabatic evolution, thus generating a current through the leads, and prove the so-called Landauer-Büttiker formula.

  14. Rigorous simulation: a tool to enhance decision making

    Energy Technology Data Exchange (ETDEWEB)

    Neiva, Raquel; Larson, Mel; Baks, Arjan [KBC Advanced Technologies plc, Surrey (United Kingdom)

    2012-07-01

    The world's refining industries continue to be challenged by population growth (increased demand), regional market changes and the pressure of regulatory requirements to operate a 'green' refinery. Environmental regulations are reducing the value and use of heavy fuel oils, driving conversion of more of the heavier products, or even heavier crudes, into lighter products while meeting increasingly stringent transportation fuel specifications. As a result, action is required to establish a sustainable advantage for future success. Rigorous simulation provides a key advantage, improving the timing and efficient use of capital investment and maximizing profitability. Sustainably maximizing profit through rigorous modeling is achieved through enhanced performance monitoring and improved Linear Programme (LP) model accuracy. This paper contains examples of both, and the combination of the two increases overall rates of return. As refiners consider optimizing existing assets and expanding projects, the process agreed upon to achieve these goals is key to successful profit improvement. Rigorous kinetic simulation with detailed fractionation allows existing asset utilization to be optimized while focusing capital investment on the new unit(s), thereby optimizing the overall strategic plan and return on investment. Monitoring of individual process units works as a mechanism for validating and optimizing plant performance; unit monitoring is important to rectify poor performance and increase profitability. The quality of an LP relies upon the accuracy of the data used to generate the LP sub-models. The value of rigorous unit monitoring is that the results are consistently heat- and mass-balanced and are unique to a refiner's unit/refinery. With the improved match to refinery operation, rigorous simulation models will capture more accurately the nonlinearity of those process units and therefore provide correct

  15. Einstein's Theory A Rigorous Introduction for the Mathematically Untrained

    CERN Document Server

    Grøn, Øyvind

    2011-01-01

    This book provides an introduction to the theory of relativity and the mathematics used in its processes. Three elements of the book make it stand apart from previously published books on the theory of relativity. First, the book starts at a lower mathematical level than standard books, with tensor calculus developed to sufficient maturity to make it possible to give detailed calculations of relativistic predictions of practical experiments. Self-contained introductions are given, for example, to vector calculus, differential calculus and integration. Second, in-between calculations have been included, making it possible for the non-technical reader to follow step-by-step calculations. Third, the conceptual development is gradual and rigorous in order to provide the inexperienced reader with a philosophically satisfying understanding of the theory. Einstein's Theory: A Rigorous Introduction for the Mathematically Untrained aims to provide the reader with a sound conceptual understanding of both the special and genera...

  16. Rigor mortis in an unusual position: Forensic considerations.

    Science.gov (United States)

    D'Souza, Deepak H; Harish, S; Rajesh, M; Kiran, J

    2011-07-01

    We report a case in which the dead body was found with rigor mortis in an unusual position. The body was lying on its back with the limbs raised, defying gravity; the direction of the salivary stains on the face also defied gravity. We concluded that the scene where the crime occurred was unlikely to be the final place where the body was found. The clues indicated a homicidal offence and an attempt to destroy the evidence. The forensic value of rigor mortis in an unusual position lies in furthering the investigation and in scientifically confirming two facts: that the scene of death (occurrence) is different from the scene of disposal of the body, and that a time gap exists between the two.

  17. Some rigorous results concerning spectral theory for ideal MHD

    International Nuclear Information System (INIS)

    Laurence, P.

    1986-01-01

    Spectral theory for linear ideal MHD is laid on a firm foundation by defining appropriate function spaces for the operators associated with both the first- and second-order (in time and space) partial differential operators. Thus, it is rigorously established that a self-adjoint extension of F(xi) exists. It is shown that the operator L associated with the first-order formulation satisfies the conditions of the Hille--Yosida theorem. A foundation is laid thereby within which the domains associated with the first- and second-order formulations can be compared. This allows future work in a rigorous setting that will clarify the differences (in the two formulations) between the structure of the generalized eigenspaces corresponding to the marginal point of the spectrum ω = 0

  18. Some rigorous results concerning spectral theory for ideal MHD

    International Nuclear Information System (INIS)

    Laurence, P.

    1985-05-01

    Spectral theory for linear ideal MHD is laid on a firm foundation by defining appropriate function spaces for the operators associated with both the first and second order (in time and space) partial differential operators. Thus, it is rigorously established that a self-adjoint extension of F(xi) exists. It is shown that the operator L associated with the first order formulation satisfies the conditions of the Hille-Yosida theorem. A foundation is laid thereby within which the domains associated with the first and second order formulations can be compared. This allows future work in a rigorous setting that will clarify the differences (in the two formulations) between the structure of the generalized eigenspaces corresponding to the marginal point of the spectrum ω = 0

  19. Rigorous results on measuring the quark charge below color threshold

    International Nuclear Information System (INIS)

    Lipkin, H.J.

    1979-01-01

    Rigorous theorems are presented showing that contributions from a color nonsinglet component of the current to matrix elements of a second order electromagnetic transition are suppressed by factors inversely proportional to the energy of the color threshold. Parton models which obtain matrix elements proportional to the color average of the square of the quark charge are shown to neglect terms of the same order of magnitude as terms kept. (author)

  20. A Rigorous Methodology for Analyzing and Designing Plug-Ins

    DEFF Research Database (Denmark)

    Fasie, Marieta V.; Haxthausen, Anne Elisabeth; Kiniry, Joseph

    2013-01-01

    . This paper addresses these problems by describing a rigorous methodology for analyzing and designing plug-ins. The methodology is grounded in the Extended Business Object Notation (EBON) and covers informal analysis and design of features, GUI, actions, and scenarios, formal architecture design, including...... behavioral semantics, and validation. The methodology is illustrated via a case study whose focus is an Eclipse environment for the RAISE formal method's tool suite....

  1. Striation Patterns of Ox Muscle in Rigor Mortis

    Science.gov (United States)

    Locker, Ronald H.

    1959-01-01

    Ox muscle in rigor mortis offers a selection of myofibrils fixed at varying degrees of contraction from sarcomere lengths of 3.7 to 0.7 µ. A study of this material by phase contrast and electron microscopy has revealed four distinct successive patterns of contraction, including besides the familiar relaxed and contracture patterns, two intermediate types (2.4 to 1.9 µ, 1.8 to 1.5 µ) not previously well described. PMID:14417790

  2. Rigorous Analysis of a Randomised Number Field Sieve

    OpenAIRE

    Lee, Jonathan; Venkatesan, Ramarathnam

    2018-01-01

    Factorisation of integers $n$ is of number theoretic and cryptographic significance. The Number Field Sieve (NFS) introduced circa 1990, is still the state of the art algorithm, but no rigorous proof that it halts or generates relationships is known. We propose and analyse an explicitly randomised variant. For each $n$, we show that these randomised variants of the NFS and Coppersmith's multiple polynomial sieve find congruences of squares in expected times matching the best-known heuristic e...

  3. Reciprocity relations in transmission electron microscopy: A rigorous derivation.

    Science.gov (United States)

    Krause, Florian F; Rosenauer, Andreas

    2017-01-01

    A concise derivation of the principle of reciprocity applied to realistic transmission electron microscopy setups is presented, making use of the multislice formalism. The equivalence of images acquired in conventional and scanning mode is thereby rigorously shown. The conditions for the applicability of the derived reciprocity relations are discussed. Furthermore, the positions of apertures in relation to the corresponding lenses are considered, a subject which has scarcely been addressed in previous publications. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Critical Analysis of Strategies for Determining Rigor in Qualitative Inquiry.

    Science.gov (United States)

    Morse, Janice M

    2015-09-01

    Criteria for determining the trustworthiness of qualitative research were introduced by Guba and Lincoln in the 1980s when they replaced terminology for achieving rigor, reliability, validity, and generalizability with dependability, credibility, and transferability. Strategies for achieving trustworthiness were also introduced. This landmark contribution to qualitative research remains in use today, with only minor modifications in format. Despite the significance of this contribution over the past four decades, the strategies recommended to achieve trustworthiness have not been critically examined. Recommendations for where, why, and how to use these strategies have not been developed, and how well they achieve their intended goal has not been examined. We do not know, for example, what impact these strategies have on the completed research. In this article, I critique these strategies. I recommend that qualitative researchers return to the terminology of social sciences, using rigor, reliability, validity, and generalizability. I then make recommendations for the appropriate use of the strategies recommended to achieve rigor: prolonged engagement, persistent observation, and thick, rich description; inter-rater reliability, negative case analysis; peer review or debriefing; clarifying researcher bias; member checking; external audits; and triangulation. © The Author(s) 2015.

  5. Prediction of fat-free mass by bioelectrical impedance analysis in older adults from developing countries: a cross-validation study using the deuterium dilution method

    International Nuclear Information System (INIS)

    Mateo, H. Aleman; Romero, J. Esparza; Valencia, M.E.

    2010-01-01

Objective: Several limitations of published bioelectrical impedance analysis (BIA) equations have been reported. The aims were to develop in a multiethnic, elderly population a new prediction equation and cross-validate it along with some published BIA equations for estimating fat-free mass, using deuterium oxide dilution as the reference method. Design and setting: Cross-sectional study of elderly from five developing countries. Methods: Total body water (TBW) measured by deuterium dilution was used to determine fat-free mass (FFM) in 383 subjects. Anthropometric and BIA variables were also measured. Only 377 subjects were included in the analysis, randomly divided into development and cross-validation groups after stratification by gender. Stepwise model selection was used to generate the model and Bland-Altman analysis was used to test agreement. Results: FFM = 2.95 - 3.89 (Gender) + 0.514 (Ht²/Z) + 0.090 (Waist) + 0.156 (Body weight). The model fit parameters, R², total F-ratio, and SEE, were 0.88, 314.3, and 3.3, respectively. None of the published BIA equations met the criteria for agreement. The new BIA equation underestimated FFM by just 0.3 kg in the cross-validation sample. The means of FFM measured by TBW and predicted by the new BIA equation were not significantly different; 95% of the differences were within the limits of agreement of -6.3 to 6.9 kg of FFM. There was no significant association between the differences and their averages (r = 0.008, p = 0.2). Conclusions: This new BIA equation offers a valid option compared with some of the currently published BIA equations to estimate FFM in elderly subjects from five developing countries. (Authors)
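As a minimal illustration, the sketch below evaluates the reported prediction equation. The coding of Gender (here 1 = female, 0 = male) and the example values are assumptions, since the abstract does not specify them; Ht is height in cm and Z the impedance in ohms.

```python
def predict_ffm(gender: int, height_cm: float, impedance_ohm: float,
                waist_cm: float, weight_kg: float) -> float:
    """Fat-free mass (kg) from the published equation; gender coding assumed."""
    ht2_over_z = height_cm ** 2 / impedance_ohm
    return (2.95 - 3.89 * gender + 0.514 * ht2_over_z
            + 0.090 * waist_cm + 0.156 * weight_kg)

# Hypothetical example subject:
print(round(predict_ffm(gender=1, height_cm=155, impedance_ohm=600,
                        waist_cm=90, weight_kg=65), 1))   # ~37.9 kg
```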

  6. Genome-Wide Association Studies and Comparison of Models and Cross-Validation Strategies for Genomic Prediction of Quality Traits in Advanced Winter Wheat Breeding Lines

    Directory of Open Access Journals (Sweden)

    Peter S. Kristensen

    2018-02-01

Full Text Available The aim of this study was to identify SNP markers associated with five important wheat quality traits (grain protein content, Zeleny sedimentation, test weight, thousand-kernel weight, and falling number), and to investigate the predictive abilities of GBLUP and Bayesian Power Lasso models for genomic prediction of these traits. In total, 635 winter wheat lines from two breeding cycles in the Danish plant breeding company Nordic Seed A/S were phenotyped for the quality traits and genotyped for 10,802 SNPs. GWAS were performed using single-marker regression and Bayesian Power Lasso models. SNPs with large effects on Zeleny sedimentation were found on chromosomes 1B, 1D, and 5D. However, GWAS failed to identify single SNPs with significant effects on the other traits, indicating that these traits were controlled by many QTL with small effects. The predictive abilities of the models for genomic prediction were studied using different cross-validation strategies. Leave-One-Out cross-validations resulted in correlations between observed phenotypes corrected for fixed effects and genomic estimated breeding values of 0.50 for grain protein content, 0.66 for thousand-kernel weight, 0.70 for falling number, 0.71 for test weight, and 0.79 for Zeleny sedimentation. Alternative cross-validations showed that the genetic relationship between lines in the training and validation sets had a bigger impact on predictive abilities than the number of lines included in the training set. Using Bayesian Power Lasso instead of GBLUP models gave similar or slightly higher predictive abilities. Genomic prediction based on all SNPs was more effective than prediction based on a few associated SNPs.
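To make the Leave-One-Out procedure concrete, here is a rough sketch of how a predictive ability of this kind can be computed. Ridge regression stands in for the GBLUP/Bayesian Power Lasso models actually used, and the data are synthetic toy values rather than the study's 635 lines and 10,802 SNPs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 200                                  # toy dimensions only
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))    # SNP genotype codes
X -= X.mean(axis=0)                             # center marker columns
y = X @ rng.normal(0, 0.1, p) + rng.normal(0, 1.0, n)  # phenotype, fixed effects removed

lam = 10.0                                      # shrinkage, stands in for the variance ratio
preds = np.empty(n)
for i in range(n):                              # leave-one-out cross-validation
    mask = np.arange(n) != i
    Xt, yt = X[mask], y[mask]
    b = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ yt)  # ridge/GBLUP-like fit
    preds[i] = X[i] @ b

print("predictive ability r =", np.corrcoef(y, preds)[0, 1])
```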

  7. New rigorous asymptotic theorems for inverse scattering amplitudes

    International Nuclear Information System (INIS)

    Lomsadze, Sh.Yu.; Lomsadze, Yu.M.

    1984-01-01

The rigorous asymptotic theorems, both of integral and local types, obtained earlier and establishing logarithmic and in some cases even power correlations between the real and imaginary parts of the scattering amplitudes F± are extended to the inverse amplitudes 1/F±. One also succeeds in establishing power correlations of a new type between the real and imaginary parts, both for the amplitudes themselves and for the inverse ones. All the assertions obtained can conveniently be tested in high energy experiments in which the amplitudes show asymptotic behaviour

  8. Derivation and Cross-Validation of Cutoff Scores for Patients With Schizophrenia Spectrum Disorders on WAIS-IV Digit Span-Based Performance Validity Measures.

    Science.gov (United States)

    Glassmire, David M; Toofanian Ross, Parnian; Kinney, Dominique I; Nitch, Stephen R

    2016-06-01

    Two studies were conducted to identify and cross-validate cutoff scores on the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span-based embedded performance validity (PV) measures for individuals with schizophrenia spectrum disorders. In Study 1, normative scores were identified on Digit Span-embedded PV measures among a sample of patients (n = 84) with schizophrenia spectrum diagnoses who had no known incentive to perform poorly and who put forth valid effort on external PV tests. Previously identified cutoff scores resulted in unacceptable false positive rates and lower cutoff scores were adopted to maintain specificity levels ≥90%. In Study 2, the revised cutoff scores were cross-validated within a sample of schizophrenia spectrum patients (n = 96) committed as incompetent to stand trial. Performance on Digit Span PV measures was significantly related to Full Scale IQ in both studies, indicating the need to consider the intellectual functioning of examinees with psychotic spectrum disorders when interpreting scores on Digit Span PV measures. © The Author(s) 2015.

  9. Comparison of a new expert elicitation model with the Classical Model, equal weights and single experts, using a cross-validation technique

    Energy Technology Data Exchange (ETDEWEB)

    Flandoli, F. [Dip.to di Matematica Applicata, Universita di Pisa, Pisa (Italy); Giorgi, E. [Dip.to di Matematica Applicata, Universita di Pisa, Pisa (Italy); Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Pisa, via della Faggiola 32, 56126 Pisa (Italy); Aspinall, W.P. [Dept. of Earth Sciences, University of Bristol, and Aspinall and Associates, Tisbury (United Kingdom); Neri, A., E-mail: neri@pi.ingv.it [Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Pisa, via della Faggiola 32, 56126 Pisa (Italy)

    2011-10-15

The problem of ranking and weighting experts' performances when quantitative judgments are being elicited for decision support is considered. A new scoring model, the Expected Relative Frequency model, is presented, based on the closeness between central values provided by the expert and known values used for calibration. Using responses from experts in five different elicitation datasets, a cross-validation technique is used to compare this new approach with the Cooke Classical Model, the Equal Weights model, and individual experts. The analysis is performed using alternative reward schemes designed to capture proficiency either in quantifying uncertainty, or in estimating true central values. Results show that although there is only a limited probability that one approach is consistently better than another, the Cooke Classical Model is generally the most suitable for assessing uncertainties, whereas the new ERF model should be preferred if the goal is central value estimation accuracy. - Highlights: • A new expert elicitation model, named Expected Relative Frequency (ERF), is presented. • A cross-validation approach to evaluate the performance of different elicitation models is applied. • The new ERF model shows the best performance with respect to the point-wise estimates.

  10. Sonoelasticity to monitor mechanical changes during rigor and ageing.

    Science.gov (United States)

    Ayadi, A; Culioli, J; Abouelkaram, S

    2007-06-01

We propose the use of sonoelasticity as a non-destructive method to monitor changes in the resistance of muscle fibres, unaffected by connective tissue. Vibrations were applied at low frequency to induce oscillations in soft tissues and an ultrasound transducer was used to detect the motions. The experiments were carried out on the M. biceps femoris muscles of three beef cattle. In addition to the sonoelasticity measurements, the changes in meat during rigor and ageing were followed by measurements of both the mechanical resistance of myofibres and pH. The variations of mechanical resistance and pH were compared to those of the sonoelastic variables (velocity and attenuation) at two frequencies. The relationships between pH and velocity or attenuation, and between velocity or attenuation and the stress at 20% deformation, were highly correlated. We concluded that sonoelasticity is a non-destructive method that can be used to monitor mechanical changes in muscle fibres during rigor mortis and ageing.

  11. Rigorous quantum limits on monitoring free masses and harmonic oscillators

    Science.gov (United States)

    Roy, S. M.

    2018-03-01

There are heuristic arguments proposing that the accuracy of monitoring the position of a free mass m is limited by the standard quantum limit (SQL): $\sigma^2(X(t)) \geq \sigma^2(X(0)) + (t^2/m^2)\,\sigma^2(P(0)) \geq \hbar t/m$, where $\sigma^2(X(t))$ and $\sigma^2(P(t))$ denote variances of the Heisenberg representation position and momentum operators. Yuen [Phys. Rev. Lett. 51, 719 (1983), 10.1103/PhysRevLett.51.719] discovered that there are contractive states for which this result is incorrect. Here I prove universally valid rigorous quantum limits (RQL), viz. rigorous upper and lower bounds on $\sigma^2(X(t))$ in terms of $\sigma^2(X(0))$ and $\sigma^2(P(0))$, given by Eq. (12) for a free mass and by Eq. (36) for an oscillator. I also obtain the maximally contractive and maximally expanding states which saturate the RQL, and use the contractive states to set up an Ozawa-type measurement theory with accuracies respecting the RQL but beating the standard quantum limit. The contractive states for oscillators improve on the Schrödinger coherent states of constant variance and may be useful for gravitational wave detection and optical communication.

  12. A rigorous test for a new conceptual model for collisions

    International Nuclear Information System (INIS)

    Peixoto, E.M.A.; Mu-Tao, L.

    1979-01-01

A rigorous theoretical foundation for the previously proposed model is formulated and applied to electron scattering by H2 in the gas phase. A rigorous treatment of the interaction potential between the incident electron and the hydrogen molecule is carried out to calculate differential cross sections for 1 keV electrons, using Glauber's approximation and Wang's molecular wave function for the ground electronic state of H2. Moreover, it is shown for the first time that, when adequately done, the omission of two-center terms does not adversely influence the results of molecular calculations. It is shown that the new model is far superior to the Independent Atom Model (or Independent Particle Model). The accuracy and simplicity of the new model suggest that it may be fruitfully applied to the description of other collision phenomena (e.g., in molecular beam experiments and nuclear physics). A new technique is presented for calculations involving two-center integrals within the framework of the Glauber approximation for scattering. (Author) [pt

  13. Volume Holograms in Photopolymers: Comparison between Analytical and Rigorous Theories

    Science.gov (United States)

    Gallego, Sergi; Neipp, Cristian; Estepa, Luis A.; Ortuño, Manuel; Márquez, Andrés; Francés, Jorge; Pascual, Inmaculada; Beléndez, Augusto

    2012-01-01

There is no doubt that the concept of volume holography has led to an incredibly great amount of scientific research and technological applications. One of these applications is the use of volume holograms as optical memories, and in particular, the use of a photosensitive medium like a photopolymeric material to record information in all its volume. In this work we analyze the applicability of Kogelnik's Coupled Wave theory to the study of volume holograms recorded in photopolymers. Some of the theoretical models in the literature describing the mechanism of hologram formation in photopolymer materials use Kogelnik's theory to analyze the gratings recorded in photopolymeric materials. If Kogelnik's theory cannot be applied, it is necessary to use a more general Coupled Wave theory (CW) or the Rigorous Coupled Wave theory (RCW). The RCW does not incorporate any approximation and thus, since it is rigorous, permits judging the accuracy of the approximations included in Kogelnik's and CW theories. In this article, a comparison between the predictions of the three theories for phase transmission diffraction gratings is carried out. We have demonstrated the agreement between the predictions of CW and RCW and the validity of Kogelnik's theory only for gratings with spatial frequencies higher than 500 lines/mm for the usual values of the refractive index modulations obtained in photopolymers.

  14. Volume Holograms in Photopolymers: Comparison between Analytical and Rigorous Theories

    Directory of Open Access Journals (Sweden)

    Augusto Beléndez

    2012-08-01

Full Text Available There is no doubt that the concept of volume holography has led to an incredibly great amount of scientific research and technological applications. One of these applications is the use of volume holograms as optical memories, and in particular, the use of a photosensitive medium like a photopolymeric material to record information in all its volume. In this work we analyze the applicability of Kogelnik's Coupled Wave theory to the study of volume holograms recorded in photopolymers. Some of the theoretical models in the literature describing the mechanism of hologram formation in photopolymer materials use Kogelnik's theory to analyze the gratings recorded in photopolymeric materials. If Kogelnik's theory cannot be applied, it is necessary to use a more general Coupled Wave theory (CW) or the Rigorous Coupled Wave theory (RCW). The RCW does not incorporate any approximation and thus, since it is rigorous, permits judging the accuracy of the approximations included in Kogelnik's and CW theories. In this article, a comparison between the predictions of the three theories for phase transmission diffraction gratings is carried out. We have demonstrated the agreement between the predictions of CW and RCW and the validity of Kogelnik's theory only for gratings with spatial frequencies higher than 500 lines/mm for the usual values of the refractive index modulations obtained in photopolymers.

  15. A methodology for the rigorous verification of plasma simulation codes

    Science.gov (United States)

    Riva, Fabio

    2016-10-01

The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, which is a mathematical issue targeted at assessing that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, targeted at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
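As an illustration of the solution verification step based on Richardson extrapolation, the sketch below estimates the observed order of accuracy from a scalar quantity computed on three systematically refined grids. The values are hypothetical and this is the textbook form of the procedure, not necessarily the exact variant used for GBS.

```python
import numpy as np

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of accuracy p and Richardson-extrapolated value,
    for a quantity computed on grids with spacings h, h/r, h/r^2."""
    p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
    f_extrapolated = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return p, f_extrapolated

p, f_star = observed_order(1.0640, 1.0160, 1.0040)   # hypothetical grid results
print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_star:.4f}")
```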

  16. Experimental evaluation of rigor mortis IX. The influence of the breaking (mechanical solution) on the development of rigor mortis.

    Science.gov (United States)

    Krompecher, Thomas; Gilles, André; Brandt-Casadevall, Conception; Mangin, Patrice

    2008-04-07

Objective measurements were carried out to study the possible re-establishment of rigor mortis in rats after "breaking" (mechanical solution). Our experiments showed that:
* Cadaveric rigidity can re-establish after breaking.
* A significant rigidity can reappear if the breaking occurs before the process is complete.
* Rigidity will be considerably weaker after the breaking.
* The time course of the intensity does not change in comparison to the controls: the re-establishment begins immediately after the breaking; maximal values are reached at the same time as in the controls; and the course of the resolution is the same as in the controls.

  17. Reframing Rigor: A Modern Look at Challenge and Support in Higher Education

    Science.gov (United States)

    Campbell, Corbin M.; Dortch, Deniece; Burt, Brian A.

    2018-01-01

    This chapter describes the limitations of the traditional notions of academic rigor in higher education, and brings forth a new form of rigor that has the potential to support student success and equity.

  18. Rigorous force field optimization principles based on statistical distance minimization

    Energy Technology Data Exchange (ETDEWEB)

    Vlcek, Lukas, E-mail: vlcekl1@ornl.gov [Chemical Sciences Division, Geochemistry & Interfacial Sciences Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6110 (United States); Joint Institute for Computational Sciences, University of Tennessee, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6173 (United States); Chialvo, Ariel A. [Chemical Sciences Division, Geochemistry & Interfacial Sciences Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6110 (United States)

    2015-10-14

    We use the concept of statistical distance to define a measure of distinguishability between a pair of statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the model’s static measurable properties to those of the target. We exploit this feature to define a rigorous basis for the development of accurate and robust effective molecular force fields that are inherently compatible with coarse-grained experimental data. The new model optimization principles and their efficient implementation are illustrated through selected examples, whose outcome demonstrates the higher robustness and predictive accuracy of the approach compared to other currently used methods, such as force matching and relative entropy minimization. We also discuss relations between the newly developed principles and established thermodynamic concepts, which include the Gibbs-Bogoliubov inequality and the thermodynamic length.
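For orientation, one standard formalization of statistical distance between discrete probability distributions is the Bhattacharyya angle, sketched below. Whether this is precisely the measure adopted by the authors is an assumption here, but minimizing such a distance over force-field parameters captures the stated idea of making the model statistically indistinguishable from its target.

```python
import numpy as np

def statistical_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Bhattacharyya angle between two discrete distributions (one common
    definition of statistical distance; assumed, not taken from the paper)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.arccos(np.clip(np.sqrt(p * q).sum(), 0.0, 1.0)))

# Hypothetical histograms of some measurable property for target and model:
target = np.array([0.10, 0.40, 0.30, 0.20])
model  = np.array([0.15, 0.35, 0.30, 0.20])
print(statistical_distance(target, model))   # smaller = less distinguishable
```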

  19. From everyday communicative figurations to rigorous audience news repertoires

    DEFF Research Database (Denmark)

    Kobbernagel, Christian; Schrøder, Kim Christian

    2016-01-01

    In the last couple of decades there has been an unprecedented explosion of news media platforms and formats, as a succession of digital and social media have joined the ranks of legacy media. We live in a ‘hybrid media system’ (Chadwick, 2013), in which people build their cross-media news...... repertoires from the ensemble of old and new media available. This article presents an innovative mixed-method approach with considerable explanatory power to the exploration of patterns of news media consumption. This approach tailors Q-methodology in the direction of a qualitative study of news consumption......, in which a card sorting exercise serves to translate the participants’ news media preferences into a form that enables the researcher to undertake a rigorous factor-analytical construction of their news consumption repertoires. This interpretive, factor-analytical procedure, which results in the building...

  20. Fast and Rigorous Assignment Algorithm Multiple Preference and Calculation

    Directory of Open Access Journals (Sweden)

    Ümit Çiftçi

    2010-03-01

Full Text Available The goal of this paper is to develop an algorithm that evaluates students and then places them according to their desired choices and dependent preferences. The developed algorithm is also used to implement software. The success and accuracy of the software, as well as of the algorithm, are tested by applying it to the ability test at Beykent University. This ability test is repeated several times in order to fill all available places in the Fine Arts Faculty departments in every academic year. The algorithm has been shown to be very fast and rigorous after application in the 2008-2009 and 2009-2010 academic years. Key Words: Assignment algorithm, student placement, ability test

  1. Student’s rigorous mathematical thinking based on cognitive style

    Science.gov (United States)

    Fitriyani, H.; Khasanah, U.

    2017-12-01

The purpose of this research was to determine the rigorous mathematical thinking (RMT) of mathematics education students in solving math problems in terms of reflective and impulsive cognitive styles. The research used a descriptive qualitative approach. Subjects in this research were 4 students of reflective and impulsive cognitive styles, each group consisting of a male and a female subject. Data collection techniques included a problem-solving test and interviews. Research data were analyzed using the Miles and Huberman model: data reduction, data presentation, and drawing conclusions. The results showed that the impulsive male subject used the three levels of cognitive function required for RMT, namely qualitative thinking, quantitative thinking with precision, and relational thinking, completely, while the other three subjects were only able to use cognitive function at the qualitative thinking level of RMT. Therefore, the impulsive male subject has a better RMT ability than the other three research subjects.

  2. Rigorous Quantum Field Theory A Festschrift for Jacques Bros

    CERN Document Server

    Monvel, Anne Boutet; Iagolnitzer, Daniel; Moschella, Ugo

    2007-01-01

    Jacques Bros has greatly advanced our present understanding of rigorous quantum field theory through numerous fundamental contributions. This book arose from an international symposium held in honour of Jacques Bros on the occasion of his 70th birthday, at the Department of Theoretical Physics of the CEA in Saclay, France. The impact of the work of Jacques Bros is evident in several articles in this book. Quantum fields are regarded as genuine mathematical objects, whose various properties and relevant physical interpretations must be studied in a well-defined mathematical framework. The key topics in this volume include analytic structures of Quantum Field Theory (QFT), renormalization group methods, gauge QFT, stability properties and extension of the axiomatic framework, QFT on models of curved spacetimes, QFT on noncommutative Minkowski spacetime. Contributors: D. Bahns, M. Bertola, R. Brunetti, D. Buchholz, A. Connes, F. Corbetta, S. Doplicher, M. Dubois-Violette, M. Dütsch, H. Epstein, C.J. Fewster, K....

  3. Constitutional, legal, and jurisprudential development of the principle of subsidiary rigor

    Directory of Open Access Journals (Sweden)

    Germán Eduardo Cifuentes Sandoval

    2013-09-01

Full Text Available In Colombia, state administration of the environment is carried out through the National Environmental System (SINA). SINA is made up of state entities that coexist under a mixed arrangement of centralization and decentralization. SINA's decentralization expresses itself at the administrative and territorial levels, and the entities functioning under this structure are expected to act in a coordinated way in order to reach the objectives set out in the national environmental policy. To achieve coordinated environmental administration through the entities that make up SINA, Colombian environmental legislation includes three basic principles: (1) the principle of regional harmony ("armonía regional"); (2) the principle of normative gradation ("gradación normativa"); and (3) the principle of subsidiary rigor ("rigor subsidiario"). These principles are contained in Article 63 of Law 99 of 1993. While equivalents of the first two can be found in other norms of the Colombian legal system, this is not the case for subsidiary rigor, whose elements are unique to environmental regulation and do not appear similar to those of the principle of subsidiarity ("subsidiariedad") found in Article 288 of the Political Constitution. Subsidiary rigor grants decentralized entities a special power to modify the current environmental legislation in order to defend the local ecological patrimony. It is an administrative power, grounded in decentralized autonomy, that allows these entities to take the place of the regulatory power otherwise reserved to the central level, on the condition that the new regulation be more demanding than that of the central level.

  4. Rigorous patient-prosthesis matching of Perimount Magna aortic bioprosthesis.

    Science.gov (United States)

    Nakamura, Hiromasa; Yamaguchi, Hiroki; Takagaki, Masami; Kadowaki, Tasuku; Nakao, Tatsuya; Amano, Atsushi

    2015-03-01

Severe patient-prosthesis mismatch, defined as an effective orifice area index ≤0.65 cm²/m², has been associated with poor long-term survival after aortic valve replacement. Reported rates of severe mismatch involving the Perimount Magna aortic bioprosthesis range from 4% to 20% in patients with a small annulus. Between June 2008 and August 2011, 251 patients (mean age 70.5 ± 10.2 years; mean body surface area 1.55 ± 0.19 m²) underwent aortic valve replacement with a Perimount Magna bioprosthesis, with or without concomitant procedures. We performed our procedure with rigorous patient-prosthesis matching to implant a valve appropriately sized to each patient, and carried out annular enlargement when a 19-mm valve did not fit. The bioprosthetic performance was evaluated by transthoracic echocardiography predischarge and at 1 and 2 years after surgery. Overall hospital mortality was 1.6%. Only 5 (2.0%) patients required annular enlargement. The mean follow-up period was 19.1 ± 10.7 months with a 98.4% completion rate. Predischarge data showed a mean effective orifice area index of 1.21 ± 0.20 cm²/m². Moderate mismatch, defined as an effective orifice area index ≤0.85 cm²/m², developed in 4 (1.6%) patients. None developed severe mismatch. Data at 1 and 2 years showed only two cases of moderate mismatch; neither was severe. Rigorous patient-prosthesis matching maximized the performance of the Perimount Magna, and no severe mismatch resulted in this Japanese population of aortic valve replacement patients. © The Author(s) 2014.
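The matching arithmetic behind these definitions is simple enough to state as code: the effective orifice area index (EOAi) is the prosthesis' effective orifice area divided by the patient's body surface area, classified against the thresholds quoted above. Input values are hypothetical.

```python
def mismatch_category(eoa_cm2: float, bsa_m2: float) -> str:
    """Classify patient-prosthesis mismatch from EOA (cm^2) and BSA (m^2)."""
    eoai = eoa_cm2 / bsa_m2          # effective orifice area index, cm^2/m^2
    if eoai <= 0.65:
        return "severe mismatch"
    if eoai <= 0.85:
        return "moderate mismatch"
    return "no significant mismatch"

# Hypothetical example near the study's mean BSA of 1.55 m^2:
print(mismatch_category(eoa_cm2=1.6, bsa_m2=1.55))   # EOAi ~ 1.03 cm^2/m^2
```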

  5. Cross-Validation of a Glucose-Insulin-Glucagon Pharmacodynamics Model for Simulation using Data from Patients with Type 1 Diabetes

    DEFF Research Database (Denmark)

    Wendt, Sabrina Lyngbye; Ranjan, Ajenthen; Møller, Jan Kloppenborg

    2017-01-01

...... for concentrations of glucagon, insulin, and glucose. We fitted pharmacokinetic (PK) models to insulin and glucagon data using maximum likelihood and maximum a posteriori estimation methods. Similarly, we fitted a pharmacodynamic (PD) model to glucose data. The PD model included multiplicative effects of insulin and glucagon on EGP. Bias and precision of PD model test fits were assessed by mean predictive error (MPE) and mean absolute predictive error (MAPE). Results: Assuming constant variables in a subject across nonoutlier visits and using thresholds of ±15% MPE and 20% MAPE, we accepted at least one and at most three PD model test fits in each of the seven subjects. Thus, we successfully validated the PD model by leave-one-out cross-validation in seven out of eight T1D patients. Conclusions: The PD model accurately simulates glucose excursions based on plasma insulin and glucagon concentrations. The reported......
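For reference, the bias and precision measures named above can be computed as follows; the percentage normalization by the observed values is a common convention and is assumed here, since the abstract does not spell it out.

```python
import numpy as np

def mpe_mape(observed: np.ndarray, predicted: np.ndarray):
    """Mean predictive error (bias) and mean absolute predictive error (precision), in %."""
    rel_err = (predicted - observed) / observed
    return 100 * rel_err.mean(), 100 * np.abs(rel_err).mean()

# Hypothetical glucose values (mmol/L); acceptance thresholds from the abstract:
glucose_obs  = np.array([5.2, 6.1, 7.4, 8.0])
glucose_pred = np.array([5.0, 6.5, 7.1, 8.6])
mpe, mape = mpe_mape(glucose_obs, glucose_pred)
print(f"MPE = {mpe:.1f}% (accept if within ±15%), MAPE = {mape:.1f}% (accept if <= 20%)")
```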

  6. Body fat measurement by bioelectrical impedance and air displacement plethysmography: a cross-validation study to design bioelectrical impedance equations in Mexican adults

    Directory of Open Access Journals (Sweden)

    Valencia Mauro E

    2007-08-01

Full Text Available Abstract Background The study of body composition in specific populations by techniques such as bio-impedance analysis (BIA) requires validation based on standard reference methods. The aim of this study was to develop and cross-validate a predictive equation for bioelectrical impedance using air displacement plethysmography (ADP) as the standard method to measure body composition in Mexican adult men and women. Methods This study included 155 male and female subjects from northern Mexico, 20–50 years of age, from low, middle, and upper income levels. Body composition was measured by ADP. Body weight (BW, kg) and height (Ht, cm) were obtained by standard anthropometric techniques. Resistance, R (ohms) and reactance, Xc (ohms) were also measured. A random-split method was used to obtain two samples: one was used to derive the equation by the "all possible regressions" procedure and was cross-validated in the other sample to test predicted versus measured values of fat-free mass (FFM). Results and Discussion The final model was: FFM (kg) = 0.7374 × (Ht²/R) + 0.1763 × (BW) − 0.1773 × (Age) + 0.1198 × (Xc) − 2.4658. R² was 0.97; the square root of the mean square error (SRMSE) was 1.99 kg, and the pure error (PE) was 2.96. There was no difference between FFM predicted by the new equation (48.57 ± 10.9 kg) and that measured by ADP (48.43 ± 11.3 kg). The new equation did not differ from the line of identity, had a high R² and a low SRMSE, and showed no significant bias (0.87 ± 2.84 kg). Conclusion The new bioelectrical impedance equation based on the two-compartment model (2C) was accurate, precise, and free of bias. This equation can be used to assess body composition and nutritional status in populations similar in anthropometric and physical characteristics to this sample.

  7. Remote sensing and GIS-based landslide hazard analysis and cross-validation using multivariate logistic regression model on three test areas in Malaysia

    Science.gov (United States)

    Pradhan, Biswajeet

    2010-05-01

This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis in the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map land cover and vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, land cover, rainfall precipitation, and normalized difference vegetation index (NDVI), were extracted from the spatial database and the logistic regression coefficient of each factor was computed. The landslide hazard was then analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also using the logistic regression coefficients calculated from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification of the model, the results of the analyses were then compared with the field-verified landslide locations. Among the three cases applying the logistic regression coefficients in the same study area, the case of Selangor based on the Selangor logistic regression coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross application of logistic regression coefficients in the other two areas, the case of Selangor based on the logistic coefficients of Cameron showed the highest (90%) prediction accuracy, whereas the case of Penang based on the Selangor logistic regression coefficients showed the lowest accuracy (79%). Qualitatively, the cross
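A compressed sketch of the hazard-mapping step described above: fit a logistic model of landslide occurrence against standardized factor values, then use the fitted coefficients (possibly those estimated in another study area, as in the cross-validation) to produce per-cell hazard probabilities. The data and factor choices below are synthetic stand-ins, not the paper's datasets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 500
X = rng.normal(size=(n, 3))                     # e.g., slope, curvature, NDVI (standardized)
y = (X @ np.array([1.5, -0.8, -1.0]) + rng.logistic(size=n)) > 0  # synthetic occurrences

model = LogisticRegression().fit(X, y)          # multivariate logistic regression
hazard = model.predict_proba(X)[:, 1]           # landslide hazard index per cell
print("training accuracy:", model.score(X, y))  # cf. the 79-94% figures above
```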

  8. Emergency cricothyrotomy for trismus caused by instantaneous rigor in cardiac arrest patients.

    Science.gov (United States)

    Lee, Jae Hee; Jung, Koo Young

    2012-07-01

    Instantaneous rigor as muscle stiffening occurring in the moment of death (or cardiac arrest) can be confused with rigor mortis. If trismus is caused by instantaneous rigor, orotracheal intubation is impossible and a surgical airway should be secured. Here, we report 2 patients who had emergency cricothyrotomy for trismus caused by instantaneous rigor. This case report aims to help physicians understand instantaneous rigor and to emphasize the importance of securing a surgical airway quickly on the occurrence of trismus. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Memory sparing, fast scattering formalism for rigorous diffraction modeling

    Science.gov (United States)

    Iff, W.; Kämpfe, T.; Jourlin, Y.; Tishchenko, A. V.

    2017-07-01

    The basics and algorithmic steps of a novel scattering formalism suited for memory sparing and fast electromagnetic calculations are presented. The formalism, called ‘S-vector algorithm’ (by analogy with the known scattering-matrix algorithm), allows the calculation of the collective scattering spectra of individual layered micro-structured scattering objects. A rigorous method of linear complexity is applied to model the scattering at individual layers; here the generalized source method (GSM) resorting to Fourier harmonics as basis functions is used as one possible method of linear complexity. The concatenation of the individual scattering events can be achieved sequentially or in parallel, both having pros and cons. The present development will largely concentrate on a consecutive approach based on the multiple reflection series. The latter will be reformulated into an implicit formalism which will be associated with an iterative solver, resulting in improved convergence. The examples will first refer to 1D grating diffraction for the sake of simplicity and intelligibility, with a final 2D application example.

  10. Rigorous Results for the Distribution of Money on Connected Graphs

    Science.gov (United States)

    Lanchier, Nicolas; Reed, Stephanie

    2018-05-01

    This paper is concerned with general spatially explicit versions of three stochastic models for the dynamics of money that have been introduced and studied numerically by statistical physicists: the uniform reshuffling model, the immediate exchange model and the model with saving propensity. All three models consist of systems of economical agents that consecutively engage in pairwise monetary transactions. Computer simulations performed in the physics literature suggest that, when the number of agents and the average amount of money per agent are large, the limiting distribution of money as time goes to infinity approaches the exponential distribution for the first model, the gamma distribution with shape parameter two for the second model and a distribution similar but not exactly equal to a gamma distribution whose shape parameter depends on the saving propensity for the third model. The main objective of this paper is to give rigorous proofs of these conjectures and also extend these conjectures to generalizations of the first two models and a variant of the third model that include local rather than global interactions, i.e., instead of choosing the two interacting agents uniformly at random from the system, the agents are located on the vertex set of a general connected graph and can only interact with their neighbors.
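A toy simulation of the first of these models, the uniform reshuffling model in its original global-interaction (mean-field) version rather than the graph-based variant studied here, makes the conjectured exponential limit easy to observe numerically; all values below are arbitrary.

```python
import numpy as np

# Uniform reshuffling: at each step two randomly chosen agents pool their
# money and split the pool uniformly at random. Total money is conserved.
rng = np.random.default_rng(1)
n_agents, steps = 1000, 100_000
money = np.full(n_agents, 10.0)        # average 10 units per agent

for _ in range(steps):
    i, j = rng.choice(n_agents, size=2, replace=False)
    pool = money[i] + money[j]
    u = rng.uniform()
    money[i], money[j] = u * pool, (1 - u) * pool

# For large n_agents and many steps, a histogram of `money` should be close
# to an exponential distribution with mean 10, as conjectured above.
print("mean:", money.mean(), "min:", money.min(), "max:", money.max())
```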

  11. Rigorous vector wave propagation for arbitrary flat media

    Science.gov (United States)

    Bos, Steven P.; Haffert, Sebastiaan Y.; Keller, Christoph U.

    2017-08-01

Precise modelling of the (off-axis) point spread function (PSF) to identify geometrical and polarization aberrations is important for many optical systems. In order to characterise the PSF of the system in all Stokes parameters, an end-to-end simulation of the system has to be performed in which Maxwell's equations are rigorously solved. We present the first results of a Python code that we are developing to perform multiscale end-to-end wave propagation simulations that include all relevant physics. Currently we can handle plane-parallel near- and far-field vector diffraction effects of propagating waves in homogeneous isotropic and anisotropic materials, refraction and reflection of flat parallel surfaces, interference effects in thin films and unpolarized light. We show that the code has a numerical precision on the order of 10⁻¹⁶ for non-absorbing isotropic and anisotropic materials. For absorbing materials the precision is on the order of 10⁻⁸. The capabilities of the code are demonstrated by simulating a converging beam reflecting from a flat aluminium mirror at normal incidence.

  12. Dynamics of harmonically-confined systems: Some rigorous results

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Zhigang, E-mail: zwu@physics.queensu.ca; Zaremba, Eugene, E-mail: zaremba@sparky.phy.queensu.ca

    2014-03-15

    In this paper we consider the dynamics of harmonically-confined atomic gases. We present various general results which are independent of particle statistics, interatomic interactions and dimensionality. Of particular interest is the response of the system to external perturbations which can be either static or dynamic in nature. We prove an extended Harmonic Potential Theorem which is useful in determining the damping of the centre of mass motion when the system is prepared initially in a highly nonequilibrium state. We also study the response of the gas to a dynamic external potential whose position is made to oscillate sinusoidally in a given direction. We show in this case that either the energy absorption rate or the centre of mass dynamics can serve as a probe of the optical conductivity of the system. -- Highlights: •We derive various rigorous results on the dynamics of harmonically-confined atomic gases. •We derive an extension of the Harmonic Potential Theorem. •We demonstrate the link between the energy absorption rate in a harmonically-confined system and the optical conductivity.

  13. Concrete ensemble Kalman filters with rigorous catastrophic filter divergence.

    Science.gov (United States)

    Kelly, David; Majda, Andrew J; Tong, Xin T

    2015-08-25

    The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature.

  14. PRO development: rigorous qualitative research as the crucial foundation.

    Science.gov (United States)

    Lasch, Kathryn Eilene; Marquis, Patrick; Vigneux, Marc; Abetz, Linda; Arnould, Benoit; Bayliss, Martha; Crawford, Bruce; Rosa, Kathleen

    2010-10-01

    Recently published articles have described criteria to assess qualitative research in the health field in general, but very few articles have delineated qualitative methods to be used in the development of Patient-Reported Outcomes (PROs). In fact, how PROs are developed with subject input through focus groups and interviews has been given relatively short shrift in the PRO literature when compared to the plethora of quantitative articles on the psychometric properties of PROs. If documented at all, most PRO validation articles give little for the reader to evaluate the content validity of the measures and the credibility and trustworthiness of the methods used to develop them. Increasingly, however, scientists and authorities want to be assured that PRO items and scales have meaning and relevance to subjects. This article was developed by an international, interdisciplinary group of psychologists, psychometricians, regulatory experts, a physician, and a sociologist. It presents rigorous and appropriate qualitative research methods for developing PROs with content validity. The approach described combines an overarching phenomenological theoretical framework with grounded theory data collection and analysis methods to yield PRO items and scales that have content validity.

  15. Rigorous derivation of porous-media phase-field equations

    Science.gov (United States)

    Schmuck, Markus; Kalliadasis, Serafim

    2017-11-01

The evolution of interfaces in Complex heterogeneous Multiphase Systems (CheMSs) plays a fundamental role in a wide range of scientific fields such as thermodynamic modelling of phase transitions, materials science, or as a computational tool for interfacial flow studies or material design. Here, we focus on phase-field equations in CheMSs such as porous media. To the best of our knowledge, we present the first rigorous derivation of error estimates for fourth-order, upscaled, and nonlinear evolution equations. For CheMSs with heterogeneity ε, we obtain the convergence rate ε^{1/4}, which governs the error between the solution of the new upscaled formulation and the solution of the microscopic phase-field problem. This error behaviour has recently been validated computationally. Due to the wide range of application of phase-field equations, we expect this upscaled formulation to allow for new modelling, analytic, and computational perspectives for interfacial transport and phase transformations in CheMSs. This work was supported by EPSRC, UK, through Grant Nos. EP/H034587/1, EP/L027186/1, EP/L025159/1, EP/L020564/1, EP/K008595/1, and EP/P011713/1 and from ERC via Advanced Grant No. 247031.

  16. Rigorous time slicing approach to Feynman path integrals

    CERN Document Server

    Fujiwara, Daisuke

    2017-01-01

    This book proves that Feynman's original definition of the path integral actually converges to the fundamental solution of the Schrödinger equation at least in the short term if the potential is differentiable sufficiently many times and its derivatives of order equal to or higher than two are bounded. The semi-classical asymptotic formula up to the second term of the fundamental solution is also proved by a method different from that of Birkhoff. A bound of the remainder term is also proved. The Feynman path integral is a method of quantization using the Lagrangian function, whereas Schrödinger's quantization uses the Hamiltonian function. These two methods are believed to be equivalent. But equivalence is not fully proved mathematically, because, compared with Schrödinger's method, there is still much to be done concerning rigorous mathematical treatment of Feynman's method. Feynman himself defined a path integral as the limit of a sequence of integrals over finite-dimensional spaces which is obtained by...

  17. Stochastic Geometry and Quantum Gravity: Some Rigorous Results

    Science.gov (United States)

    Zessin, H.

    The aim of these lectures is a short introduction into some recent developments in stochastic geometry which have one of its origins in simplicial gravity theory (see Regge Nuovo Cimento 19: 558-571, 1961). The aim is to define and construct rigorously point processes on spaces of Euclidean simplices in such a way that the configurations of these simplices are simplicial complexes. The main interest then is concentrated on their curvature properties. We illustrate certain basic ideas from a mathematical point of view. An excellent representation of this area can be found in Schneider and Weil (Stochastic and Integral Geometry, Springer, Berlin, 2008. German edition: Stochastische Geometrie, Teubner, 2000). In Ambjørn et al. (Quantum Geometry Cambridge University Press, Cambridge, 1997) you find a beautiful account from the physical point of view. More recent developments in this direction can be found in Ambjørn et al. ("Quantum gravity as sum over spacetimes", Lect. Notes Phys. 807. Springer, Heidelberg, 2010). After an informal axiomatic introduction into the conceptual foundations of Regge's approach the first lecture recalls the concepts and notations used. It presents the fundamental zero-infinity law of stochastic geometry and the construction of cluster processes based on it. The second lecture presents the main mathematical object, i.e. Poisson-Delaunay surfaces possessing an intrinsic random metric structure. The third and fourth lectures discuss their ergodic behaviour and present the two-dimensional Regge model of pure simplicial quantum gravity. We terminate with the formulation of basic open problems. Proofs are given in detail only in a few cases. In general the main ideas are developed. Sufficiently complete references are given.

  18. RIGOROUS GEOREFERENCING OF ALSAT-2A PANCHROMATIC AND MULTISPECTRAL IMAGERY

    Directory of Open Access Journals (Sweden)

    I. Boukerch

    2013-04-01

Full Text Available The exploitation of the full geometric capabilities of High-Resolution Satellite Imagery (HRSI) requires the development of an appropriate sensor orientation model. Several authors have studied this problem; generally there are two categories of geometric models: physical and empirical models. Based on the analysis of the metadata provided with ALSAT-2A, a rigorous pushbroom camera model can be developed. This model has been successfully applied to many very high resolution imagery systems. The relation between the image and ground coordinates, given by the time-dependent collinearity equations and involving several coordinate systems, has been tested. The interior orientation parameters must be integrated in the model; these parameters can be estimated from the viewing angles corresponding to the pointing directions of any detector, values that are derived from cubic polynomials provided in the metadata. The developed model integrates all the necessary elements with 33 unknowns. Approximate values of the 33 unknown parameters may be derived from the information contained in the metadata files provided with the imagery technical specifications, or they are simply fixed to zero; the condition equation is then linearized and solved using SVD in a least-squares sense in order to correct the initial values using a suitable number of well-distributed GCPs. Using ALSAT-2A images over the town of Toulouse in the southwest of France, three experiments were performed. The first concerns 2D accuracy analysis using several sets of parameters. The second concerns GCP number and distribution. The third experiment concerns georeferencing the multispectral image by applying the model calculated from the panchromatic image.

  19. A rigorous derivation of gravitational self-force

    International Nuclear Information System (INIS)

    Gralla, Samuel E; Wald, Robert M

    2008-01-01

There is general agreement that the MiSaTaQuWa equations should describe the motion of a 'small body' in general relativity, taking into account the leading order self-force effects. However, previous derivations of these equations have made a number of ad hoc assumptions and/or contain a number of unsatisfactory features. For example, all previous derivations have invoked, without proper justification, the step of 'Lorenz gauge relaxation', wherein the linearized Einstein equation is written in the form appropriate to the Lorenz gauge, but the Lorenz gauge condition is then not imposed-thereby making the resulting equations for the metric perturbation inequivalent to the linearized Einstein equations. (Such a 'relaxation' of the linearized Einstein equations is essential in order to avoid the conclusion that 'point particles' move on geodesics.) In this paper, we analyze the issue of 'particle motion' in general relativity in a systematic and rigorous way by considering a one-parameter family of metrics, $g_{ab}(\lambda)$, corresponding to having a body (or black hole) that is 'scaled down' to zero size and mass in an appropriate manner. We prove that the limiting worldline of such a one-parameter family must be a geodesic of the background metric, $g_{ab}(\lambda = 0)$. Gravitational self-force-as well as the force due to coupling of the spin of the body to curvature-then arises as a first-order perturbative correction in $\lambda$ to this worldline. No assumptions are made in our analysis apart from the smoothness and limit properties of the one-parameter family of metrics, $g_{ab}(\lambda)$. Our approach should provide a framework for systematically calculating higher order corrections to gravitational self-force, including higher multipole effects, although we do not attempt to go beyond first-order calculations here. The status of the MiSaTaQuWa equations is explained

  20. Rigorous covariance propagation of geoid errors to geodetic MDT estimates

    Science.gov (United States)

    Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

    2012-04-01

The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if at all, attacked empirically. In this study we attempt to perform, based on the full gravity VCM, rigorous error propagation to the derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component in dependence of the harmonic degree, and the impact of using or not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering acts also on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
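The core of such a rigorous propagation is the linear covariance law: for a linearized functional y = A x of the spherical harmonic coefficients x with full VCM Sigma_x, the derived quantities carry Sigma_y = A Sigma_x A^T. The sketch below contrasts full propagation with a variances-only shortcut; the matrices are random stand-ins, not actual GOCE products.

```python
import numpy as np

rng = np.random.default_rng(3)
n_coeff, n_out = 50, 5
L = rng.normal(size=(n_coeff, n_coeff))
Sigma_x = L @ L.T / n_coeff                   # stand-in full VCM (SPD)
A = rng.normal(size=(n_out, n_coeff))         # linearized functional, e.g. velocities

Sigma_y_full = A @ Sigma_x @ A.T              # rigorous: keeps all correlations
Sigma_y_diag = (A * np.diag(Sigma_x)) @ A.T   # shortcut: ignores off-diagonal terms

# The difference between the two diagonals quantifies the impact of
# using or not using the covariances, as investigated in the study above.
print(np.diag(Sigma_y_full) - np.diag(Sigma_y_diag))
```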

  1. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    Science.gov (United States)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of observations are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, which is a consequence of the sparse distribution of data points for this case study. The influence of gross errors on the LSC prediction results is also investigated via lower cutoff CVEs. After elimination of outliers, the RMS of this type of error is also reduced, by 19.5% within a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the
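Although the paper's own formula is specific to LSC, the flavor of a direct, vectorized CVE computation can be illustrated with the closely related leave-one-out identity for linear predictors: if the prediction at the data points is ŷ = Hy, the cross-validation error at point i equals residual_i / (1 - H_ii), with no refitting. The covariance model and data below are hypothetical stand-ins, and the paper's exact LSC formula may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 40))
y = np.sin(x) + rng.normal(0, 0.2, 40)

# LSC-style linear smoother: H = C_signal (C_signal + C_noise)^(-1)
d = np.abs(x[:, None] - x[None, :])
C_signal = np.exp(-(d / 2.0) ** 2)        # assumed Gaussian signal covariance
C_noise = 0.04 * np.eye(len(x))           # homoscedastic noise, sigma = 0.2
H = C_signal @ np.linalg.inv(C_signal + C_noise)

residuals = y - H @ y
cve = residuals / (1.0 - np.diag(H))      # entire CVE vector in one pass
print("RMS of CVEs:", float(np.sqrt(np.mean(cve ** 2))))
```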

  2. Cross-validation of the factorial structure of the Neighborhood Environment Walkability Scale (NEWS and its abbreviated form (NEWS-A

    Directory of Open Access Journals (Sweden)

    Cerin Ester

    2009-06-01

Full Text Available Abstract Background The Neighborhood Environment Walkability Scale (NEWS) and its abbreviated form (NEWS-A) assess perceived environmental attributes believed to influence physical activity. A multilevel confirmatory factor analysis (MCFA) conducted on a sample from Seattle, WA showed that, at the respondent level, the factor-analyzable items of the NEWS and NEWS-A measured 11 and 10 constructs of perceived neighborhood environment, respectively. At the census blockgroup (used by the US Census Bureau as a subunit of census tracts) level, the MCFA yielded five factors for both NEWS and NEWS-A. The aim of this study was to cross-validate the individual- and blockgroup-level measurement models of the NEWS and NEWS-A in a geographical location and population different from those used in the original validation study. Methods A sample of 912 adults was recruited from 16 selected neighborhoods (116 census blockgroups) in the Baltimore, MD region. Neighborhoods were stratified according to their socio-economic status and transport-related walkability level measured using Geographic Information Systems. Participants self-completed the NEWS. MCFA was used to cross-validate the individual- and blockgroup-level measurement models of the NEWS and NEWS-A. Results The data provided sufficient support for the factorial validity of the original individual-level measurement models, which consisted of 11 (NEWS) and 10 (NEWS-A) correlated factors. The original blockgroup-level measurement model of the NEWS and NEWS-A showed poor fit to the data and required substantial modifications. These included the combining of aspects of building aesthetics with safety from crime into one factor; the separation of natural aesthetics and building aesthetics into two factors; and for the NEWS-A, the separation of presence of sidewalks/walking routes from other infrastructure for walking. Conclusion This study provided support for the generalizability of the individual

  3. Cross validation of gas chromatography-flame photometric detection and gas chromatography-mass spectrometry methods for measuring dialkylphosphate metabolites of organophosphate pesticides in human urine.

    Science.gov (United States)

    Prapamontol, Tippawan; Sutan, Kunrunya; Laoyang, Sompong; Hongsibsong, Surat; Lee, Grace; Yano, Yukiko; Hunter, Ronald Elton; Ryan, P Barry; Barr, Dana Boyd; Panuwet, Parinya

    2014-01-01

    We report two analytical methods for the measurement of dialkylphosphate (DAP) metabolites of organophosphate pesticides in human urine. These methods were independently developed/modified and implemented in two separate laboratories and cross validated. The aim was to develop simple, cost effective, and reliable methods that could use available resources and sample matrices in Thailand and the United States. While several methods already exist, we found that direct application of these methods required modification of sample preparation and chromatographic conditions to render accurate, reliable data. The problems encountered with existing methods were attributable to urinary matrix interferences, and differences in the pH of urine samples and reagents used during the extraction and derivatization processes. Thus, we provide information on key parameters that require attention during method modification and execution that affect the ruggedness of the methods. The methods presented here employ gas chromatography (GC) coupled with either flame photometric detection (FPD) or electron impact ionization-mass spectrometry (EI-MS) with isotopic dilution quantification. The limits of detection were reported from 0.10 ng/mL urine to 2.5 ng/mL urine (for GC-FPD), while the limits of quantification were reported from 0.25 ng/mL urine to 2.5 ng/mL urine (for GC-MS), for all six common DAP metabolites (i.e., dimethylphosphate, dimethylthiophosphate, dimethyldithiophosphate, diethylphosphate, diethylthiophosphate, and diethyldithiophosphate). Each method showed a relative recovery range of 94-119% (for GC-FPD) and 92-103% (for GC-MS), and relative standard deviations (RSD) of less than 20%. Cross-validation was performed on the same set of urine samples (n=46) collected from pregnant women residing in the agricultural areas of northern Thailand. The results from split sample analysis from both laboratories agreed well for each metabolite, suggesting that each method can produce
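
    The validation statistics quoted here (relative recovery and relative standard deviation) are standard quantities; a minimal sketch of their computation, with made-up replicate values standing in for real spiked-urine data:

        import numpy as np

        # Hypothetical replicate measurements of a urine sample spiked at 2.5 ng/mL.
        spiked_level = 2.5                                   # ng/mL
        measured = np.array([2.41, 2.55, 2.38, 2.60, 2.47])  # ng/mL, made-up values

        relative_recovery = measured.mean() / spiked_level * 100.0   # percent
        rsd = measured.std(ddof=1) / measured.mean() * 100.0         # percent

        print(f"relative recovery: {relative_recovery:.1f}%  RSD: {rsd:.1f}%")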

  4. Genomic prediction using different estimation methodology, blending and cross-validation techniques for growth traits and visual scores in Hereford and Braford cattle.

    Science.gov (United States)

    Campos, G S; Reimann, F A; Cardoso, L L; Ferreira, C E R; Junqueira, V S; Schmidt, P I; Braccini Neto, J; Yokoo, M J I; Sollero, B P; Boligon, A A; Cardoso, F F

    2018-05-07

    The objective of the present study was to evaluate the accuracy and bias of direct and blended genomic predictions using different methods and cross-validation techniques for growth traits (weight and weight gains) and visual scores (conformation, precocity, muscling and size) obtained at weaning and at yearling in Hereford and Braford breeds. Phenotypic data contained 126,290 animals belonging to the Delta G Connection genetic improvement program, and a set of 3,545 animals genotyped with the 50K chip and 131 sires with the 777K. After quality control, 41,045 markers remained for all animals. An animal model was used to estimate (co)variance components and to predict breeding values, which were later used to calculate the deregressed estimated breeding values (DEBV). Animals with genotype and phenotype for the traits studied were divided into four or five groups by random and k-means clustering cross-validation strategies. The accuracies of the direct genomic values (DGV) were of moderate to high magnitude for traits measured at weaning and at yearling, ranging from 0.19 to 0.45 for k-means and 0.23 to 0.78 for random clustering among all traits. The greatest gain in relation to the pedigree BLUP (PBLUP) was 9.5% with the BayesB method with both the k-means and the random clustering. Blended genomic value accuracies ranged from 0.19 to 0.56 for k-means and from 0.21 to 0.82 for random clustering. The analyses using the historical pedigree and phenotypes contributed additional information to calculate the GEBV and, in general, the largest gains were for the single-step (ssGBLUP) method in bivariate analyses, with a mean increase of 43.00% among all traits measured at weaning and of 46.27% for those evaluated at yearling. The accuracy values for the marker effects estimation methods were lower for k-means clustering, indicating that the training set relationship to the selection candidates is a major factor affecting accuracy of genomic predictions. The gains in
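
    The k-means cross-validation strategy mentioned here places genomically similar animals into the same fold, so that validation animals are less related to the training set than under random folds. A minimal sketch of that general idea, assuming genotypes in a numpy matrix and using scikit-learn's KMeans; the study's actual grouping criteria may differ:

        import numpy as np
        from sklearn.cluster import KMeans

        def kmeans_cv_folds(genotypes, n_folds=5, seed=0):
            """Assign each animal to a CV fold by clustering its genotype vector.

            Unlike random folds, k-means folds keep genomically similar animals
            together, reducing relatedness between training and validation sets.
            """
            km = KMeans(n_clusters=n_folds, n_init=10, random_state=seed)
            return km.fit_predict(genotypes)   # fold label per animal

        # Toy usage: 200 animals x 500 markers coded 0/1/2.
        rng = np.random.default_rng(1)
        geno = rng.integers(0, 3, size=(200, 500)).astype(float)
        folds = kmeans_cv_folds(geno)
        print("animals per fold:", np.bincount(folds))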

  5. Slips of action and sequential decisions: a cross-validation study of tasks assessing habitual and goal-directed action control

    Directory of Open Access Journals (Sweden)

    Zsuzsika Sjoerds

    2016-12-01

    Instrumental learning and decision-making rely on two parallel systems: a goal-directed and a habitual system. In the past decade, several paradigms have been developed to study these systems in animals and humans by means of, e.g., overtraining, devaluation procedures and sequential decision-making. These different paradigms are thought to measure the same constructs, but cross-validation has rarely been investigated. In this study we compared two widely used paradigms that assess aspects of goal-directed and habitual behavior. We correlated parameters from a two-step sequential decision-making task that assesses model-based and model-free learning with a slips-of-action paradigm that assesses the ability to suppress cue-triggered, learnt responses when the outcome has been devalued and is therefore no longer desirable. Model-based control during the two-step task showed a very moderate positive correlation with goal-directed devaluation sensitivity, whereas model-free control did not. Interestingly, parameter estimates of model-based and goal-directed behavior in the two tasks were positively correlated with higher-order cognitive measures (e.g., visual short-term memory). These cognitive measures seemed to (at least partly) mediate the association between model-based control during sequential decision-making and goal-directed behavior after instructed devaluation. This study provides moderate support for a common framework to describe the propensity towards goal-directed behavior as measured with two frequently used tasks. However, we have to caution that the amount of shared variance between the goal-directed and model-based system in both tasks was rather low, suggesting that each task does also pick up distinct aspects of goal-directed behavior. Further investigation of the commonalities and differences between the model-free and habit systems as measured with these, and other, tasks is needed. Also, a follow-up cross-validation on the neural

  6. Experimental evaluation of rigor mortis. III. Comparative study of the evolution of rigor mortis in different sized muscle groups in rats.

    Science.gov (United States)

    Krompecher, T; Fryc, O

    1978-01-01

    The use of new methods and an appropriate apparatus has allowed us to make successive measurements of rigor mortis and a study of its evolution in the rat. By a comparative examination on the front and hind limbs, we have determined the following: (1) The muscular mass of the hind limbs is 2.89 times greater than that of the front limbs. (2) In the initial phase rigor mortis is more pronounced in the front limbs. (3) The front and hind limbs reach maximum rigor mortis at the same time and this state is maintained for 2 hours. (4) Resolution of rigor mortis is accelerated in the front limbs during the initial phase, but both front and hind limbs reach complete resolution at the same time.

  7. How Individual Scholars Can Reduce the Rigor-Relevance Gap in Management Research

    OpenAIRE

    Wolf, Joachim; Rosenberg, Timo

    2012-01-01

    This paper discusses a number of avenues management scholars could follow to reduce the existing gap between scientific rigor and practical relevance without relativizing the importance of the first goal dimension. Such changes are necessary because many management studies do not fully exploit the possibilities to increase their practical relevance while maintaining scientific rigor. We argue that this rigor-relevance gap is not only the consequence of the currently prevailing institutional c...

  8. RIGOR MORTIS AND THE INFLUENCE OF CALCIUM AND MAGNESIUM SALTS UPON ITS DEVELOPMENT.

    Science.gov (United States)

    Meltzer, S J; Auer, J

    1908-01-01

    Calcium salts hasten and magnesium salts retard the development of rigor mortis, that is, when these salts are administered subcutaneously or intravenously. When injected intra-arterially, concentrated solutions of both kinds of salts cause nearly an immediate onset of a strong stiffness of the muscles which is apparently a contraction, brought on by a stimulation caused by these salts and due to osmosis. This contraction, if strong, passes over without a relaxation into a real rigor. This form of rigor may be classed as work-rigor (Arbeitsstarre). In animals, at least in frogs, with intact cords, the early contraction and the following rigor are stronger than in animals with destroyed cord. If M/8 solutions (nearly equimolecular to "physiological" solutions of sodium chloride) are used, even when injected intra-arterially, calcium salts hasten and magnesium salts retard the onset of rigor. The hastening and retardation in this case as well as in the cases of subcutaneous and intravenous injections, are ion effects and essentially due to the cations, calcium and magnesium. In the rigor hastened by calcium the effects of the extensor muscles mostly prevail; in the rigor following magnesium injection, on the other hand, either the flexor muscles prevail or the muscles become stiff in the original position of the animal at death. There seems to be no difference in the degree of stiffness in the final rigor, only the onset and development of the rigor is hastened in the case of the one salt and retarded in the other. Calcium hastens also the development of heat rigor. No positive facts were obtained with regard to the effect of magnesium upon heat rigor. Calcium also hastens and magnesium retards the onset of rigor in the left ventricle of the heart. No definite data were gathered with regard to the effects of these salts upon the right ventricle.

  9. Sediment transport patterns in the San Francisco Bay Coastal System from cross-validation of bedform asymmetry and modeled residual flux

    Science.gov (United States)

    Barnard, Patrick L.; Erikson, Li H.; Elias, Edwin P.L.; Dartnell, Peter; Barnard, P.L.; Jaffee, B.E.; Schoellhamer, D.H.

    2013-01-01

    The morphology of ~ 45,000 bedforms from 13 multibeam bathymetry surveys was used as a proxy for identifying net bedload sediment transport directions and pathways throughout the San Francisco Bay estuary and adjacent outer coast. The spatially-averaged shape asymmetry of the bedforms reveals distinct pathways of ebb and flood transport. Additionally, the region-wide, ebb-oriented asymmetry of 5% suggests net seaward-directed transport within the estuarine-coastal system, with significant seaward asymmetry at the mouth of San Francisco Bay (11%), through the northern reaches of the Bay (7–8%), and among the largest bedforms (21% for λ > 50 m). This general indication for the net transport of sand to the open coast strongly suggests that anthropogenic removal of sediment from the estuary, particularly along clearly defined seaward transport pathways, will limit the supply of sand to chronically eroding, open-coast beaches. The bedform asymmetry measurements significantly agree (up to ~ 76%) with modeled annual residual transport directions derived from a hydrodynamically-calibrated numerical model, and the orientation of adjacent, flow-sculpted seafloor features such as mega-flute structures, providing a comprehensive validation of the technique. The methods described in this paper to determine well-defined, cross-validated sediment transport pathways can be applied to estuarine-coastal systems globally where bedforms are present. The results can inform and improve regional sediment management practices to more efficiently utilize often limited sediment resources and mitigate current and future sediment supply-related impacts.

  10. Estimation of influential points in any data set from coefficient of determination and its leave-one-out cross-validated counterpart.

    Science.gov (United States)

    Tóth, Gergely; Bodai, Zsolt; Héberger, Károly

    2013-10-01

    Coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted by Q² or R²cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reversed aspect to determine uncommon points, i.e., influential points, in any data set. The term (1 - Q²)/(1 - R²) corresponds to the ratio of the predictive residual sum of squares and the residual sum of squares. The ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 - Q²)/(1 - R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns the model builders to verify the training set, to perform influence analysis or even to change to robust modeling.
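
    Since 1 - R² = RSS/TSS and 1 - Q² = PRESS/TSS, the ratio above is simply PRESS/RSS. A minimal numpy sketch for a linear model, using the hat-matrix identity for the leave-one-out (PRESS) residuals; the toy data are synthetic:

        import numpy as np

        def press_rss_ratio(X, y):
            """Return (1 - Q^2) / (1 - R^2) = PRESS / RSS for an OLS model."""
            H = X @ np.linalg.inv(X.T @ X) @ X.T
            e = y - H @ y                        # ordinary residuals
            e_loo = e / (1.0 - np.diag(H))       # leave-one-out (PRESS) residuals
            return np.sum(e_loo**2) / np.sum(e**2)

        rng = np.random.default_rng(2)
        x = rng.uniform(0, 1, 40)
        X = np.column_stack([np.ones_like(x), x])
        y = 3.0 * x + rng.normal(scale=0.2, size=x.size)
        y[0] += 2.0                              # plant one outlying response
        print("PRESS/RSS:", round(press_rss_ratio(X, y), 3))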

  11. Shuffling cross-validation-bee algorithm as a new descriptor selection method for retention studies of pesticides in biopartitioning micellar chromatography.

    Science.gov (United States)

    Zarei, Kobra; Atabati, Morteza; Ahmadi, Monire

    2017-05-04

    Bee algorithm (BA) is an optimization algorithm inspired by the natural foraging behaviour of honey bees; it can be applied to feature selection. In this paper, shuffling cross-validation-BA (CV-BA) was applied to select the best descriptors to describe the retention factor (log k) in the biopartitioning micellar chromatography (BMC) of 79 heterogeneous pesticides. Six descriptors were obtained using BA, and the selected descriptors were then applied to model development using multiple linear regression (MLR). Descriptor selection was also performed using stepwise, genetic algorithm and simulated annealing methods, with MLR applied to model development, and the results were compared with those obtained from shuffling CV-BA. The results showed that shuffling CV-BA can be applied as a powerful descriptor selection method. Support vector machine (SVM) was also applied to model development using the six descriptors selected by BA. The statistical results obtained using SVM were better than those obtained using MLR: the root mean square error (RMSE) and correlation coefficient (R) for the whole data set (training and test) using shuffling CV-BA-MLR were 0.1863 and 0.9426, respectively, while for the shuffling CV-BA-SVM method they were 0.0704 and 0.9922, respectively.
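
    The fitness evaluation inside such a wrapper-style selection loop can be as simple as a shuffled cross-validated error for each candidate descriptor subset. A minimal scikit-learn sketch of that scoring step only; the bee-algorithm search itself is omitted, and the data below are synthetic:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import ShuffleSplit, cross_val_score

        def subset_cv_rmse(X, y, columns, seed=0):
            """Shuffled-CV RMSE of an MLR model built on the chosen descriptor columns."""
            cv = ShuffleSplit(n_splits=20, test_size=0.2, random_state=seed)
            scores = cross_val_score(LinearRegression(), X[:, columns], y,
                                     scoring="neg_root_mean_squared_error", cv=cv)
            return -scores.mean()

        # Toy data: 79 'pesticides' x 30 candidate descriptors.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(79, 30))
        y = X[:, [0, 4, 7]] @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.1, size=79)
        print("CV-RMSE of subset {0, 4, 7}:", round(subset_cv_rmse(X, y, [0, 4, 7]), 3))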

  12. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  13. ATP, IMP, and glycogen in cod muscle at onset and during development of rigor mortis depend on the sampling location

    DEFF Research Database (Denmark)

    Cappeln, Gertrud; Jessen, Flemming

    2002-01-01

    Variation in glycogen, ATP, and IMP contents within individual cod muscles was studied in ice-stored fish during the progress of rigor mortis. The rigor index was determined before muscle samples for chemical analyses were taken at 16 different positions on the fish. During development of rigor, the contents of glycogen and ATP decreased differently in relation to rigor index depending on sampling location. Although fish were considered to be in strong rigor according to the rigor index method, parts of the muscle were not in rigor, as high ATP concentrations were found in dorsal and tail muscle.

  14. Trends in Methodological Rigor in Intervention Research Published in School Psychology Journals

    Science.gov (United States)

    Burns, Matthew K.; Klingbeil, David A.; Ysseldyke, James E.; Petersen-Brown, Shawna

    2012-01-01

    Methodological rigor in intervention research is important for documenting evidence-based practices and has been a recent focus in legislation, including the No Child Left Behind Act. The current study examined the methodological rigor of intervention research in four school psychology journals since the 1960s. Intervention research has increased…

  15. Effect of Pre-rigor Salting Levels on Physicochemical and Textural Properties of Chicken Breast Muscles.

    Science.gov (United States)

    Kim, Hyun-Wook; Hwang, Ko-Eun; Song, Dong-Heon; Kim, Yong-Jae; Ham, Youn-Kyung; Yeo, Eui-Joo; Jeong, Tae-Jun; Choi, Yun-Sang; Kim, Cheon-Jei

    2015-01-01

    This study was conducted to evaluate the effect of pre-rigor salting level (0-4% NaCl concentration) on physicochemical and textural properties of pre-rigor chicken breast muscles. The pre-rigor chicken breast muscles were de-boned 10 min post-mortem and salted within 25 min post-mortem. An increase in pre-rigor salting level led to the formation of a high ultimate pH of chicken breast muscles at post-mortem 24 h. The addition of a minimum of 2% NaCl significantly improved water holding capacity, cooking loss, protein solubility, and hardness when compared to the non-salting chicken breast muscle (p<0.05). An increase in pre-rigor salting level caused the inhibition of myofibrillar protein degradation and the acceleration of lipid oxidation. However, NaCl concentrations of 3% and 4% showed no great differences in the results of physicochemical and textural properties due to pre-rigor salting effects (p>0.05). Therefore, our study certified the pre-rigor salting effect of chicken breast muscle salted with 2% NaCl when compared to post-rigor muscle salted with an equal NaCl concentration, and suggests that a 2% NaCl concentration is minimally required to ensure a definite pre-rigor salting effect on chicken breast muscle.

  16. Effect of Pre-rigor Salting Levels on Physicochemical and Textural Properties of Chicken Breast Muscles

    Science.gov (United States)

    Choi, Yun-Sang

    2015-01-01

    This study was conducted to evaluate the effect of pre-rigor salting level (0-4% NaCl concentration) on physicochemical and textural properties of pre-rigor chicken breast muscles. The pre-rigor chicken breast muscles were de-boned 10 min post-mortem and salted within 25 min post-mortem. An increase in pre-rigor salting level led to the formation of a high ultimate pH of chicken breast muscles at post-mortem 24 h. The addition of a minimum of 2% NaCl significantly improved water holding capacity, cooking loss, protein solubility, and hardness when compared to the non-salting chicken breast muscle (p<0.05). An increase in pre-rigor salting level caused the inhibition of myofibrillar protein degradation and the acceleration of lipid oxidation. However, NaCl concentrations of 3% and 4% showed no great differences in the results of physicochemical and textural properties due to pre-rigor salting effects (p>0.05). Therefore, our study certified the pre-rigor salting effect of chicken breast muscle salted with 2% NaCl when compared to post-rigor muscle salted with an equal NaCl concentration, and suggests that a 2% NaCl concentration is minimally required to ensure a definite pre-rigor salting effect on chicken breast muscle. PMID:26761884

  17. Rigorous bounds on the free energy of electron-phonon models

    NARCIS (Netherlands)

    Raedt, Hans De; Michielsen, Kristel

    1997-01-01

    We present a collection of rigorous upper and lower bounds to the free energy of electron-phonon models with linear electron-phonon interaction. These bounds are used to compare different variational approaches. It is shown rigorously that the ground states corresponding to the sharpest bounds do

  18. The Relationship between Project-Based Learning and Rigor in STEM-Focused High Schools

    Science.gov (United States)

    Edmunds, Julie; Arshavsky, Nina; Glennie, Elizabeth; Charles, Karen; Rice, Olivia

    2016-01-01

    Project-based learning (PjBL) is an approach often favored in STEM classrooms, yet some studies have shown that teachers struggle to implement it with academic rigor. This paper explores the relationship between PjBL and rigor in the classrooms of ten STEM-oriented high schools. Utilizing three different data sources reflecting three different…

  19. Moving beyond Data Transcription: Rigor as Issue in Representation of Digital Literacies

    Science.gov (United States)

    Hagood, Margaret Carmody; Skinner, Emily Neil

    2015-01-01

    Rigor in qualitative research has been based upon criteria of credibility, dependability, confirmability, and transferability. Drawing upon articles published during our editorship of the "Journal of Adolescent & Adult Literacy," we illustrate how the use of digital data in research study reporting may enhance these areas of rigor,…

  20. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    Science.gov (United States)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors (aligned with the EURO-CORDEX experiment) and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a Europe-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data
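
    The 5-fold scheme described here splits the 1979-2008 period into consecutive 6-year blocks rather than random years. A minimal sketch of such a blocked split, assuming records indexed by calendar year (the fold boundaries follow the abstract; everything else is illustrative):

        import numpy as np

        def consecutive_year_folds(years, first=1979, block=6, n_folds=5):
            """Fold label per record: 1979-84 -> 0, 1985-90 -> 1, ..., 2003-08 -> 4."""
            labels = (np.asarray(years) - first) // block
            if labels.min() < 0 or labels.max() >= n_folds:
                raise ValueError("years outside the 1979-2008 experiment period")
            return labels

        years = np.repeat(np.arange(1979, 2009), 365)   # one record per day, toy calendar
        folds = consecutive_year_folds(years)
        for k in range(5):
            test = folds == k
            print(f"fold {k}: test years {years[test].min()}-{years[test].max()}")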

  1. Cross-validation of the Dot Counting Test in a large sample of credible and non-credible patients referred for neuropsychological testing.

    Science.gov (United States)

    McCaul, Courtney; Boone, Kyle B; Ermshar, Annette; Cottingham, Maria; Victor, Tara L; Ziegler, Elizabeth; Zeller, Michelle A; Wright, Matthew

    2018-01-18

    To cross-validate the Dot Counting Test in a large neuropsychological sample. Dot Counting Test scores were compared in credible (n = 142) and non-credible (n = 335) neuropsychology referrals. Non-credible patients scored significantly higher than credible patients on all Dot Counting Test scores. While the original E-score cut-off of ≥17 achieved excellent specificity (96.5%), it was associated with mediocre sensitivity (52.8%). However, the cut-off could be substantially lowered to ≥13.80 while still maintaining adequate specificity (≥90%), raising sensitivity to 70.0%. Examination of non-credible subgroups revealed that Dot Counting Test sensitivity in feigned mild traumatic brain injury (mTBI) was 55.8%, whereas sensitivity was 90.6% in patients with non-credible cognitive dysfunction in the context of claimed psychosis, and 81.0% in patients with non-credible cognitive performance in depression or severe TBI. Thus, the Dot Counting Test may have a particular role in detection of non-credible cognitive symptoms in claimed psychiatric disorders. As an alternative to use of the E-score, failure on ≥1 cut-offs applied to individual Dot Counting Test scores (≥6.0″ for mean grouped dot counting time, ≥10.0″ for mean ungrouped dot counting time, and ≥4 errors) occurred in 11.3% of the credible sample, while nearly two-thirds (63.6%) of the non-credible sample failed one or more of these cut-offs. An E-score cut-off of 13.80, or failure on ≥1 individual score cut-offs, resulted in few false positive identifications in credible patients and achieved high sensitivity (64.0-70.0%), and these criteria therefore appear appropriate for use in identifying neurocognitive performance invalidity.
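
    The two decision rules reported above are simple to encode. A sketch using exactly the cut-offs quoted in this abstract (times in seconds); combining the two rules into one flag, as done here, is an illustrative choice, since the abstract presents them as alternatives:

        def dot_counting_invalid(e_score, grouped_time, ungrouped_time, errors):
            """Flag non-credible performance using the cut-offs quoted above."""
            by_e_score = e_score >= 13.80
            by_individual = (grouped_time >= 6.0 or      # mean grouped dot counting time
                             ungrouped_time >= 10.0 or   # mean ungrouped dot counting time
                             errors >= 4)                # error count
            return by_e_score or by_individual

        # Example: passes the E-score rule but fails the grouped-time cut-off.
        print(dot_counting_invalid(e_score=12.1, grouped_time=6.5,
                                   ungrouped_time=8.2, errors=2))   # True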

  2. WE-DE-201-04: Cross Validation of Knowledge-Based Treatment Planning for Prostate LDR Brachytherapy Using Principal Component Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Roper, J; Ghavidel, B; Godette, K; Schreibmann, E [Winship Cancer Institute of Emory University, GA (United States); Chanyavanich, V [Rocky Mountain Cancer Centers, CO (United States)

    2016-06-15

    Purpose: To validate a knowledge-based algorithm for prostate LDR brachytherapy treatment planning. Methods: A dataset of 100 cases was compiled from an active prostate seed implant service. Cases were randomized into 10 subsets. For each subset, the 90 remaining library cases were registered to a common reference frame and then characterized on a point-by-point basis using principal component analysis (PCA). Each test case was converted to PCA vectors using the same process and compared with each library case using a Mahalanobis distance to evaluate similarity. Rank-order PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Any subsequent modifications were recorded that required input from a treatment planner to achieve V100>95%, V150<60%, V200<20%. To simulate operating-room planning constraints, seed activity was held constant, and the seed count could not increase. Results: The computational time required to register test-case contours and evaluate PCA similarity across the library was 10 s. Preliminary analysis of 2 subsets shows that 9 of 20 test cases did not require any seed modifications to obtain an acceptable plan. Five test cases required fewer than 10 seed modifications or a grid shift. Another 5 test cases required approximately 20 seed modifications. An acceptable plan was not achieved for 1 outlier, which was substantially larger than its best match. Modifications took between 5 s and 6 min. Conclusion: A knowledge-based treatment planning algorithm for prostate LDR brachytherapy is being cross validated using 100 prior cases. Preliminary results suggest that for this size library, acceptable plans can be achieved without planner input in about half of the cases, while varying amounts of planner input are needed in the remaining cases. Computational time and planning time are compatible with clinical practice.
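
    A minimal sketch of the matching step described here (PCA embedding plus Mahalanobis ranking), assuming each case's anatomy has already been registered and flattened into a fixed-length feature vector; the contouring, registration, and seed-placement steps are omitted, and the data are synthetic:

        import numpy as np
        from sklearn.decomposition import PCA

        def best_matched_case(library, test_case, n_components=10):
            """Index of the library case most similar to the test case.

            Cases are embedded with PCA; similarity is the Mahalanobis distance
            in PCA space, where the covariance is diagonal because PCA
            components are uncorrelated with variance = explained variance.
            """
            pca = PCA(n_components=n_components).fit(library)
            scale = np.sqrt(pca.explained_variance_)
            lib_z = pca.transform(library) / scale
            test_z = pca.transform(test_case[None, :]) / scale
            dists = np.linalg.norm(lib_z - test_z, axis=1)   # Mahalanobis distances
            return int(np.argmin(dists))

        rng = np.random.default_rng(4)
        library = rng.normal(size=(90, 300))    # 90 library cases, 300 features each
        test = library[17] + rng.normal(scale=0.05, size=300)
        print("best match:", best_matched_case(library, test))   # expect 17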

  3. A model of prediction and cross-validation of fat-free mass in men with motor complete spinal cord injury.

    Science.gov (United States)

    Gorgey, Ashraf S; Dolbow, David R; Gater, David R

    2012-07-01

    To establish and validate prediction equations by using body weight to predict legs, trunk, and whole-body fat-free mass (FFM) in men with chronic complete spinal cord injury (SCI). Cross-sectional design. Research setting in a large medical center. Individuals with SCI (N=63) divided into prediction (n=42) and cross-validation (n=21) groups. Not applicable. Whole-body FFM and regional FFM were determined by using dual-energy x-ray absorptiometry. Body weight was measured by using a wheelchair weighing scale after subtracting the weight of the chair. Body weight predicted legs FFM (legs FFM = 0.09 × body weight + 6.1; R² = 0.25, standard error of the estimate [SEE] = 3.1 kg, P < …), trunk FFM (trunk FFM = 0.21 × body weight + 8.6; R² = 0.56, SEE = 3.6 kg, P < …), and whole-body FFM (whole-body FFM = 0.288 × body weight + 26.3; R² = 0.53, SEE = 5.3 kg, P < …). Whole-body FFM(predicted) (FFM predicted from the derived equations) shared 86% of the variance in whole-body FFM(measured) (FFM measured using dual-energy x-ray absorptiometry scan) (R² = 0.86, SEE = 1.8 kg, P < …), … of the variance in trunk FFM(measured), and 66% of legs FFM(measured). The trunk FFM(predicted) shared 69% of the variance in trunk FFM(measured) (R² = 0.69, SEE = 2.7 kg, P < …), and the legs FFM(predicted) shared 67% of the variance in legs FFM(measured) (R² = 0.67, SEE = 2.8 kg, P < …). FFM did not differ between the prediction and validation groups. Body weight can be used to predict whole-body FFM and regional FFM. The predicted whole-body FFM improved the prediction of trunk FFM and legs FFM. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
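
    The reported equations can be applied directly; a small sketch using the coefficients quoted above (body weight in kg; the p-values are not recoverable from this record):

        def predict_ffm(body_weight_kg):
            """Regional and whole-body FFM (kg) from the equations quoted above."""
            legs = 0.09 * body_weight_kg + 6.1       # R^2 = 0.25, SEE = 3.1 kg
            trunk = 0.21 * body_weight_kg + 8.6      # R^2 = 0.56, SEE = 3.6 kg
            whole = 0.288 * body_weight_kg + 26.3    # R^2 = 0.53, SEE = 5.3 kg
            return {"legs": legs, "trunk": trunk, "whole_body": whole}

        print(predict_ffm(75.0))   # e.g. a 75 kg man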

  4. Evolution of Precipitation Structure During the November DYNAMO MJO Event: Cloud-Resolving Model Intercomparison and Cross Validation Using Radar Observations

    Science.gov (United States)

    Li, Xiaowen; Janiga, Matthew A.; Wang, Shuguang; Tao, Wei-Kuo; Rowe, Angela; Xu, Weixin; Liu, Chuntao; Matsui, Toshihisa; Zhang, Chidong

    2018-04-01

    The evolution of precipitation structures is simulated and compared with radar observations for the November Madden-Julian Oscillation (MJO) event during the DYNAmics of the MJO (DYNAMO) field campaign. Three ground-based, ship-borne, and spaceborne precipitation radars and three cloud-resolving models (CRMs) driven by observed large-scale forcing are used to study precipitation structures at different locations over the central equatorial Indian Ocean. Convective strength is represented by 0-dBZ echo-top heights, and convective organization by contiguous 17-dBZ areas. The multi-radar and multi-model framework allows for more stringent model validations. The emphasis is on testing the models' ability to simulate subtle differences observed at different radar sites when the MJO event passed through. The results show that CRMs forced by site-specific large-scale forcing can reproduce not only common features in cloud populations but also subtle variations observed by different radars. The comparisons also revealed common deficiencies in CRM simulations, where they underestimate radar echo-top heights for the strongest convection within large, organized precipitation features. Cross validations with multiple radars and models also enable quantitative comparisons in CRM sensitivity studies using different large-scale forcing, microphysical schemes and parameters, resolutions, and domain sizes. In terms of radar echo-top height temporal variations, many model sensitivity tests have better correlations than radar/model comparisons, indicating robustness in model performance on this aspect. It is further shown that well-validated model simulations could be used to constrain uncertainties in observed echo-top heights when the low-resolution surveillance scanning strategy is used.

  5. Higher risks when working unusual times? A cross-validation of the effects on safety, health, and work-life balance.

    Science.gov (United States)

    Greubel, Jana; Arlinghaus, Anna; Nachreiner, Friedhelm; Lombardi, David A

    2016-11-01

    Replication and cross-validation of results on health and safety risks of work at unusual times. Data from two independent surveys (European Working Conditions Surveys 2005 and 2010; EU 2005: n = 23,934 and EU 2010: n = 35,187) were used to examine the relative risks of working at unusual times (evenings, Saturdays, and Sundays) on work-life balance, work-related health complaints, and occupational accidents using logistic regression while controlling for potential confounders such as demographics, work load, and shift work. For the EU 2005 survey, evening work was significantly associated with an increased risk of poor work-life balance (OR 1.69) and work-related health complaints (OR 1.14), Saturday work with poor work-life balance (OR 1.49) and occupational accidents (OR 1.34), and Sunday work with poor work-life balance (OR 1.15) and work-related health complaints (OR 1.17). For EU 2010, evening work was associated with poor work-life balance (OR 1.51) and work-related health complaints (OR 1.12), Saturday work with poor work-life balance (OR 1.60) and occupational accidents (OR 1.19) but a decrease in risk for work-related health complaints (OR 0.86) and Sunday work with work-related health complaints (OR 1.13). Risk estimates in both samples yielded largely similar results with comparable ORs and overlapping confidence intervals. Work at unusual times constitutes a considerable risk to social participation and health and showed structurally consistent effects over time and across samples.

  6. A cross-validation trial of an Internet-based prevention program for alcohol and cannabis: Preliminary results from a cluster randomised controlled trial.

    Science.gov (United States)

    Champion, Katrina E; Newton, Nicola C; Stapinski, Lexine; Slade, Tim; Barrett, Emma L; Teesson, Maree

    2016-01-01

    Replication is an important step in evaluating evidence-based preventive interventions and is crucial for establishing the generalizability and wider impact of a program. Despite this, few replications have occurred in the prevention science field. This study aims to fill this gap by conducting a cross-validation trial of the Climate Schools: Alcohol and Cannabis course, an Internet-based prevention program, among a new cohort of Australian students. A cluster randomized controlled trial was conducted among 1103 students (mean age: 13.25 years) from 13 schools in Australia in 2012. Six schools received the Climate Schools course and 7 schools were randomized to a control group (health education as usual). All students completed a self-report survey at baseline and immediately post-intervention. Mixed-effects regressions were conducted for all outcome variables. Outcomes assessed included alcohol and cannabis use, knowledge and intentions to use these substances. Compared to the control group, immediately post-intervention the intervention group reported significantly greater alcohol (d = 0.67) and cannabis knowledge (d = 0.72), were less likely to have consumed any alcohol (even a sip or taste) in the past 6 months (odds ratio = 0.69) and were less likely to intend on using alcohol in the future (odds ratio = 0.62). However, there were no effects for binge drinking, cannabis use or intentions to use cannabis. These preliminary results provide some support for the Internet-based Climate Schools: Alcohol and Cannabis course as a feasible way of delivering alcohol and cannabis prevention. Intervention effects for alcohol and cannabis knowledge were consistent with results from the original trial; however, analyses of longer-term follow-up data are needed to provide a clearer indication of the efficacy of the intervention, particularly in relation to behavioral changes. © The Royal Australian and New Zealand College of Psychiatrists 2015.

  7. Onset of rigor mortis is earlier in red muscle than in white muscle.

    Science.gov (United States)

    Kobayashi, M; Takatori, T; Nakajima, M; Sakurada, K; Hatanaka, K; Ikegaya, H; Matsuda, Y; Iwase, H

    2000-01-01

    Rigor mortis is thought to be related to falling ATP levels in muscles postmortem. We measured rigor mortis as tension determined isometrically in three rat leg muscles in liquid paraffin kept at 37 degrees C or 25 degrees C: two red muscles, red gastrocnemius (RG) and soleus (SO), and one white muscle, white gastrocnemius (WG). Onset, half and full rigor mortis occurred earlier in RG and SO than in WG both at 37 degrees C and at 25 degrees C even though RG and WG were portions of the same muscle. This suggests that rigor mortis directly reflects the postmortem intramuscular ATP level, which decreases more rapidly in red muscle than in white muscle after death. Rigor mortis was more retarded at 25 degrees C than at 37 degrees C in each type of muscle.

  8. High and low rigor temperature effects on sheep meat tenderness and ageing.

    Science.gov (United States)

    Devine, Carrick E; Payne, Steven R; Peachey, Bridget M; Lowe, Timothy E; Ingram, John R; Cook, Christian J

    2002-02-01

    Immediately after electrical stimulation, the paired m. longissimus thoracis et lumborum (LT) of 40 sheep were boned out and wrapped tightly with a polyethylene cling film. One of the paired LTs was chilled in 15°C air to reach a rigor mortis (rigor) temperature of 18°C and the other side was placed in a water bath at 35°C and achieved rigor at this temperature. Wrapping reduced rigor shortening and mimicked meat left on the carcass. After rigor, the meat was aged at 15°C for 0, 8, 26 and 72 h and then frozen. The frozen meat was cooked to 75°C in an 85°C water bath and shear force values obtained from a 1×1 cm cross-section. The shear force values of meat for 18 and 35°C rigor were similar at zero ageing, but as ageing progressed, the 18°C rigor meat aged faster and became more tender than meat that went into rigor at 35°C (P<…). The values for 18 and 35°C rigor at each ageing time were significantly different (P<…), and those for 35°C rigor were still significantly greater. Thus the toughness of 35°C meat was not a consequence of muscle shortening and appears to be due to both a faster rate of tenderisation and the meat tenderising to a greater extent at the lower temperature. The cook loss at 35°C rigor (30.5%) was greater than that at 18°C rigor (28.4%) (P<0.01) and the colour Hunter L values were higher at 35°C (P<0.01) compared with 18°C, but there were no significant differences in a or b values.

  9. Differential rigor development in red and white muscle revealed by simultaneous measurement of tension and stiffness.

    Science.gov (United States)

    Kobayashi, Masahiko; Takemori, Shigeru; Yamaguchi, Maki

    2004-02-10

    Based on the molecular mechanism of rigor mortis, we have proposed that stiffness (elastic modulus evaluated with tension response against minute length perturbations) can be a suitable index of post-mortem rigidity in skeletal muscle. To trace the developmental process of rigor mortis, we measured stiffness and tension in both red and white rat skeletal muscle kept in liquid paraffin at 37 and 25 degrees C. White muscle (in which type IIB fibres predominate) developed stiffness and tension significantly more slowly than red muscle, except for soleus red muscle at 25 degrees C, which showed disproportionately slow rigor development. In each of the examined muscles, stiffness and tension developed more slowly at 25 degrees C than at 37 degrees C. In each specimen, tension always reached its maximum level earlier than stiffness, and then decreased more rapidly and markedly than stiffness. These phenomena may account for the sequential progress of rigor mortis in human cadavers.

  10. Studies on the estimation of the postmortem interval. 3. Rigor mortis (author's transl).

    Science.gov (United States)

    Suzutani, T; Ishibashi, H; Takatori, T

    1978-11-01

    The authors have devised a method for classifying rigor mortis into 10 types based on its appearance and strength in various parts of a cadaver. By applying the method to the findings of 436 cadavers which were subjected to medico-legal autopsies in our laboratory during the last 10 years, it has been demonstrated that the classifying method is effective for analyzing the phenomenon of onset, persistence and disappearance of rigor mortis statistically. The investigation of the relationship between each type of rigor mortis and the postmortem interval has demonstrated that rigor mortis may be utilized as a basis for estimating the postmortem interval but the values have greater deviation than those described in current textbooks.

  11. 75 FR 29732 - Career and Technical Education Program-Promoting Rigorous Career and Technical Education Programs...

    Science.gov (United States)

    2010-05-27

    ... rigorous knowledge and skills in English-language arts and mathematics that employers and colleges expect... specialists and to access the student outcome data needed to meet annual evaluation and reporting requirements...

  12. Rigorous derivation from Landau-de Gennes theory to Ericksen-Leslie theory

    OpenAIRE

    Wang, Wei; Zhang, Pingwen; Zhang, Zhifei

    2013-01-01

    Starting from the Beris-Edwards system for the liquid crystal, we present a rigorous derivation of the Ericksen-Leslie system with general Ericksen stress and Leslie stress by using the Hilbert expansion method.

  13. MASTERS OF ANALYTICAL TRADECRAFT: CERTIFYING THE STANDARDS AND ANALYTIC RIGOR OF INTELLIGENCE PRODUCTS

    Science.gov (United States)

    2016-04-01

    ...establishing unit-level certified Masters of Analytic Tradecraft (MAT) analysts to be trained and entrusted to evaluate and rate the standards and ...cues) ideally should meet or exceed effective rigor (based on analytical process). To accomplish this, decision makers should not be left to their

  14. Rigor or Reliability and Validity in Qualitative Research: Perspectives, Strategies, Reconceptualization, and Recommendations.

    Science.gov (United States)

    Cypress, Brigitte S

    Issues are still raised, even now in the 21st century, by the persistent concern with achieving rigor in qualitative research. There is also a continuing debate about the analogous terms reliability and validity in naturalistic inquiries as opposed to quantitative investigations. This article presents the concept of rigor in qualitative research using a phenomenological study as an exemplar to further illustrate the process. Elaborating on epistemological and theoretical conceptualizations by Lincoln and Guba, strategies congruent with the qualitative perspective for ensuring validity to establish the credibility of the study are described. A synthesis of the historical development of validity criteria evident in the literature over the years is explored. Recommendations are made for use of the term rigor instead of trustworthiness and for the reconceptualization and renewed use of the concepts of reliability and validity in qualitative research; that strategies for ensuring rigor must be built into the qualitative research process rather than evaluated only after the inquiry; and that qualitative researchers and students alike must be proactive and take responsibility for ensuring the rigor of a research study. The insights garnered here will move novice researchers and doctoral students to a better conceptual grasp of the complexity of reliability and validity and its ramifications for qualitative inquiry.

  15. Single-case synthesis tools I: Comparing tools to evaluate SCD quality and rigor.

    Science.gov (United States)

    Zimmerman, Kathleen N; Ledford, Jennifer R; Severini, Katherine E; Pustejovsky, James E; Barton, Erin E; Lloyd, Blair P

    2018-03-03

    Tools for evaluating the quality and rigor of single case research designs (SCD) are often used when conducting SCD syntheses. Preferred components include evaluations of design features related to the internal validity of SCD to obtain quality and/or rigor ratings. Three tools for evaluating the quality and rigor of SCD (Council for Exceptional Children, What Works Clearinghouse, and Single-Case Analysis and Design Framework) were compared to determine if conclusions regarding the effectiveness of antecedent sensory-based interventions for young children changed based on choice of quality evaluation tool. Evaluation of SCD quality differed across tools, suggesting selection of quality evaluation tools impacts evaluation findings. Suggestions for selecting an appropriate quality and rigor assessment tool are provided and across-tool conclusions are drawn regarding the quality and rigor of studies. Finally, authors provide guidance for using quality evaluations in conjunction with outcome analyses when conducting syntheses of interventions evaluated in the context of SCD. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. The effect of temperature on the mechanical aspects of rigor mortis in a liquid paraffin model.

    Science.gov (United States)

    Ozawa, Masayoshi; Iwadate, Kimiharu; Matsumoto, Sari; Asakura, Kumiko; Ochiai, Eriko; Maebashi, Kyoko

    2013-11-01

    Rigor mortis is an important phenomenon to estimate the postmortem interval in forensic medicine. Rigor mortis is affected by temperature. We measured stiffness of rat muscles using a liquid paraffin model to monitor the mechanical aspects of rigor mortis at five temperatures (37, 25, 10, 5 and 0°C). At 37, 25 and 10°C, the progression of stiffness was slower in cooler conditions. At 5 and 0°C, the muscle stiffness increased immediately after the muscles were soaked in cooled liquid paraffin and then muscles gradually became rigid without going through a relaxed state. This phenomenon suggests that it is important to be careful when estimating the postmortem interval in cold seasons. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Quality properties of pre- and post-rigor beef muscle after interventions with high frequency ultrasound.

    Science.gov (United States)

    Sikes, Anita L; Mawson, Raymond; Stark, Janet; Warner, Robyn

    2014-11-01

    The delivery of a consistent quality product to the consumer is vitally important for the food industry. The aim of this study was to investigate the effect of high frequency ultrasound applied to pre- and post-rigor beef muscle on metabolism and subsequent quality. High frequency ultrasound (600 kHz at 48 kPa and 65 kPa acoustic pressure) applied to post-rigor beef striploin steaks resulted in no significant effect on the texture (peak force value) of cooked steaks as measured by a Tenderometer. There was no added benefit of ultrasound treatment above that of the normal ageing process after ageing of the steaks for 7 days at 4°C. Ultrasound treatment of post-rigor beef steaks resulted in a darkening of fresh steaks, but after ageing for 7 days at 4°C, the ultrasound-treated steaks were similar in colour to the aged, untreated steaks. High frequency ultrasound (2 MHz at 48 kPa acoustic pressure) applied to pre-rigor beef neck muscle had no effect on the pH, but the calculated exhaustion factor suggested that there was some effect on metabolism and actin-myosin interaction. However, the resultant texture of cooked, ultrasound-treated muscle was lower in tenderness compared to the control sample. After ageing for 3 weeks at 0°C, the ultrasound-treated samples had the same peak force value as the control. High frequency ultrasound had no significant effect on the colour parameters of pre-rigor beef neck muscle. This proof-of-concept study showed no effect of ultrasound on quality but did indicate that the application of high frequency ultrasound to pre-rigor beef muscle shows potential for modifying ATP turnover, and further investigation is warranted. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  18. Disciplining Bioethics: Towards a Standard of Methodological Rigor in Bioethics Research

    Science.gov (United States)

    Adler, Daniel; Shaul, Randi Zlotnik

    2012-01-01

    Contemporary bioethics research is often described as multi- or interdisciplinary. Disciplines are characterized, in part, by their methods. Thus, when bioethics research draws on a variety of methods, it crosses disciplinary boundaries. Yet each discipline has its own standard of rigor—so when multiple disciplinary perspectives are considered, what constitutes rigor? This question has received inadequate attention, as there is considerable disagreement regarding the disciplinary status of bioethics. This disagreement has presented five challenges to bioethics research. Addressing them requires consideration of the main types of cross-disciplinary research, and consideration of proposals aiming to ensure rigor in bioethics research. PMID:22686634

  19. Mathematical framework for fast and rigorous track fit for the ZEUS detector

    Energy Technology Data Exchange (ETDEWEB)

    Spiridonov, Alexander

    2008-12-15

    In this note we present a mathematical framework for a rigorous approach to a common track fit for trackers located in the inner region of the ZEUS detector. The approach makes use of the Kalman filter and offers a rigorous treatment of magnetic field inhomogeneity, multiple scattering and energy loss. We describe mathematical details of the implementation of the Kalman filter technique with a reduced amount of computation for a cylindrical drift chamber, barrel and forward silicon strip detectors and a forward straw drift chamber. Options with homogeneous and inhomogeneous field are discussed. The fitting of tracks in one ZEUS event takes about 20 ms on a standard PC. (orig.)
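
    The note's track fit is specific to the ZEUS geometry and field map, but the Kalman filter cycle it builds on is generic. Below is a minimal linear predict/update sketch in numpy; the state and noise models are placeholders for illustration, not the ZEUS parametrization:

        import numpy as np

        def kalman_step(x, P, z, F, Q, H, R):
            """One Kalman predict/update cycle for a linear-Gaussian model."""
            # Predict: propagate state and covariance (process noise Q stands in
            # for e.g. multiple scattering between detector layers).
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # Update: fold in the measurement z with noise covariance R.
            S = H @ P_pred @ H.T + R                     # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        # Toy 1D straight-line track: state (position, slope), measure position only.
        F = np.array([[1.0, 1.0], [0.0, 1.0]])           # drift by one layer spacing
        Q = 1e-4 * np.eye(2)                             # scattering noise (placeholder)
        H = np.array([[1.0, 0.0]])
        R = np.array([[0.01]])
        x, P = np.zeros(2), np.eye(2)
        for z in [0.11, 0.19, 0.32, 0.41]:               # made-up layer hits
            x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
        print("fitted (position, slope):", np.round(x, 3))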

  20. Electrocardiogram artifact caused by rigors mimicking narrow complex tachycardia: a case report.

    Science.gov (United States)

    Matthias, Anne Thushara; Indrakumar, Jegarajah

    2014-02-04

    The electrocardiogram (ECG) is useful in the diagnosis of cardiac and non-cardiac conditions. Rigors due to shivering can cause electrocardiogram artifacts mimicking various cardiac rhythm abnormalities. We describe an 80-year-old Sri Lankan man with an abnormal electrocardiogram mimicking narrow complex tachycardia during the immediate post-operative period. Electrocardiogram changes caused by muscle tremor during rigors could mimic a narrow complex tachycardia. Identification of muscle tremor as a cause of electrocardiogram artifact can avoid unnecessary pharmacological and non-pharmacological intervention to prevent arrhythmias.

  1. Increased scientific rigor will improve reliability of research and effectiveness of management

    Science.gov (United States)

    Sells, Sarah N.; Bassing, Sarah B.; Barker, Kristin J.; Forshee, Shannon C.; Keever, Allison; Goerz, James W.; Mitchell, Michael S.

    2018-01-01

    Rigorous science that produces reliable knowledge is critical to wildlife management because it increases accurate understanding of the natural world and informs management decisions effectively. Application of a rigorous scientific method based on hypothesis testing minimizes unreliable knowledge produced by research. To evaluate the prevalence of scientific rigor in wildlife research, we examined 24 issues of the Journal of Wildlife Management from August 2013 through July 2016. We found 43.9% of studies did not state or imply a priori hypotheses, which are necessary to produce reliable knowledge. We posit that this is due, at least in part, to a lack of common understanding of what rigorous science entails, how it produces more reliable knowledge than other forms of interpreting observations, and how research should be designed to maximize inferential strength and usefulness of application. Current primary literature does not provide succinct explanations of the logic behind a rigorous scientific method or readily applicable guidance for employing it, particularly in wildlife biology; we therefore synthesized an overview of the history, philosophy, and logic that define scientific rigor for biological studies. A rigorous scientific method includes 1) generating a research question from theory and prior observations, 2) developing hypotheses (i.e., plausible biological answers to the question), 3) formulating predictions (i.e., facts that must be true if the hypothesis is true), 4) designing and implementing research to collect data potentially consistent with predictions, 5) evaluating whether predictions are consistent with collected data, and 6) drawing inferences based on the evaluation. Explicitly testing a priori hypotheses reduces overall uncertainty by reducing the number of plausible biological explanations to only those that are logically well supported. Such research also draws inferences that are robust to idiosyncratic observations and

  2. Experimental evaluation of rigor mortis. VIII. Estimation of time since death by repeated measurements of the intensity of rigor mortis on rats.

    Science.gov (United States)

    Krompecher, T

    1994-10-21

    The development of the intensity of rigor mortis was monitored in nine groups of rats. The measurements were initiated after 2, 4, 5, 6, 8, 12, 15, 24, and 48 h post mortem (p.m.) and lasted 5-9 h, which ideally should correspond to the usual procedure after the discovery of a corpse. The experiments were carried out at an ambient temperature of 24 degrees C. Measurements initiated early after death resulted in curves with a rising portion, a plateau, and a descending slope. Delaying the initial measurement translated into shorter rising portions, and curves initiated 8 h p.m. or later comprise a plateau and/or a downward slope only. Three different phases were observed, suggesting simple rules that can help estimate the time since death: (1) if an increase in intensity was found, the initial measurements were conducted not later than 5 h p.m.; (2) if only a decrease in intensity was observed, the initial measurements were conducted not earlier than 7 h p.m.; and (3) at 24 h p.m., the resolution is complete, and no further changes in intensity should occur. Our results clearly demonstrate that repeated measurements of the intensity of rigor mortis allow a more accurate estimation of the time since death of the experimental animals than the single measurement method used earlier. A critical review of the literature on the estimation of time since death on the basis of objective measurements of the intensity of rigor mortis is also presented.
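
    The three phases translate into a simple decision rule; a rough, purely illustrative encoding of the thresholds quoted above (rat data at 24 degrees C ambient temperature):

        def estimate_pmi_phase(intensities):
            """Classify a series of repeated rigor-intensity measurements.

            intensities: sequence of rigor intensities in measurement order.
            Mapping of a flat series to completed resolution follows rule (3).
            """
            rising = any(b > a for a, b in zip(intensities, intensities[1:]))
            falling = any(b < a for a, b in zip(intensities, intensities[1:]))
            if rising:
                return "first measurement taken no later than ~5 h post mortem"
            if falling:
                return "first measurement taken no earlier than ~7 h post mortem"
            return "no change: resolution complete (~24 h post mortem or later)"

        print(estimate_pmi_phase([2.0, 3.1, 3.5, 3.5]))   # increasing series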

  3. Effects of rigor status during high-pressure processing on the physical qualities of farm-raised abalone (Haliotis rufescens).

    Science.gov (United States)

    Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I

    2015-01-01

    High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone meat was significantly (P<…) more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®

  4. Rigorous lower bound on the dynamic critical exponent of some multilevel Swendsen-Wang algorithms

    International Nuclear Information System (INIS)

    Li, X.; Sokal, A.D.

    1991-01-01

    We prove the rigorous lower bound z_exp ≥ α/ν for the dynamic critical exponent of a broad class of multilevel (or "multigrid") variants of the Swendsen-Wang algorithm. This proves that such algorithms do suffer from critical slowing down. We conjecture that such algorithms in fact lie in the same dynamic universality class as the standard Swendsen-Wang algorithm.

  5. Rigorous modelling of light's intensity angular-profile in Abbe refractometers with absorbing homogeneous fluids

    International Nuclear Information System (INIS)

    García-Valenzuela, A; Contreras-Tello, H; Márquez-Islas, R; Sánchez-Pérez, C

    2013-01-01

    We derive an optical model for the light intensity distribution around the critical angle in a standard Abbe refractometer when used on absorbing homogeneous fluids. The model is developed using rigorous electromagnetic optics. The obtained formula is very simple and can be used suitably in the analysis and design of optical sensors relying on Abbe-type refractometry.

  6. Rigorous approximation of stationary measures and convergence to equilibrium for iterated function systems

    International Nuclear Information System (INIS)

    Galatolo, Stefano; Monge, Maurizio; Nisoli, Isaia

    2016-01-01

    We study the problem of the rigorous computation of the stationary measure and of the rate of convergence to equilibrium of an iterated function system described by a stochastic mixture of two or more dynamical systems that are either all uniformly expanding on the interval or all contracting. In the expanding case, the associated transfer operators satisfy a Lasota–Yorke inequality, and we show how to compute rigorous approximations of the stationary measure in the L^1 norm together with an estimate for the rate of convergence. The rigorous computation requires a computer-aided proof of the contraction of the transfer operators for the maps, and we show that this property propagates to the transfer operators of the IFS. In the contracting case we perform a rigorous approximation of the stationary measure in the Wasserstein–Kantorovich distance, with a rate of convergence, using the same functional analytic approach. We show that a finite computation can produce a realistic computation of all contraction rates for the whole parameter space. We conclude with a description of the implementation and numerical experiments. (paper)
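
    To make the object being approximated concrete, here is a deliberately non-rigorous sketch of the averaged transfer operator of such an IFS, discretized by Ulam's method; the paper's contribution is precisely to replace this kind of heuristic estimate with certified L^1 (or Wasserstein) error bounds via the Lasota–Yorke inequality. The two expanding maps and the mixing probability below are illustrative choices, not taken from the paper.

      import numpy as np

      # IFS on [0,1]: at each step apply T1 with prob. p, T2 with prob. 1-p.
      T1 = lambda x: (2.0 * x) % 1.0
      T2 = lambda x: (3.0 * x) % 1.0
      p = 0.4
      n = 200                 # number of partition cells
      samples = 400           # Monte Carlo points per cell

      rng = np.random.default_rng(0)
      P = np.zeros((n, n))
      for i in range(n):
          x = (i + rng.random(samples)) / n            # points in cell i
          for T, w in ((T1, p), (T2, 1.0 - p)):
              j = np.minimum((T(x) * n).astype(int), n - 1)
              np.add.at(P[i], j, w / samples)          # mass sent cell i -> j

      # Stationary density = left fixed vector of P (power iteration).
      v = np.full(n, 1.0 / n)
      for _ in range(2000):
          v = v @ P
      v /= v.sum()
      print("max density:", n * v.max(), "min density:", n * v.min())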

  7. Double phosphorylation of the myosin regulatory light chain during rigor mortis of bovine Longissimus muscle.

    Science.gov (United States)

    Muroya, Susumu; Ohnishi-Kameyama, Mayumi; Oe, Mika; Nakajima, Ikuyo; Shibata, Masahiro; Chikuni, Koichi

    2007-05-16

    To investigate changes in myosin light chains (MyLCs) during postmortem aging of the bovine longissimus muscle, we performed two-dimensional gel electrophoresis followed by identification with matrix-assisted laser desorption ionization time-of-flight mass spectrometry. The results of fluorescent differential gel electrophoresis showed that two spots of the myosin regulatory light chain (MyLC2) at pI values of 4.6 and 4.7 shifted toward those at pI values of 4.5 and 4.6, respectively, by 24 h postmortem when rigor mortis was completed. Meanwhile, the MyLC1 and MyLC3 spots did not change during the 14 days postmortem. Phosphoprotein-specific staining of the gels demonstrated that the MyLC2 proteins at pI values of 4.5 and 4.6 were phosphorylated. Furthermore, possible N-terminal region peptides containing one and two phosphoserine residues were detected in each mass spectrum of the MyLC2 spots at pI values of 4.5 and 4.6, respectively. These results demonstrated that MyLC2 became doubly phosphorylated during rigor formation of the bovine longissimus, suggesting involvement of the MyLC2 phosphorylation in the progress of beef rigor mortis. Bovine; myosin regulatory light chain (RLC, MyLC2); phosphorylation; rigor mortis; skeletal muscle.

  8. Some comments on rigorous quantum field path integrals in the analytical regularization scheme

    Energy Technology Data Exchange (ETDEWEB)

    Botelho, Luiz C.L. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Dept. de Matematica Aplicada]. E-mail: botelho.luiz@superig.com.br

    2008-07-01

    Through the systematic use of the Minlos theorem on the support of cylindrical measures on R^∞, we produce several mathematically rigorous path integrals in interacting euclidean quantum fields with Gaussian free measures defined by generalized powers of the Laplacian operator. (author)

  9. Some comments on rigorous quantum field path integrals in the analytical regularization scheme

    International Nuclear Information System (INIS)

    Botelho, Luiz C.L.

    2008-01-01

    Through the systematic use of the Minlos theorem on the support of cylindrical measures on R^∞, we produce several mathematically rigorous path integrals in interacting euclidean quantum fields with Gaussian free measures defined by generalized powers of the Laplacian operator. (author)

  10. Community historians and the dilemma of rigor vs relevance : A comment on Danziger and van Rappard

    NARCIS (Netherlands)

    Dehue, Trudy

    1998-01-01

    Since the transition from finalism to contextualism, the history of science seems to be caught up in a basic dilemma. Many historians fear that with the new contextualist standards of rigorous historiography, historical research can no longer be relevant to working scientists themselves. The present

  11. Beyond the RCT: Integrating Rigor and Relevance to Evaluate the Outcomes of Domestic Violence Programs

    Science.gov (United States)

    Goodman, Lisa A.; Epstein, Deborah; Sullivan, Cris M.

    2018-01-01

    Programs for domestic violence (DV) victims and their families have grown exponentially over the last four decades. The evidence demonstrating the extent of their effectiveness, however, often has been criticized as stemming from studies lacking scientific rigor. A core reason for this critique is the widespread belief that credible evidence can…

  12. A plea for rigorous conceptual analysis as central method in transnational law design

    NARCIS (Netherlands)

    Rijgersberg, R.; van der Kaaij, H.

    2013-01-01

    Although shared problems are generally easily identified in transnational law design, it is considerably more difficult to design frameworks that transcend the peculiarities of local law in a univocal fashion. The following exposition is a plea for giving more prominence to rigorous conceptual

  13. College Readiness in California: A Look at Rigorous High School Course-Taking

    Science.gov (United States)

    Gao, Niu

    2016-01-01

    Recognizing the educational and economic benefits of a college degree, education policymakers at the federal, state, and local levels have made college preparation a priority. There are many ways to measure college readiness, but one key component is rigorous high school coursework. California has not yet adopted a statewide college readiness…

  14. Evaluation of physical dimension changes as nondestructive measurements for monitoring rigor mortis development in broiler muscles.

    Science.gov (United States)

    Cavitt, L C; Sams, A R

    2003-07-01

    Studies were conducted to develop a non-destructive method for monitoring the rate of rigor mortis development in poultry and to evaluate the effectiveness of electrical stimulation (ES). In the first study, 36 male broilers in each of two trials were processed at 7 wk of age. After being bled, half of the birds received electrical stimulation (400 to 450 V, 400 to 450 mA, for seven pulses of 2 s on and 1 s off), and the other half were designated as controls. At 0.25 and 1.5 h postmortem (PM), carcasses were evaluated for the angles of the shoulder, elbow, and wing tip and the distance between the elbows. Breast fillets were harvested at 1.5 h PM (after chilling) from all carcasses. Fillet samples were excised and frozen for later measurement of pH and R-value, and the remainder of each fillet was held on ice until 24 h postmortem. Shear value and pH means were significantly lower, but R-value means were higher (P < 0.05) in stimulated carcasses, indicating acceleration of rigor mortis by ES. The physical dimensions of the shoulder and elbow changed (P < 0.05) with rigor mortis development and with ES. These results indicate that physical measurements of the wings may be useful as a nondestructive indicator of rigor development and for monitoring the effectiveness of ES. In the second study, 60 male broilers in each of two trials were processed at 7 wk of age. At 0.25, 1.5, 3.0, and 6.0 h PM, carcasses were evaluated for the distance between the elbows. At each time point, breast fillets were harvested from each carcass. Fillet samples were excised and frozen for later measurement of pH and sarcomere length, whereas the remainder of each fillet was held on ice until 24 h PM. Shear value and pH means decreased (P < 0.05) with rigor mortis development. Elbow distance decreased (P < 0.05) with rigor development and was correlated (P < 0.05) with rigor mortis development in broiler carcasses.

  15. [A new formula for the measurement of rigor mortis: the determination of the FRR-index (author's transl)].

    Science.gov (United States)

    Forster, B; Ropohl, D; Raule, P

    1977-07-05

    The manual examination of rigor mortis as currently used, with its often subjective evaluation, frequently produces highly incorrect deductions. It is therefore desirable that such inaccuracies should be replaced by the objective measuring of rigor mortis at the extremities. To that purpose, a method is described which can also be applied in on-the-spot investigations, and a new formula for the determination of rigor mortis indices (FRR) is introduced.

  16. Measurements of the degree of development of rigor mortis as an indicator of stress in slaughtered pigs.

    Science.gov (United States)

    Warriss, P D; Brown, S N; Knowles, T G

    2003-12-13

    The degree of development of rigor mortis in the carcases of slaughter pigs was assessed subjectively on a three-point scale 35 minutes after they were exsanguinated, and related to the levels of cortisol, lactate and creatine kinase in blood collected at exsanguination. Earlier rigor development was associated with higher concentrations of these stress indicators in the blood. This relationship suggests that the mean rigor score, and the frequency distribution of carcases that had or had not entered rigor, could be used as an index of the degree of stress to which the pigs had been subjected.

  17. Use of the Rigor Mortis Process as a Tool for Better Understanding of Skeletal Muscle Physiology: Effect of the Ante-Mortem Stress on the Progression of Rigor Mortis in Brook Charr (Salvelinus fontinalis).

    Science.gov (United States)

    Diouf, Boucar; Rioux, Pierre

    1999-01-01

    Presents the rigor mortis process in brook charr (Salvelinus fontinalis) as a tool for better understanding skeletal muscle metabolism. Describes an activity that demonstrates how rigor mortis is related to the post-mortem decrease of muscular glycogen and ATP, how glycogen degradation produces lactic acid that lowers muscle pH, and how…

  18. A rigorous pole representation of multilevel cross sections and its practical applications

    International Nuclear Information System (INIS)

    Hwang, R.N.

    1987-01-01

    In this article a rigorous method for representing the multilevel cross sections and its practical applications are described. It is a generalization of the rationale suggested by de Saussure and Perez for the s-wave resonances. A computer code WHOPPER has been developed to convert the Reich-Moore parameters into the pole and residue parameters in momentum space. Sample calculations have been carried out to illustrate that the proposed method preserves the rigor of the Reich-Moore cross sections exactly. An analytical method has been developed to evaluate the pertinent Doppler-broadened line shape functions. A discussion is presented on how to minimize the number of pole parameters so that the existing reactor codes can be best utilized

  19. Rigorous simulations of a helical core fiber by the use of transformation optics formalism.

    Science.gov (United States)

    Napiorkowski, Maciej; Urbanczyk, Waclaw

    2014-09-22

    We report for the first time on rigorous numerical simulations of a helical-core fiber using a full vectorial method based on the transformation optics formalism. We modeled the dependence of the circular birefringence of the fundamental mode on the helix pitch and analyzed the birefringence increase caused by the mode displacement induced by the core twist. Furthermore, we analyzed the complex field evolution versus the helix pitch in the first-order modes, including polarization and intensity distribution. Finally, we show that the use of the rigorous vectorial method allows better prediction of the confinement loss of the guided modes than approximate methods based on equivalent in-plane bending models.

  20. Estimation of the time since death--reconsidering the re-establishment of rigor mortis.

    Science.gov (United States)

    Anders, Sven; Kunz, Michaela; Gehl, Axel; Sehner, Susanne; Raupach, Tobias; Beck-Bornholdt, Hans-Peter

    2013-01-01

    In forensic medicine, there is an undefined data background for the phenomenon of re-establishment of rigor mortis after mechanical loosening, a method used in establishing time since death in forensic casework that is thought to occur up to 8 h post-mortem. Nevertheless, the method is widely described in textbooks on forensic medicine. We examined 314 joints (elbow and knee) of 79 deceased at defined time points up to 21 h post-mortem (hpm). Data were analysed using a random intercept model. Here, we show that re-establishment occurred in 38.5% of joints at 7.5 to 19 hpm. Therefore, the maximum time span for the re-establishment of rigor mortis appears to be 2.5-fold longer than thought so far. These findings have major impact on the estimation of time since death in forensic casework.

  1. Quality of nuchal translucency measurements correlates with broader aspects of program rigor and culture of excellence.

    Science.gov (United States)

    Evans, Mark I; Krantz, David A; Hallahan, Terrence; Sherwin, John; Britt, David W

    2013-01-01

    To determine whether nuchal translucency (NT) quality correlates with the extent to which clinics vary in rigor and quality control, we correlated NT performance quality (bias and precision) for 246,000 patients with two alternative measures of clinic culture: the % of cases for whom nasal bone (NB) measurements were performed and the % of requisitions correctly filled for race-ethnicity and weight. When requisition errors occurred in more than 5% of cases (33% of clinics), the curve lowered to 0.93 MoM; when NB measurement rates exceeded 90%, the median MoM was 0.99. Program-wide quality exists independent of individual variation in NT quality, and two divergent indices of program rigor are associated with NT quality. Quality control must be program wide, and to effect continued improvement in the quality of NT results across time, the cultures of clinics must become a target for intervention. Copyright © 2013 S. Karger AG, Basel.

  2. A Framework for Rigorously Identifying Research Gaps in Qualitative Literature Reviews

    DEFF Research Database (Denmark)

    Müller-Bloch, Christoph; Kranz, Johann

    2015-01-01

    Identifying research gaps is a fundamental goal of literature reviewing. While it is widely acknowledged that literature reviews should identify research gaps, there are no methodological guidelines for how to identify research gaps in qualitative literature reviews ensuring rigor and replicability. Our study addresses this gap and proposes a framework that should help scholars in this endeavor without stifling creativity. To develop the framework we thoroughly analyze the state-of-the-art procedure of identifying research gaps in 40 recent literature reviews using a grounded theory approach. Based on the data, we subsequently derive a framework for identifying research gaps in qualitative literature reviews and demonstrate its application with an example. Our results provide a modus operandi for identifying research gaps, thus enabling scholars to conduct literature reviews more rigorously.

  3. Derivation of basic equations for rigorous dynamic simulation of cryogenic distillation column for hydrogen isotope separation

    International Nuclear Information System (INIS)

    Kinoshita, Masahiro; Naruse, Yuji

    1981-08-01

    The basic equations are derived for rigorous dynamic simulation of cryogenic distillation columns for hydrogen isotope separation. The model accounts for such factors as differences in latent heat of vaporization among the six isotopic species of molecular hydrogen, decay heat of tritium, heat transfer through the column wall and nonideality of the solutions. Provision is also made for simulation of columns with multiple feeds and multiple sidestreams. (author)
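
    For orientation, the sketch below shows the skeleton that such tray-by-tray dynamic models share: a component balance per tray driven by liquid and vapour flows plus an equilibrium relation. It is a generic binary column section with constant relative volatility and made-up parameters; the model derived in the report additionally tracks all six isotopic species, tritium decay heat, wall heat transfer and solution non-ideality.

      import numpy as np

      n, alpha = 10, 2.5          # trays, relative volatility
      M, L, V = 1.0, 3.0, 3.0     # tray holdup [kmol], liquid/vapour flows [kmol/h]
      x_in, y_in = 0.9, 0.1       # compositions entering at top (liquid) and bottom (vapour)
      x = np.full(n, 0.5)         # initial liquid mole fractions (light component)

      def y_eq(x):                # vapour in equilibrium with liquid
          return alpha * x / (1.0 + (alpha - 1.0) * x)

      def dxdt(x):
          y = y_eq(x)
          x_above = np.concatenate(([x_in], x[:-1]))   # liquid flowing down
          y_below = np.concatenate((y[1:], [y_in]))    # vapour flowing up
          return (L * x_above + V * y_below - L * x - V * y) / M

      dt = 1e-3
      for _ in range(20000):      # crude explicit Euler time stepping
          x += dt * dxdt(x)
      print("steady liquid profile, top to bottom:", np.round(x, 3))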

  4. Rigor mortis development in turkey breast muscle and the effect of electrical stunning.

    Science.gov (United States)

    Alvarado, C Z; Sams, A R

    2000-11-01

    Rigor mortis development in turkey breast muscle and the effect of electrical stunning on this process are not well characterized. Some electrical stunning procedures have been known to inhibit postmortem (PM) biochemical reactions, thereby delaying the onset of rigor mortis in broilers. Therefore, this study was designed to characterize rigor mortis development in stunned and unstunned turkeys. A total of 154 turkey toms in two trials were conventionally processed at 20 to 22 wk of age. Turkeys were either stunned with a pulsed direct current (500 Hz, 50% duty cycle) at 35 mA (40 V) in a saline bath for 12 seconds or left unstunned as controls. At 15 min and 1, 2, 4, 8, 12, and 24 h PM, pectoralis samples were collected to determine pH, R-value, L* value, sarcomere length, and shear value. In Trial 1, the samples obtained for pH, R-value, and sarcomere length were divided into surface and interior samples. There were no significant differences between the surface and interior samples among any parameters measured. Muscle pH significantly decreased over time in stunned and unstunned birds through 2 h PM. The R-values increased to 8 h PM in unstunned birds and 24 h PM in stunned birds. The L* values increased over time, with no significant differences after 1 h PM for the controls and 2 h PM for the stunned birds. Sarcomere length increased through 2 h PM in the controls and 12 h PM in the stunned fillets. Cooked meat shear values decreased through the 1 h PM deboning time in the control fillets and 2 h PM in the stunned fillets. These results suggest that stunning delayed the development of rigor mortis through 2 h PM, but had no significant effect on the measured parameters at later time points, and that deboning turkey breasts at 2 h PM or later will not significantly impair meat tenderness.

  5. Learning from Science and Sport - How we, Safety, "Engage with Rigor"

    Science.gov (United States)

    Herd, A.

    2012-01-01

    As the world of spaceflight safety is relatively small and potentially inward-looking, we need to be aware of the "outside world". We should then try to remind ourselves to be open to the possibility that data, knowledge or experience from outside of the spaceflight community may provide some constructive alternate perspectives. This paper will assess aspects from two seemingly tangential fields, science and sport, and align these with the world of safety. In doing so, some useful insights will be offered into the challenges we face, and possible solutions relevant to our everyday work of safety engineering may emerge. Sport, particularly a contact sport such as rugby union, requires direct interaction between members of two (opposing) teams: professional, accurately timed and positioned interaction for a desired outcome. These interactions, whilst an essential part of the game, are however not without their constraints. The rugby scrum has constraints as to the formation and engagement of the two teams. The controlled engagement provides for an interaction between the two teams in a safe manner, the constraints arising from the reality that an incorrect engagement could cause serious injury to members of either team. In academia, scientific rigor is applied to assure that the arguments provided and the conclusions drawn in academic papers presented for publication are valid, legitimate and credible. The need for rigor may be expressed in the example of achieving a statistically relevant sample size, n, in order to assure the validity of analysis of the data pool. A failure to apply rigor could then place the entire study at risk of failing to have the respective paper published. This paper will consider the merits of these two different aspects, scientific rigor and sports engagement, and offer a reflective look at how this may provide a "modus operandi" for safety engineers at any level whether at their desks (creating or reviewing safety assessments) or in a

  6. Rigor force responses of permeabilized fibres from fast and slow skeletal muscles of aged rats.

    Science.gov (United States)

    Plant, D R; Lynch, G S

    2001-09-01

    1. Ageing is generally associated with a decline in skeletal muscle mass and strength and a slowing of muscle contraction, factors that impact upon the quality of life for the elderly. The mechanisms underlying this age-related muscle weakness have not been fully resolved. The purpose of the present study was to determine whether the decrease in muscle force as a consequence of age could be attributed partly to a decrease in the number of cross-bridges participating during contraction. 2. Given that the rigor force is proportional to the approximate total number of interacting sites between the actin and myosin filaments, we tested the null hypothesis that the rigor force of permeabilized muscle fibres from young and old rats would not be different. 3. Permeabilized fibres from the extensor digitorum longus (fast-twitch; EDL) and soleus (predominantly slow-twitch) muscles of young (6 months of age) and old (27 months of age) male F344 rats were activated in Ca2+-buffered solutions to determine force-pCa characteristics (where pCa = -log10[Ca2+]) and then in solutions lacking ATP and Ca2+ to determine rigor force levels. 4. The rigor forces for EDL and soleus muscle fibres were not different between young and old rats, indicating that the approximate total number of cross-bridges that can be formed between filaments did not decline with age. We conclude that the age-related decrease in force output is more likely attributed to a decrease in the force per cross-bridge and/or decreases in the efficiency of excitation-contraction coupling.

  7. Rigorous Integration of Non-Linear Ordinary Differential Equations in Chebyshev Basis

    Czech Academy of Sciences Publication Activity Database

    Dzetkulič, Tomáš

    2015-01-01

    Roč. 69, č. 1 (2015), s. 183-205 ISSN 1017-1398 R&D Projects: GA MŠk OC10048; GA ČR GD201/09/H057 Institutional research plan: CEZ:AV0Z10300504 Keywords : Initial value problem * Rigorous integration * Taylor model * Chebyshev basis Subject RIV: IN - Informatics, Computer Science Impact factor: 1.366, year: 2015

  8. A rigorous proof of the Landau-Peierls formula and much more

    DEFF Research Database (Denmark)

    Briet, Philippe; Cornean, Horia; Savoie, Baptiste

    2012-01-01

    We present a rigorous mathematical treatment of the zero-field orbital magnetic susceptibility of a non-interacting Bloch electron gas, at fixed temperature and density, for both metals and semiconductors/insulators. In particular, we obtain the Landau-Peierls formula in the low temperature and density limit as conjectured by Kjeldaas and Kohn (Phys Rev 105:806–813, 1957).

  9. Association Between Maximal Skin Dose and Breast Brachytherapy Outcome: A Proposal for More Rigorous Dosimetric Constraints

    International Nuclear Information System (INIS)

    Cuttino, Laurie W.; Heffernan, Jill; Vera, Robyn; Rosu, Mihaela; Ramakrishnan, V. Ramesh; Arthur, Douglas W.

    2011-01-01

    Purpose: Multiple investigations have used the skin distance as a surrogate for the skin dose and have shown that distances 4.05 Gy/fraction. Conclusion: The initial skin dose recommendations have been based on safe use and the avoidance of significant toxicity. The results from the present study have suggested that patients might further benefit if more rigorous constraints were applied and if the skin dose were limited to 120% of the prescription dose.

  10. Re-establishment of rigor mortis: evidence for a considerably longer post-mortem time span.

    Science.gov (United States)

    Crostack, Chiara; Sehner, Susanne; Raupach, Tobias; Anders, Sven

    2017-07-01

    Re-establishment of rigor mortis following mechanical loosening is used as part of the complex method for the forensic estimation of the time since death in human bodies and has formerly been reported to occur up to 8-12 h post-mortem (hpm). We recently described our observation of the phenomenon in up to 19 hpm in cases with in-hospital death. Due to the case selection (preceding illness, immobilisation), transfer of these results to forensic cases might be limited. We therefore examined 67 out-of-hospital cases of sudden death with known time points of death. Re-establishment of rigor mortis was positive in 52.2% of cases and was observed up to 20 hpm. In contrast to the current doctrine that a recurrence of rigor mortis is always of a lesser degree than its first manifestation in a given patient, muscular rigidity at re-establishment equalled or even exceeded the degree observed before dissolving in 21 joints. Furthermore, this is the first study to describe that the phenomenon appears to be independent of body or ambient temperature.

  11. The MIXED framework: A novel approach to evaluating mixed-methods rigor.

    Science.gov (United States)

    Eckhardt, Ann L; DeVon, Holli A

    2017-10-01

    Evaluation of rigor in mixed-methods (MM) research is a persistent challenge due to the combination of inconsistent philosophical paradigms, the use of multiple research methods which require different skill sets, and the need to combine research at different points in the research process. Researchers have proposed a variety of ways to thoroughly evaluate MM research, but each method fails to provide a framework that is useful for the consumer of research. In contrast, the MIXED framework is meant to bridge the gap between an academic exercise and practical assessment of a published work. The MIXED framework (methods, inference, expertise, evaluation, and design) borrows from previously published frameworks to create a useful tool for the evaluation of a published study. The MIXED framework uses an experimental eight-item scale that allows for comprehensive integrated assessment of MM rigor in published manuscripts. Mixed methods are becoming increasingly prevalent in nursing and healthcare research requiring researchers and consumers to address issues unique to MM such as evaluation of rigor. © 2017 John Wiley & Sons Ltd.

  12. Application of the rigorous method to x-ray and neutron beam scattering on rough surfaces

    International Nuclear Information System (INIS)

    Goray, Leonid I.

    2010-01-01

    The paper presents a comprehensive numerical analysis of x-ray and neutron scattering from finite-conducting rough surfaces which is performed in the frame of the boundary integral equation method in a rigorous formulation for high ratios of characteristic dimension to wavelength. The single integral equation obtained involves boundary integrals of the single and double layer potentials. A more general treatment of the energy conservation law applicable to absorption gratings and rough mirrors is considered. In order to compute the scattering intensity of rough surfaces using the forward electromagnetic solver, Monte Carlo simulation is employed to average the deterministic diffraction grating efficiency due to individual surfaces over an ensemble of realizations. Some rules appropriate for numerical implementation of the theory at small wavelength-to-period ratios are presented. The difference between the rigorous approach and approximations can be clearly seen in specular reflectances of Au mirrors with different roughness parameters at wavelengths where grazing incidence occurs at close to or larger than the critical angle. This difference may give rise to wrong estimates of rms roughness and correlation length if they are obtained by comparing experimental data with calculations. Besides, the rigorous approach permits taking into account any known roughness statistics and allows exact computation of diffuse scattering.
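
    The ensemble-averaging step can be illustrated compactly. In the sketch below the deterministic solver is replaced by the classical Debye-Waller specular-reflectance factor so that the example runs; in the paper that role is played by the rigorous boundary integral equation solver applied to each surface realization. The profile-generation recipe, the parameter values (nanometre units) and the helper names are our own assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def rough_profile(n, dx, sigma, corr):
          """One realization of a random rough height profile: white noise
          smoothed by a Gaussian kernel (correlation length ~corr), scaled so
          the ensemble rms height is sigma."""
          noise = rng.standard_normal(n)
          xs = (np.arange(n) - n // 2) * dx
          kernel = np.exp(-(xs / corr) ** 2)
          h = np.real(np.fft.ifft(np.fft.fft(noise) * np.fft.fft(np.fft.ifftshift(kernel))))
          return h * (sigma / np.sqrt(np.sum(kernel ** 2)))

      def specular_reflectance(h, wavelength, grazing_deg):
          """Stand-in for the deterministic solver: the classical Debye-Waller
          factor evaluated with the realized rms height. A rigorous code would
          solve the boundary integral equation for this exact profile instead."""
          k = 2.0 * np.pi / wavelength
          return np.exp(-(2.0 * k * h.std() * np.sin(np.deg2rad(grazing_deg))) ** 2)

      # Monte Carlo: average the per-realization result over the ensemble.
      R = [specular_reflectance(rough_profile(4096, 0.5, 0.8, 20.0), 0.154, 1.0)
           for _ in range(200)]
      print(f"mean reflectance {np.mean(R):.4f} +/- {np.std(R) / np.sqrt(len(R)):.4f}")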

  13. A Draft Conceptual Framework of Relevant Theories to Inform Future Rigorous Research on Student Service-Learning Outcomes

    Science.gov (United States)

    Whitley, Meredith A.

    2014-01-01

    While the quality and quantity of research on service-learning has increased considerably over the past 20 years, researchers as well as governmental and funding agencies have called for more rigor in service-learning research. One key variable in improving rigor is using relevant existing theories to improve the research. The purpose of this…

  14. Feedback for relatedness and competence : Can feedback in blended learning contribute to optimal rigor, basic needs, and motivation?

    NARCIS (Netherlands)

    Bombaerts, G.; Nickel, P.J.

    2017-01-01

    We inquire how peer and tutor feedback influences students' optimal rigor, basic needs and motivation. We analyze questionnaires from two courses in two subsequent years. We conclude that feedback in blended learning can contribute to rigor and basic needs, but it is not clear from our data what

  15. Optimal correction and design parameter search by modern methods of rigorous global optimization

    International Nuclear Information System (INIS)

    Makino, K.; Berz, M.

    2011-01-01

    Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
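
    The branch-and-bound idea described here fits in a few lines for a one-dimensional toy objective. In this sketch the "rigorous underestimator" is an exact interval-style range bound built by hand for the specific function; a production code (and the Differential Algebraic machinery referred to above) would generate such bounds automatically and control floating-point rounding, which this illustration does not.

      import math

      def f(x):
          return math.sin(3.0 * x) + 0.3 * x * x   # multimodal test objective

      def sin_min(lo, hi):
          """Exact minimum of sin on [lo, hi] (all interior minima of sin equal -1)."""
          k = math.ceil((lo - 1.5 * math.pi) / (2.0 * math.pi))
          if 1.5 * math.pi + 2.0 * math.pi * k <= hi:
              return -1.0
          return min(math.sin(lo), math.sin(hi))

      def f_lower(a, b):
          """Rigorous lower bound of f on [a, b]: exact range of the sine term
          plus the minimum of the quadratic term over the box."""
          xmin = 0.0 if a <= 0.0 <= b else min(abs(a), abs(b))
          return sin_min(3.0 * a, 3.0 * b) + 0.3 * xmin * xmin

      def branch_and_bound(a, b, tol=1e-4):
          best = f(0.5 * (a + b))                  # upper bound from one sample
          stack, keep = [(a, b)], []
          while stack:
              lo, hi = stack.pop()
              if f_lower(lo, hi) > best:           # box cannot contain the minimum
                  continue
              best = min(best, f(0.5 * (lo + hi))) # sharpen the upper bound
              if hi - lo < tol:
                  keep.append((lo, hi))            # candidate enclosure
              else:
                  mid = 0.5 * (lo + hi)
                  stack += [(lo, mid), (mid, hi)]
          # prune candidates once the final upper bound is known
          keep = [(l, h) for l, h in keep if f_lower(l, h) <= best]
          return best, keep

      val, boxes = branch_and_bound(-3.0, 3.0)
      print(f"global minimum <= {val:.6f}, enclosed by {len(boxes)} boxes")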

  16. Unmet Need: Improving mHealth Evaluation Rigor to Build the Evidence Base.

    Science.gov (United States)

    Mookherji, Sangeeta; Mehl, Garrett; Kaonga, Nadi; Mechael, Patricia

    2015-01-01

    mHealth, the use of mobile technologies for health, is a growing element of health system activity globally, but evaluation of those activities remains quite scant and is an important knowledge gap for advancing mHealth activities. In 2010, the World Health Organization and Columbia University implemented a small-scale survey to generate preliminary data on evaluation activities used by mHealth initiatives. The authors describe self-reported data from 69 projects in 29 countries. The majority (74%) reported some sort of evaluation activity, primarily nonexperimental in design (62%). The authors developed a 6-point scale of evaluation rigor comprising information on use of comparison groups, sample size calculation, data collection timing, and randomization. The mean score was low (2.4); half (47%) were conducting evaluations with a minimum threshold (4+) of rigor, indicating use of a comparison group, while less than 20% had randomized the mHealth intervention. The authors were unable to assess whether the rigor score was appropriate for the type of mHealth activity being evaluated. What was clear was that although most data came from mHealth pilot projects aimed at scale-up, few had designed evaluations that would support crucial decisions on whether to scale up and how. Whether the mHealth activity is a strategy to improve health or a tool for achieving intermediate outcomes that should lead to better health, mHealth evaluations must be improved to generate robust evidence for cost-effectiveness assessment and to allow for accurate identification of the contribution of mHealth initiatives to health systems strengthening and the impact on actual health outcomes.
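
    For illustration only: the 6-point rigor scale described above could be operationalized along the following lines. The abstract names the four ingredients but not their point weights, so the weights in this sketch are assumptions, not the authors' instrument.

      def rigor_score(comparison_group, sample_size_calc, pre_post_data, randomized):
          """Hypothetical reconstruction of a 0-6 design-rigor scale built from
          the four components named above; the point values are assumed."""
          score = 0
          score += 2 if comparison_group else 0   # assumed weight
          score += 1 if sample_size_calc else 0
          score += 1 if pre_post_data else 0      # data collected before and after
          score += 2 if randomized else 0         # assumed weight
          return score

      s = rigor_score(True, False, True, False)
      print(s, "meets 4+ threshold" if s >= 4 else "below 4+ threshold")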

  17. Effect of muscle restraint on sheep meat tenderness with rigor mortis at 18°C.

    Science.gov (United States)

    Devine, Carrick E; Payne, Steven R; Wells, Robyn W

    2002-02-01

    The effect on shear force of skeletal restraint and of removing muscles from lamb m. longissimus thoracis et lumborum (LT) immediately after slaughter and electrical stimulation was examined at a rigor temperature of 18°C (n=15). The temperature of 18°C was achieved through chilling of electrically stimulated sheep carcasses in air at 12°C with an air flow of 1-1.5 m/s. In other groups, the muscle was removed at 2.5 h post-mortem and either wrapped or left non-wrapped before being placed back on the carcass to follow carcass cooling regimes. Following rigor mortis, the meat was aged for 0, 16, 40 and 65 h at 15°C and frozen. For the non-stimulated samples, the meat was aged for 0, 12, 36 and 60 h before being frozen. The frozen meat was cooked to 75°C in an 85°C water bath and shear force values obtained from a 1 × 1 cm cross-section. Commencement of ageing was considered to take place at rigor mortis, and this was taken as zero-aged meat. There were no significant differences in the rate of tenderisation and initial shear force for all treatments. The 23% cook loss was similar for all wrapped and non-wrapped situations and the values decreased slightly with longer ageing durations. Wrapping was shown to mimic meat left intact on the carcass, as it prevented significant prerigor shortening. Such techniques allow muscles to be removed and placed in a controlled temperature environment to enable precise studies of ageing processes.

  18. Biomedical text mining for research rigor and integrity: tasks, challenges, directions.

    Science.gov (United States)

    Kilicoglu, Halil

    2017-06-13

    An estimated quarter of a trillion US dollars is invested in the biomedical research enterprise annually. There is growing alarm that a significant portion of this investment is wasted because of problems in reproducibility of research findings and in the rigor and integrity of research conduct and reporting. Recent years have seen a flurry of activities focusing on standardization and guideline development to enhance the reproducibility and rigor of biomedical research. Research activity is primarily communicated via textual artifacts, ranging from grant applications to journal publications. These artifacts can be both the source and the manifestation of practices leading to research waste. For example, an article may describe a poorly designed experiment, or the authors may reach conclusions not supported by the evidence presented. In this article, we pose the question of whether biomedical text mining techniques can assist the stakeholders in the biomedical research enterprise in doing their part toward enhancing research integrity and rigor. In particular, we identify four key areas in which text mining techniques can make a significant contribution: plagiarism/fraud detection, ensuring adherence to reporting guidelines, managing information overload and accurate citation/enhanced bibliometrics. We review the existing methods and tools for specific tasks, if they exist, or discuss relevant research that can provide guidance for future work. With the exponential increase in biomedical research output and the ability of text mining approaches to perform automatic tasks at large scale, we propose that such approaches can support tools that promote responsible research practices, providing significant benefits for the biomedical research enterprise. Published by Oxford University Press 2017. This work is written by a US Government employee and is in the public domain in the US.

  19. Study Design Rigor in Animal-Experimental Research Published in Anesthesia Journals.

    Science.gov (United States)

    Hoerauf, Janine M; Moss, Angela F; Fernandez-Bustamante, Ana; Bartels, Karsten

    2018-01-01

    Lack of reproducibility of preclinical studies has been identified as an impediment for translation of basic mechanistic research into effective clinical therapies. Indeed, the National Institutes of Health has revised its grant application process to require more rigorous study design, including sample size calculations, blinding procedures, and randomization steps. We hypothesized that the reporting of such metrics of study design rigor has increased over time for animal-experimental research published in anesthesia journals. PubMed was searched for animal-experimental studies published in 2005, 2010, and 2015 in primarily English-language anesthesia journals. A total of 1466 publications were graded on the performance of sample size estimation, randomization, and blinding. Cochran-Armitage test was used to assess linear trends over time for the primary outcome of whether or not a metric was reported. Interrater agreement for each of the 3 metrics (power, randomization, and blinding) was assessed using the weighted κ coefficient in a 10% random sample of articles rerated by a second investigator blinded to the ratings of the first investigator. A total of 1466 manuscripts were analyzed. Reporting for all 3 metrics of experimental design rigor increased over time (2005 to 2010 to 2015): for power analysis, from 5% (27/516), to 12% (59/485), to 17% (77/465); for randomization, from 41% (213/516), to 50% (243/485), to 54% (253/465); and for blinding, from 26% (135/516), to 38% (186/485), to 47% (217/465). The weighted κ coefficients and 98.3% confidence interval indicate almost perfect agreement between the 2 raters beyond that which occurs by chance alone (power, 0.93 [0.85, 1.0], randomization, 0.91 [0.85, 0.98], and blinding, 0.90 [0.84, 0.96]). Our hypothesis that reported metrics of rigor in animal-experimental studies in anesthesia journals have increased during the past decade was confirmed. More consistent reporting, or explicit justification for absence
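
    Both analyses named above are easy to reproduce on the reported counts. The sketch below implements the Cochran-Armitage trend test directly (normal approximation) and uses scikit-learn's weighted Cohen's kappa for the inter-rater check; the power-analysis counts are the ones quoted in the abstract, while the two rater vectors are made-up stand-ins.

      import math
      from sklearn.metrics import cohen_kappa_score

      def cochran_armitage(successes, totals, scores):
          """Two-sided Cochran-Armitage test for a linear trend in proportions
          across ordered groups (normal approximation)."""
          N, X = sum(totals), sum(successes)
          p = X / N
          t = sum(s * (x - n * p) for s, x, n in zip(scores, successes, totals))
          var = p * (1 - p) * (sum(n * s * s for n, s in zip(totals, scores))
                               - sum(n * s for n, s in zip(totals, scores)) ** 2 / N)
          z = t / math.sqrt(var)
          return z, math.erfc(abs(z) / math.sqrt(2))   # z, two-sided p

      # Power-analysis reporting counts quoted above (2005, 2010, 2015).
      z, p = cochran_armitage([27, 59, 77], [516, 485, 465], scores=[0, 1, 2])
      print(f"z = {z:.2f}, p = {p:.2g}")

      # Inter-rater agreement on a binary metric: weighted Cohen's kappa.
      rater1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # hypothetical ratings
      rater2 = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
      print("linear-weighted kappa:", cohen_kappa_score(rater1, rater2, weights="linear"))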

  20. Rigorous spin-spin correlation function of Ising model on a special kind of Sierpinski Carpets

    International Nuclear Information System (INIS)

    Yang, Z.R.

    1993-10-01

    We have exactly calculated the rigorous spin-spin correlation function of the Ising model on a special kind of Sierpinski Carpets (SC's) by means of graph expansion and a combinatorial approach, and investigated the asymptotic behaviour in the limit of long distance. The results show that there is no long-range correlation between spins at any finite temperature, which indicates the absence of a phase transition and thus finally confirms the conclusion produced by the renormalization group method and other physical arguments. (author). 7 refs, 6 figs

  1. An efficient and rigorous thermodynamic library and optimal-control of a cryogenic air separation unit

    DEFF Research Database (Denmark)

    Gaspar, Jozsef; Ritschel, Tobias Kasper Skovborg; Jørgensen, John Bagterp

    2017-01-01

    ...non-linear model based control to achieve optimal techno-economic performance. Accordingly, this work presents a computationally efficient and novel approach for solving a tray-by-tray equilibrium model and its implementation for open-loop optimal control of a cryogenic distillation column. Here, the optimisation objective is to reduce the cost of compression in a volatile electricity market while meeting the production requirements, i.e. product flow rate and purity. This model is implemented in Matlab and uses the ThermoLib rigorous thermodynamic library. The present work represents a first step towards plant...

  2. A study into first-year engineering education success using a rigorous mixed methods approach

    DEFF Research Database (Denmark)

    van den Bogaard, M.E.D.; de Graaff, Erik; Verbraek, Alexander

    2015-01-01

    The aim of this paper is to combine qualitative and quantitative research methods into rigorous research into student success. Research methods have weaknesses that can be overcome by clever combinations. In this paper we use a situated study into student success as an example of how methods can be combined, with the resulting model tested using statistical techniques. The main elements of the model were student behaviour and student disposition, which were influenced by the students' perceptions of the education environment. The outcomes of the qualitative studies were useful in interpreting the outcomes of the structural equation...

  3. Supersymmetry and the Parisi-Sourlas dimensional reduction: A rigorous proof

    International Nuclear Information System (INIS)

    Klein, A.; Landau, L.J.; Perez, J.F.

    1984-01-01

    Functional integrals that are formally related to the average correlation functions of a classical field theory in the presence of random external sources are given a rigorous meaning. Their dimensional reduction to the Schwinger functions of the corresponding quantum field theory in two fewer dimensions is proven. This is done by reexpressing those functional integrals as expectations of a supersymmetric field theory. The Parisi-Sourlas dimensional reduction of a supersymmetric field theory to a usual quantum field theory in two fewer dimensions is proven. (orig.)

  4. A Rigorous Treatment of Energy Extraction from a Rotating Black Hole

    Science.gov (United States)

    Finster, F.; Kamran, N.; Smoller, J.; Yau, S.-T.

    2009-05-01

    The Cauchy problem is considered for the scalar wave equation in the Kerr geometry. We prove that by choosing a suitable wave packet as initial data, one can extract energy from the black hole, thereby putting superradiance, the wave analogue of the Penrose process, into a rigorous mathematical framework. We quantify the maximal energy gain. We also compute the infinitesimal change of mass and angular momentum of the black hole, in agreement with Christodoulou's result for the Penrose process. The main mathematical tool is our previously derived integral representation of the wave propagator.

  5. Pre-rigor temperature and the relationship between lamb tenderisation, free water production, bound water and dry matter.

    Science.gov (United States)

    Devine, Carrick; Wells, Robyn; Lowe, Tim; Waller, John

    2014-01-01

    The M. longissimus from lambs electrically stimulated at 15 min post-mortem were removed after grading, wrapped in polythene film and held at 4 (n=6), 7 (n=6), 15 (n=6, n=8) and 35°C (n=6) until rigor mortis, then aged at 15°C for 0, 4, 24 and 72 h post-rigor. Centrifuged free water increased exponentially, and bound water, dry matter and shear force decreased exponentially over time. Decreases in shear force and increases in free water were closely related (r^2 = 0.52) and were unaffected by pre-rigor temperatures. © 2013.

  6. Parent Management Training-Oregon Model: Adapting Intervention with Rigorous Research.

    Science.gov (United States)

    Forgatch, Marion S; Kjøbli, John

    2016-09-01

    Parent Management Training-Oregon Model (PMTO(®) ) is a set of theory-based parenting programs with status as evidence-based treatments. PMTO has been rigorously tested in efficacy and effectiveness trials in different contexts, cultures, and formats. Parents, the presumed agents of change, learn core parenting practices, specifically skill encouragement, limit setting, monitoring/supervision, interpersonal problem solving, and positive involvement. The intervention effectively prevents and ameliorates children's behavior problems by replacing coercive interactions with positive parenting practices. Delivery format includes sessions with individual families in agencies or families' homes, parent groups, and web-based and telehealth communication. Mediational models have tested parenting practices as mechanisms of change for children's behavior and found support for the theory underlying PMTO programs. Moderating effects include children's age, maternal depression, and social disadvantage. The Norwegian PMTO implementation is presented as an example of how PMTO has been tailored to reach diverse populations as delivered by multiple systems of care throughout the nation. An implementation and research center in Oslo provides infrastructure and promotes collaboration between practitioners and researchers to conduct rigorous intervention research. Although evidence-based and tested within a wide array of contexts and populations, PMTO must continue to adapt to an ever-changing world. © 2016 Family Process Institute.

  7. Efficiency versus speed in quantum heat engines: Rigorous constraint from Lieb-Robinson bound

    Science.gov (United States)

    Shiraishi, Naoto; Tajima, Hiroyasu

    2017-08-01

    The long-standing open problem of whether a heat engine with finite power can achieve the Carnot efficiency is investigated. We rigorously prove a general trade-off inequality between thermodynamic efficiency and the time interval of a cyclic process with quantum heat engines. As a first step, employing the Lieb-Robinson bound, we establish an inequality on the change in a local observable caused by an operation far from the support of that observable. This inequality provides a rigorous characterization of the following intuitive picture: most of the energy emitted from the engine to the cold bath remains near the engine when the cyclic process is finished. Using this description, we prove an upper bound on efficiency with the aid of quantum information geometry. Our result generally excludes the possibility of a process with finite speed at the Carnot efficiency in quantum heat engines. In particular, the obtained constraint covers engines evolving with non-Markovian dynamics, which almost all previous studies on this topic fail to address.

  8. Rigorous RG Algorithms and Area Laws for Low Energy Eigenstates in 1D

    Science.gov (United States)

    Arad, Itai; Landau, Zeph; Vazirani, Umesh; Vidick, Thomas

    2017-11-01

    One of the central challenges in the study of quantum many-body systems is the complexity of simulating them on a classical computer. A recent advance (Landau et al. in Nat Phys, 2015) gave a polynomial time algorithm to compute a succinct classical description for unique ground states of gapped 1D quantum systems. Despite this progress many questions remained unsolved, including whether there exist efficient algorithms when the ground space is degenerate (and of polynomial dimension in the system size), or for the polynomially many lowest energy states, or even whether such states admit succinct classical descriptions or area laws. In this paper we give a new algorithm, based on a rigorously justified RG type transformation, for finding low energy states for 1D Hamiltonians acting on a chain of n particles. In the process we resolve some of the aforementioned open questions, including giving a polynomial time algorithm for poly(n) degenerate ground spaces and an n^O(log n) algorithm for the poly(n) lowest energy states (under a mild density condition). For these classes of systems the existence of a succinct classical description and area laws were not rigorously proved before this work. The algorithms are natural and efficient, and for the case of finding unique ground states for frustration-free Hamiltonians the running time is Õ(n·M(n)), where M(n) is the time required to multiply two n × n matrices.

  9. Upgrading geometry conceptual understanding and strategic competence through implementing rigorous mathematical thinking (RMT)

    Science.gov (United States)

    Nugraheni, Z.; Budiyono, B.; Slamet, I.

    2018-03-01

    To reach higher order thinking skills (HOTS), students need to master conceptual understanding and strategic competence, the two basic parts of HOTS. RMT is a unique realization of the cognitive conceptual construction approach based on Feuerstein's theory of Mediated Learning Experience (MLE) and Vygotsky's sociocultural theory. This was a quasi-experimental study comparing an experimental class given Rigorous Mathematical Thinking (RMT) as the learning method with a control class given Direct Learning (DL) as the conventional learning activity. The study examined whether the two learning models had different effects on the conceptual understanding and strategic competence of junior high school students. The data were analyzed using Multivariate Analysis of Variance (MANOVA), which showed a significant difference between the experimental and control classes on mathematics conceptual understanding and strategic competence considered jointly (Wilks' Λ = 0.84). Further, independent t-tests showed significant differences between the two classes on both mathematical conceptual understanding and strategic competence. These results indicate that Rigorous Mathematical Thinking (RMT) had a positive impact on mathematics conceptual understanding and strategic competence.

  10. "Snow White" Coating Protects SpaceX Dragon's Trunk Against Rigors of Space

    Science.gov (United States)

    McMahan, Tracy

    2013-01-01

    He described it as "snow white." But NASA astronaut Don Pettit was not referring to the popular children's fairy tale. Rather, he was talking about the white coating of the Space Exploration Technologies Corp. (SpaceX) Dragon spacecraft that reflected the International Space Station's light. As it approached the station for the first time in May 2012, the Dragon's trunk might have been described as the "fairest of them all" for its pristine coating, allowing Pettit to clearly see to maneuver the robotic arm to grab the Dragon for a successful nighttime berthing. This protective thermal control coating, developed by Alion Science and Technology Corp., based in McLean, Va., made its bright appearance again with the March 1 launch of SpaceX's second commercial resupply mission. Named Z-93C55, the coating was applied to the cargo portion of the Dragon to protect it from the rigors of space. "For decades, Alion has produced coatings to protect against the rigors of space," said Michael Kenny, senior chemist with Alion. "As space missions evolved, there was a growing need to dissipate electrical charges that build up on the exteriors of spacecraft, or there could be damage to the spacecraft's electronics. Alion's research led us to develop materials that would meet this goal while also providing thermal controls. The outcome of this research was Alion's proprietary Z-93C55 coating."

  11. Rigorous Screening Technology for Identifying Suitable CO2 Storage Sites II

    Energy Technology Data Exchange (ETDEWEB)

    George J. Koperna Jr.; Vello A. Kuuskraa; David E. Riestenberg; Aiysha Sultana; Tyler Van Leeuwen

    2009-06-01

    This report serves as the final technical report and user's manual for the "Rigorous Screening Technology for Identifying Suitable CO2 Storage Sites II" SBIR project. Advanced Resources International has developed a screening tool by which users can technically screen, assess the storage capacity of, and quantify the costs of CO2 storage in four types of CO2 storage reservoirs. These include CO2-enhanced oil recovery reservoirs, depleted oil and gas fields (non-enhanced oil recovery candidates), deep coal seams that are amenable to CO2-enhanced methane recovery, and saline reservoirs. The screening function assesses whether the reservoir could likely serve as a safe, long-term CO2 storage reservoir. The storage capacity assessment uses rigorous reservoir simulation models to determine the timing, ultimate storage capacity, and potential for enhanced hydrocarbon recovery. Finally, the economic assessment function determines both the field-level and pipeline (transportation) costs for CO2 sequestration in a given reservoir. The screening tool was peer reviewed at an Electric Power Research Institute (EPRI) technical meeting in March 2009. A number of useful observations and recommendations emerged from the workshop on the costs of CO2 transport and storage that could be readily incorporated into a commercial version of the Screening Tool in a Phase III SBIR.

  12. A Generalized Method for the Comparable and Rigorous Calculation of the Polytropic Efficiencies of Turbocompressors

    Science.gov (United States)

    Dimitrakopoulos, Panagiotis

    2018-03-01

    The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, like compressor impellers, stages and stage groups. Such calculations are also crucial for the determination of the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need has emerged for a new, rigorous, robust, accurate and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536%. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
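
    For contrast with the rigorous real-gas approach, the classical ideal-gas shortcut that uses the same endpoint inputs (suction and discharge pressure and temperature) fits in one line; it is exactly the kind of approximation whose error the proposed method is meant to remove. The values below are illustrative.

      import math

      def polytropic_efficiency_ideal(p1, T1, p2, T2, kappa=1.4):
          """Classical ideal-gas estimate of polytropic compression efficiency
          from suction (p1, T1) and discharge (p2, T2) conditions; kappa is the
          isentropic exponent (1.4 ~ air). Temperatures in kelvin, pressures in
          any consistent unit."""
          return ((kappa - 1.0) / kappa) * math.log(p2 / p1) / math.log(T2 / T1)

      # Example: air compressed 1 -> 4 bar, 293 K -> 470 K (illustrative values).
      print(f"eta_p = {polytropic_efficiency_ideal(1.0, 293.0, 4.0, 470.0):.3f}")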

  13. Differential algebras with remainder and rigorous proofs of long-term stability

    International Nuclear Information System (INIS)

    Berz, Martin

    1997-01-01

    It is shown how, in addition to determining Taylor maps of general optical systems, it is possible to obtain rigorous interval bounds for the remainder term of the n-th order Taylor expansion. To this end, the three elementary operations of addition, multiplication, and differentiation in the Differential Algebraic approach are augmented by suitable interval operations in such a way that a remainder bound of the sum, product, and derivative is obtained from the Taylor polynomials and remainder bounds of the operands. The method can be used to obtain bounds for the accuracy with which a Taylor map represents the true map of the particle optical system. In a more general sense, it is also useful for a variety of other numerical problems, including rigorous global optimization of highly complex functions. Combined with methods to obtain pseudo-invariants of repetitive motion and extensions of the Lyapunov and Nekhoroshev stability theories, the latter can be used to guarantee stability for storage rings and other weakly nonlinear systems.

  14. How to Map Theory: Reliable Methods Are Fruitless Without Rigorous Theory.

    Science.gov (United States)

    Gray, Kurt

    2017-09-01

    Good science requires both reliable methods and rigorous theory. Theory allows us to build a unified structure of knowledge, to connect the dots of individual studies and reveal the bigger picture. Some have criticized the proliferation of pet "Theories," but generic "theory" is essential to healthy science, because questions of theory are ultimately those of validity. Although reliable methods and rigorous theory are synergistic, Action Identification theory suggests a psychological tension between them: the more we focus on methodological details, the less we notice the broader connections. Therefore, psychology needs to supplement training in methods (how to design studies and analyze data) with training in theory (how to connect studies and synthesize ideas). This article provides a technique for visually outlining theory: theory mapping. Theory mapping contains five elements, which are illustrated with moral judgment and with cars. Also included are 15 additional theory maps provided by experts in emotion, culture, priming, power, stress, ideology, morality, marketing, decision-making, and more (see all at theorymaps.org). Theory mapping provides both precision and synthesis, which helps to resolve arguments, prevent redundancies, assess the theoretical contribution of papers, and evaluate the likelihood of surprising effects.

  15. Rigorous Combination of GNSS and VLBI: How it Improves Earth Orientation and Reference Frames

    Science.gov (United States)

    Lambert, S. B.; Richard, J. Y.; Bizouard, C.; Becker, O.

    2017-12-01

    Current reference series (C04) of the International Earth Rotation and Reference Systems Service (IERS) are produced by a weighted combination of Earth orientation parameter (EOP) time series built up by the combination centers of each technique (VLBI, GNSS, laser ranging, DORIS). In the future, we plan to derive EOP from a rigorous combination of the normal equation systems of the four techniques. We present here the results of a rigorous combination of VLBI and GNSS pre-reduced, constraint-free normal equations with the DYNAMO geodetic analysis software package developed and maintained by the French GRGS (Groupe de Recherche en Géodésie Spatiale). The normal equations used are those produced separately by the IVS and IGS combination centers, to which we apply our own minimal constraints. We address the usefulness of such a method with respect to the classical, a posteriori combination method, and we show whether EOP determinations are improved. In particular, we implement external validations of the EOP series based on comparison with geophysical excitation and examination of the covariance matrices. Finally, we address the potential of the technique for the next-generation celestial reference frames, which are currently determined by VLBI only.

  16. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    Science.gov (United States)

    Wittig, Alexander N.

    The well established concept of Taylor Models is introduced, which offers highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions with rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval data type are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near-standard Hénon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals, and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by
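
    The flavor of such a verification can be conveyed with a much simpler tool than Taylor models: a Banach fixed-point check on an interval. The toy sketch below (our own example; the directed rounding needed for full rigor is omitted) verifies that f(x) = cos(x) has a unique fixed point in [0.6, 0.9]:

        import math

        # f must map X = [a, b] into itself, and |f'| < 1 on X.
        a, b = 0.6, 0.9
        fa, fb = math.cos(b), math.cos(a)    # cos is decreasing, so f([a,b]) = [cos b, cos a]
        maps_into = (a <= fa) and (fb <= b)
        contraction = math.sin(b) < 1.0      # |f'(x)| = sin(x) <= sin(b) on [0.6, 0.9]
        print(maps_into and contraction)     # True: existence and uniqueness verified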

  17. Dosimetric effects of edema in permanent prostate seed implants: a rigorous solution

    International Nuclear Information System (INIS)

    Chen Zhe; Yue Ning; Wang Xiaohong; Roberts, Kenneth B.; Peschel, Richard; Nath, Ravinder

    2000-01-01

    Purpose: To derive a rigorous analytic solution for the dosimetric effects of prostate edema so that its impact on conventional pre-implant and post-implant dosimetry can be studied for any given radioactive isotope and edema characteristics. Methods and Materials: The edema characteristics observed by Waterman et al. (Int. J. Radiat. Oncol. Biol. Phys. 41:1069-1077; 1998) were used to model the time evolution of the prostate and the seed locations. The total dose to any part of the prostate tissue from a seed implant was calculated analytically by parameterizing the dose fall-off from a radioactive seed as a single inverse power function of distance, with proper account of the edema-induced time evolution. The dosimetric impact of prostate edema was determined by comparing the dose calculated with full consideration of prostate edema to that calculated with the conventional dosimetry approach, where the seed locations and the target volume are assumed to be stationary. Results: A rigorous analytic solution for the relative dosimetric effects of prostate edema was obtained. This solution proved explicitly that the relative dosimetric effects of edema, as found in the previous numerical studies by Yue et al. (Int. J. Radiat. Oncol. Biol. Phys. 43, 447-454, 1999), are independent of the size and shape of the implant target volume and of the number and locations of the seeds implanted. It also showed that the magnitude of the relative dosimetric effects is independent of the location of the dose evaluation point within the edematous target volume. This implies that the relative dosimetric effects of prostate edema are universal with respect to a given isotope and edema characteristic. A set of master tables of the relative dosimetric effects of edema was obtained for a wide range of edema characteristics for both 125I and 103Pd prostate seed implants. Conclusions: A rigorous analytic solution of the relative dosimetric effects of prostate edema has been obtained.
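
    A hedged numerical sketch of the kind of calculation involved: the cumulative dose from a decaying seed whose distance to the evaluation point relaxes as the edema resolves. The isotropic-swelling form of r(t), the exponential resolution, and every parameter value below are our own illustrative assumptions, not the paper's:

        import numpy as np

        lam = np.log(2) / 59.4            # I-125 decay constant, 1/day
        n, delta, tau = 2.0, 0.5, 10.0    # fall-off exponent, edema magnitude, resolution time (days)

        def relative_dose(r0=1.0, days=365.0, steps=200000):
            t = np.linspace(0.0, days, steps)
            r = r0 * (1.0 + delta * np.exp(-t / tau)) ** (1.0 / 3.0)  # isotropic swelling
            rate = np.exp(-lam * t) / r**n                            # dose rate ~ activity / r^n
            dose_edema = np.sum(rate) * (t[1] - t[0])                 # crude quadrature
            dose_static = (1.0 - np.exp(-lam * days)) / (lam * r0**n) # stationary geometry
            return dose_edema / dose_static

        print(round(relative_dose(), 3))   # < 1: edema initially pushes tissue away from the seed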

  18. Cross Validating Ocean Prediction and Monitoring Systems

    National Research Council Canada - National Science Library

    Mooers, Christopher; Meinen, Christopher; Baringer, Molly; Bang, Inkweon; Rhodes, Robert C; Barron, Charlie N; Bub, Frank

    2005-01-01

    With the ongoing development of ocean circulation models and real-time observing systems, routine estimation of the synoptic state of the ocean is becoming feasible for practical and scientific purposes...

  19. Study of the quality characteristics in cold-smoked salmon (Salmo salar) originating from pre- or post-rigor raw material.

    Science.gov (United States)

    Birkeland, S; Akse, L

    2010-01-01

    Improved slaughtering procedures in the salmon industry have caused a delayed onset of rigor mortis and, thus, a potential for pre-rigor secondary processing. The aim of this study was to investigate the effect of rigor status at the time of processing on quality traits (color, texture, sensory, and microbiological) in injection-salted, cold-smoked Atlantic salmon (Salmo salar). Injection of pre-rigor fillets caused a significant (P<0.05) difference relative to post-rigor processed fillets; post-rigor fillets (1477 ± 38 g) had a significantly (P<0.05) higher fracturability than pre-rigor fillets (1369 ± 71 g). Pre-rigor fillets also differed significantly (P<0.05) from post-rigor fillets (37.8 ± 0.8) and had significantly lower (P<0.05) values than post-rigor processed fillets. This study showed that similar quality characteristics can be obtained in cold-smoked products processed either pre- or post-rigor when suitable injection-salting protocols and smoking techniques are used. © 2010 Institute of Food Technologists®

  20. Paulo Leminski: um estudo sobre o rigor e o relaxo em suas poesias

    OpenAIRE

    Dhynarte de Borba e Albuquerque

    2005-01-01

    This study examines the trajectory of Paulo Leminski's poetry, seeking to establish the terms of its humor, its metalinguistic inquiry, and its lyric self, a poetry that also shows traces of the marginal poetry of the 1970s. He was an author who pursued concretist rigor through the procedures of more or less relaxed everyday speech. The poetic effort of the Curitiba-born Leminski is a “line that never ends”: he wrote poems, novels, advertising pieces, song lyrics, and translations. Em t...

  1. Rigorous decoupling between edge states in frustrated spin chains and ladders

    Science.gov (United States)

    Chepiga, Natalia; Mila, Frédéric

    2018-05-01

    We investigate the occurrence of exact zero modes in one-dimensional quantum magnets of finite length that possess edge states. Building on conclusions first reached in the context of the spin-1/2 XY chain in a field and then for the spin-1 J1-J2 Heisenberg model, we show that the development of incommensurate correlations in the bulk invariably leads to oscillations in the sign of the coupling between edge states, and hence to exact zero energy modes at the crossing points where the coupling between the edge states rigorously vanishes. This is true regardless of the origin of the frustration (e.g., next-nearest-neighbor coupling or biquadratic coupling for the spin-1 chain), of the value of the bulk spin (we report on spin-1/2, spin-1, and spin-2 examples), and of the value of the edge-state emergent spin (spin-1/2 or spin-1).

  2. Using Project Complexity Determinations to Establish Required Levels of Project Rigor

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Thomas D.

    2015-10-01

    This presentation discusses the project complexity determination process that was developed by National Security Technologies, LLC, for the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office for implementation at the Nevada National Security Site (NNSS). The complexity determination process was developed to address the diversity of NNSS project types, sizes, and complexities; to fill the need for a single procedure with provision for tailoring the level of rigor to the project type, size, and complexity; to provide consistent, repeatable, effective application of project management processes across the enterprise; and to achieve higher levels of efficiency in project delivery. These needs are illustrated by the wide diversity of NNSS projects: Defense Experimentation, Global Security, weapons tests, military training areas, sensor development and testing, training in realistic environments, intelligence community support, environmental restoration/waste management, and disposal of radioactive waste, among others.

  3. Guidelines for conducting rigorous health care psychosocial cross-cultural/language qualitative research.

    Science.gov (United States)

    Arriaza, Pablo; Nedjat-Haiem, Frances; Lee, Hee Yun; Martin, Shadi S

    2015-01-01

    The purpose of this article is to synthesize and chronicle the authors' experiences as four bilingual and bicultural researchers, each experienced in conducting cross-cultural/cross-language qualitative research. Through narrative descriptions of experiences with Latinos, Iranians, and Hmong refugees, the authors discuss their rewards, challenges, and methods of enhancing rigor, trustworthiness, and transparency when conducting cross-cultural/cross-language research. The authors discuss and explore how to effectively manage cross-cultural qualitative data, how to effectively use interpreters and translators, how to identify best methods of transcribing data, and the role of creating strong community relationships. The authors provide guidelines for health care professionals to consider when engaging in cross-cultural qualitative research.

  4. Release of major ions during rigor mortis development in kid Longissimus dorsi muscle.

    Science.gov (United States)

    Feidt, C; Brun-Bellut, J

    1999-01-01

    Ionic strength plays an important role in post mortem muscle changes. Its increase is due to ion release during the development of rigor mortis. Twelve Alpine kids were used to study the effects of chilling and meat pH on ion release. Free ions were measured in the Longissimus dorsi muscle by capillary electrophoresis after water extraction. All free ion concentrations increased after death, but there were differences between ions. Temperature was not a factor affecting ion release, in contrast to the ultimate pH value. Three release mechanisms are believed to coexist: passive binding to proteins, which stops as the pH decreases; active segregation, which stops as ATP disappears; and the production of metabolites by anaerobic glycolysis.

  5. Revisiting the scientific method to improve rigor and reproducibility of immunohistochemistry in reproductive science.

    Science.gov (United States)

    Manuel, Sharrón L; Johnson, Brian W; Frevert, Charles W; Duncan, Francesca E

    2018-04-21

    Immunohistochemistry (IHC) is a robust scientific tool whereby cellular components are visualized within a tissue, and this method has been and continues to be a mainstay for many reproductive biologists. IHC is highly informative if performed and interpreted correctly, but studies have shown that the general use and reporting of appropriate controls in IHC experiments is low. This omission of the scientific method can result in data that lacks rigor and reproducibility. In this editorial, we highlight key concepts in IHC controls and describe an opportunity for our field to partner with the Histochemical Society to adopt their IHC guidelines broadly as researchers, authors, ad hoc reviewers, editorial board members, and editors-in-chief. Such cross-professional society interactions will ensure that we produce the highest quality data as new technologies emerge that still rely upon the foundations of classic histological and immunohistochemical principles.

  6. Rigorous approach to the comparison between experiment and theory in Casimir force measurements

    International Nuclear Information System (INIS)

    Klimchitskaya, G L; Chen, F; Decca, R S; Fischbach, E; Krause, D E; Lopez, D; Mohideen, U; Mostepanenko, V M

    2006-01-01

    In most experiments on the Casimir force the comparison between measurement data and theory was done using the concept of the root-mean-square deviation, a procedure that has been criticized in the literature. Here we propose a special statistical analysis which should be performed separately for the experimental data and for the results of the theoretical computations. In so doing, the random, systematic and total experimental errors are found as functions of separation, taking into account the distribution laws for each error at 95% confidence. Independently, all theoretical errors are combined to obtain the total theoretical error at the same confidence. Finally, the confidence interval for the differences between theoretical and experimental values is obtained as a function of separation. This rigorous approach is applied to two recent experiments on the Casimir effect.
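
    The closing step of such an analysis can be illustrated generically: at each separation, form the confidence interval for the theory-experiment difference from the two independently determined total errors and check whether it covers zero. All numbers below are invented, and the root-sum-square combination is our own simplification of the paper's procedure:

        import numpy as np

        sep = np.array([200.0, 300.0, 400.0])       # separations, nm
        P_exp = np.array([-510.0, -150.0, -62.0])   # measured values (illustrative)
        dP_exp = np.array([6.0, 2.5, 1.5])          # total experimental error, 95% confidence
        P_th = np.array([-505.0, -148.0, -61.0])    # computed values (illustrative)
        dP_th = np.array([4.0, 2.0, 1.0])           # total theoretical error, 95% confidence

        diff = P_th - P_exp
        half_width = np.sqrt(dP_exp**2 + dP_th**2)  # half-width of the CI for the difference
        print(list(zip(sep, np.abs(diff) <= half_width)))   # True where theory and data agree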

  7. Direct integration of the S-matrix applied to rigorous diffraction

    International Nuclear Information System (INIS)

    Iff, W; Lindlein, N; Tishchenko, A V

    2014-01-01

    A novel Fourier method for rigorous diffraction computation at periodic structures is presented. The procedure is based on a differential equation for the S-matrix, which allows direct integration of the S-matrix blocks. This results in a new method in Fourier space, which can be considered a numerically stable and well-parallelizable alternative to the conventional differential method based on T-matrix integration and subsequent conversions from the T-matrices to S-matrix blocks. Integration of the novel differential equation in an implicit manner is expounded. The applicability of the new method is shown on the basis of 1D periodic structures. It is clear, however, that the new technique can also be applied to arbitrary 2D periodic or periodized structures. The complexity of the new method is O(N^3), similar to the conventional differential method, with N being the number of diffraction orders. (fast track communication)
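
    For context, the standard numerically stable way to combine the S-matrices of two stacked sections is the Redheffer star product; the paper goes further and integrates the S-matrix blocks directly, but the composition rule below (a generic sketch, not the paper's algorithm; blocks 11/22 are reflections, 21/12 transmissions) shows why S-matrix algebra avoids the exponential growth that destabilizes T-matrices:

        import numpy as np

        def star(A, B):
            # Redheffer star product: section A followed by section B.
            I = np.eye(A['11'].shape[0])
            FA = np.linalg.inv(I - B['11'] @ A['22'])
            FB = np.linalg.inv(I - A['22'] @ B['11'])
            return {'11': A['11'] + A['12'] @ FA @ B['11'] @ A['21'],
                    '12': A['12'] @ FA @ B['12'],
                    '21': B['21'] @ FB @ A['21'],
                    '22': B['22'] + B['21'] @ FB @ A['22'] @ B['12']}

        # Sanity check: composing with the identity S-matrix leaves A unchanged.
        m, z = np.eye(2), np.zeros((2, 2))
        A = {'11': 0.3 * m, '12': 0.9 * m, '21': 0.9 * m, '22': -0.3 * m}
        Id = {'11': z, '12': m, '21': m, '22': z}
        print(all(np.allclose(star(A, Id)[k], A[k]) for k in A))   # True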

  8. Rigorous description of holograms of particles illuminated by an astigmatic elliptical Gaussian beam

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Y J; Ren, K F; Coetmellec, S; Lebrun, D, E-mail: fang.ren@coria.f [UMR 6614/CORIA, CNRS and Universite et INSA de Rouen Avenue de l' Universite BP 12, 76801 Saint Etienne du Rouvray (France)

    2009-02-01

    Digital holography is a non-intrusive optical metrology technique well adapted to measuring the size and velocity fields of particles in a fluid spray. The simplified model of an opaque disk is often used in the processing of the holograms, so the refraction and the three-dimensional diffraction of the particle are not taken into account. In this paper we present a rigorous description of the holographic patterns and evaluate the effects of refraction and three-dimensional diffraction by comparison with the opaque-disk model. The effects are found to be important when the real part of the refractive index is near unity or when the imaginary part is non-zero but small.

  9. A new method for deriving rigorous results on ππ scattering

    International Nuclear Information System (INIS)

    Caprini, I.; Dita, P.

    1979-06-01

    We develop a new approach to the problem of constraining the ππ scattering amplitudes by means of the axiomatically proved properties of unitarity, analyticity and crossing symmetry. The method is based on the solution of an extremal problem on a convex set of analytic functions and provides a global description of the domain of values taken by any finite number of partial waves at an arbitrary set of unphysical energies, compatible with unitarity, the bounds at complex energies derived from generalized dispersion relations and the crossing integral relations. From this domain we obtain new absolute bounds for the amplitudes as well as rigorous correlations between the values of various partial waves. (author)

  10. Rigorous Numerics for ill-posed PDEs: Periodic Orbits in the Boussinesq Equation

    Science.gov (United States)

    Castelli, Roberto; Gameiro, Marcio; Lessard, Jean-Philippe

    2018-04-01

    In this paper, we develop computer-assisted techniques for the analysis of periodic orbits of ill-posed partial differential equations. As a case study, our proposed method is applied to the Boussinesq equation, which has been investigated extensively because of its role in the theory of shallow water waves. The idea is to use the symmetry of the solutions and a Newton-Kantorovich type argument (the radii polynomial approach) to obtain rigorous proofs of existence of the periodic orbits in a weighted ℓ1 Banach space of space-time Fourier coefficients with exponential decay. We present several computer-assisted proofs of the existence of periodic orbits at different parameter values.
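
    The final step of the radii polynomial approach reduces to elementary algebra: given a defect bound Y0 and first- and second-order bounds Z1 and Z2 for the Newton-like operator, one looks for radii r > 0 with p(r) = Z2 r^2 - (1 - Z1) r + Y0 < 0, on which the operator is a contraction. A sketch under these generic conventions (the paper's actual bounds are computed from the PDE):

        import math

        def radii_interval(Y0, Z1, Z2):
            """Interval of r > 0 where Z2*r**2 - (1 - Z1)*r + Y0 < 0 (assumes Z2 > 0)."""
            if Z1 >= 1.0:
                return None                      # operator cannot be a contraction
            disc = (1.0 - Z1) ** 2 - 4.0 * Z2 * Y0
            if disc <= 0.0:
                return None                      # bounds too weak: no verified radius
            root = math.sqrt(disc)
            return ((1.0 - Z1) - root) / (2.0 * Z2), ((1.0 - Z1) + root) / (2.0 * Z2)

        print(radii_interval(Y0=1e-8, Z1=0.3, Z2=50.0))   # small rmin => tight enclosure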

  11. Study design elements for rigorous quasi-experimental comparative effectiveness research.

    Science.gov (United States)

    Maciejewski, Matthew L; Curtis, Lesley H; Dowd, Bryan

    2013-03-01

    Quasi-experiments are likely to be the workhorse study design used to generate evidence about the comparative effectiveness of alternative treatments, because of their feasibility, timeliness, affordability and external validity compared with randomized trials. In this review, we outline potential sources of discordance in results between quasi-experiments and experiments, review study design choices that can improve the internal validity of quasi-experiments, and outline innovative data linkage strategies that may be particularly useful in quasi-experimental comparative effectiveness research. There is an urgent need to resolve the debate about the evidentiary value of quasi-experiments since equal consideration of rigorous quasi-experiments will broaden the base of evidence that can be brought to bear in clinical decision-making and governmental policy-making.

  12. Increasing rigor in NMR-based metabolomics through validated and open source tools.

    Science.gov (United States)

    Eghbalnia, Hamid R; Romero, Pedro R; Westler, William M; Baskaran, Kumaran; Ulrich, Eldon L; Markley, John L

    2017-02-01

    The metabolome, the collection of small molecules associated with an organism, is a growing subject of inquiry, with the data utilized for data-intensive systems biology, disease diagnostics, biomarker discovery, and the broader characterization of small molecules in mixtures. Owing to their close proximity to the functional endpoints that govern an organism's phenotype, metabolites are highly informative about functional states. The field of metabolomics identifies and quantifies endogenous and exogenous metabolites in biological samples. Information acquired from nuclear magnetic resonance (NMR) spectroscopy, mass spectrometry (MS), and the published literature, as processed by statistical approaches, is driving increasingly wider applications of metabolomics. This review focuses on the role of databases and software tools in advancing the rigor, robustness, reproducibility, and validation of metabolomics studies. Copyright © 2016. Published by Elsevier Ltd.

  13. A rigorous phenomenological analysis of the ππ scattering lengths

    International Nuclear Information System (INIS)

    Caprini, I.; Dita, P.; Sararu, M.

    1979-11-01

    The constraining power of the present experimental data, combined with general theoretical knowledge about ππ scattering, upon the scattering lengths of this process is investigated by means of a rigorous functional method. We take as input the experimental phase shifts and make no hypotheses about the high-energy behaviour of the amplitudes, using only absolute bounds derived from axiomatic field theory and exact consequences of crossing symmetry. In the simplest application of the method, involving only the π0π0 S-wave, we explored numerically a number of values proposed by various authors for the scattering lengths a0 and a2 and found that none appears to be especially favoured. (author)

  14. New tools for Content Innovation and data sharing: Enhancing reproducibility and rigor in biomechanics research.

    Science.gov (United States)

    Guilak, Farshid

    2017-03-21

    We are currently in one of the most exciting times for science and engineering as we witness unprecedented growth in our computational and experimental capabilities to generate new data and models. To facilitate data and model sharing, and to enhance reproducibility and rigor in biomechanics research, the Journal of Biomechanics has introduced a number of tools for Content Innovation to allow presentation, sharing, and archiving of methods, models, and data in our articles. The tools include an Interactive Plot Viewer, 3D Geometric Shape and Model Viewer, Virtual Microscope, Interactive MATLAB Figure Viewer, and Audioslides. Authors are highly encouraged to make use of these in upcoming journal submissions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Early rigorous control interventions can largely reduce dengue outbreak magnitude: experience from Chaozhou, China.

    Science.gov (United States)

    Liu, Tao; Zhu, Guanghu; He, Jianfeng; Song, Tie; Zhang, Meng; Lin, Hualiang; Xiao, Jianpeng; Zeng, Weilin; Li, Xing; Li, Zhihao; Xie, Runsheng; Zhong, Haojie; Wu, Xiaocheng; Hu, Wenbiao; Zhang, Yonghui; Ma, Wenjun

    2017-08-02

    Dengue fever is a severe public health challenge in south China. A dengue outbreak was reported in Chaozhou city, China, in 2015, and intensified interventions were implemented by the government to control the epidemic. However, it remains unknown to what degree the intensified control measures reduced the size of the epidemic, and when such measures should be initiated to reduce the risk of large dengue outbreaks developing. We selected Xiangqiao district as the study setting because the majority of the indigenous cases (90.6%) in Chaozhou city were from this district. The numbers of daily indigenous dengue cases in 2015 were collected through the national infectious diseases and vectors surveillance system, and daily Breteau Index (BI) data were reported by the local public health department. We used a compartmental dynamic SEIR (Susceptible, Exposed, Infected and Removed) model to assess the effectiveness of the control interventions and to evaluate the effect of intervention timing on the dengue epidemic. A total of 1250 indigenous dengue cases were reported from Xiangqiao district. SEIR modeling using BI as an indicator of the actual control interventions yielded a total of 1255 dengue cases, close to the reported number (n = 1250). The size and duration of the outbreak were highly sensitive to the intensity and timing of the interventions: the earlier and more rigorously the control interventions were implemented, the more effective they were. Even when the interventions were initiated several weeks after the onset of the dengue outbreak, they were shown to greatly impact the prevalence and duration of the outbreak. This study suggests that early implementation of rigorous dengue interventions can effectively reduce the epidemic size and shorten the epidemic duration.
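
    To illustrate the modeling machinery (not the fitted Chaozhou model, which is driven by BI data and would normally also include vector compartments), a minimal human-only SEIR sketch with invented parameters:

        import numpy as np
        from scipy.integrate import odeint

        # Illustrative parameters only; the paper fits its model to surveillance data.
        N = 500000.0                              # district population (assumed)
        beta, sigma, gamma = 0.6, 1/5.9, 1/5.0    # transmission, incubation, recovery rates

        def seir(y, t):
            S, E, I, R = y
            new_inf = beta * S * I / N
            return [-new_inf, new_inf - sigma * E, sigma * E - gamma * I, gamma * I]

        t = np.linspace(0.0, 120.0, 121)
        S, E, I, R = odeint(seir, [N - 1.0, 0.0, 1.0, 0.0], t).T
        print(f"peak infectious: {I.max():.0f} on day {int(I.argmax())}")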

  16. Bringing scientific rigor to community-developed programs in Hong Kong

    Directory of Open Access Journals (Sweden)

    Fabrizio Cecilia S

    2012-12-01

    Full Text Available Abstract Background This paper describes efforts to generate evidence for community-developed programs to enhance family relationships in the Chinese culture of Hong Kong, within the framework of community-based participatory research (CBPR). Methods The CBPR framework was applied to help maximize the development of the intervention and the public health impact of the studies, while enhancing the capabilities of the social service sector partners. Results Four academic-community research teams explored the process of designing and implementing randomized controlled trials in the community. In addition to the expected cultural barriers between teams of academics and community practitioners, with their different outlooks, concerns and languages, the team navigated issues in utilizing the principles of CBPR unique to this Chinese culture. Eventually the team developed tools for adaptation, such as an emphasis on building the relationship while respecting role delineation and an iterative process of defining the non-negotiable parameters of research design while maintaining scientific rigor. Lessons learned include the risk of underemphasizing the size of the operational and skills shift between usual agency practices and research studies, the importance of minimizing non-negotiable parameters in implementing rigorous research designs in the community, and the need to view community capacity enhancement as a long term process. Conclusions The four pilot studies under the FAMILY Project demonstrated that nuanced design adaptations, such as wait list controls and shorter assessments, better served the needs of the community and led to the successful development and vigorous evaluation of a series of preventive, family-oriented interventions in the Chinese culture of Hong Kong.

  17. Rigorous Multicomponent Reactive Separations Modelling: Complete Consideration of Reaction-Diffusion Phenomena

    International Nuclear Information System (INIS)

    Ahmadi, A.; Meyer, M.; Rouzineau, D.; Prevost, M.; Alix, P.; Laloue, N.

    2010-01-01

    This paper presents the first step in the development of a rigorous multicomponent reactive separation model. Such a model is essential for the further optimization of acid gas removal plants (CO2 capture, gas treating, etc.) in terms of size and energy consumption, since chemical solvents are conventionally used. First, the two main modelling approaches are presented: the equilibrium-based and the rate-based approaches. Second, an extended rate-based model with a rigorous modelling methodology for diffusion-reaction phenomena is proposed. Film theory and the generalized Maxwell-Stefan equations are used to characterize multicomponent interactions. The complete chain of chemical reactions is taken into account; the reactions can be kinetically controlled or at chemical equilibrium, and they are considered in both the liquid film and the liquid bulk. Third, the method of numerical resolution is described. Coupling the generalized Maxwell-Stefan equations with the chemical equilibrium equations leads to a highly non-linear Differential-Algebraic Equations system known as DAE index 3. The set of equations is discretized with finite differences, as its direct integration by the Gear method is complex. The resulting algebraic system is solved by the Newton-Raphson method. Finally, the present model and the associated methods of numerical resolution are validated for the example of the esterification of methanol. This archetypal non-electrolytic system permits an interesting analysis of the impact of reaction on mass transfer, especially near the phase interface. The numerical resolution of the model by the Newton-Raphson method gives good results in terms of calculation time and convergence. The simulations show that the impacts on mass transfer of reactions at chemical equilibrium and of kinetically controlled reactions with fast kinetics are relatively similar. Moreover, Fick's law is less well adapted for multicomponent mixtures, where anomalies such as counter-diffusion can occur.
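
    The closing resolution step amounts to a Newton-Raphson iteration on the discretized algebraic system. A generic sketch on a toy 2-by-2 system with a finite-difference Jacobian (the actual model solves a far larger DAE system):

        import numpy as np

        def newton_raphson(F, x0, tol=1e-10, maxit=50, h=1e-7):
            """Solve F(x) = 0 with Newton-Raphson and a finite-difference Jacobian."""
            x = np.asarray(x0, dtype=float)
            for _ in range(maxit):
                f = F(x)
                if np.linalg.norm(f) < tol:
                    break
                J = np.empty((f.size, x.size))
                for j in range(x.size):
                    xp = x.copy()
                    xp[j] += h
                    J[:, j] = (F(xp) - f) / h
                x = x - np.linalg.solve(J, f)
            return x

        # Toy stand-in for the discretized film equations; root at x = (1, 1).
        F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] - x[1]**3])
        print(newton_raphson(F, [2.0, 0.5]))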

  18. Rigorous upper bounds for transport due to passive advection by inhomogeneous turbulence

    International Nuclear Information System (INIS)

    Krommes, J.A.; Smith, R.A.

    1987-05-01

    A variational procedure, due originally to Howard and explored by Busse and others for self-consistent turbulence problems, is employed to determine rigorous upper bounds for the advection of a passive scalar through an inhomogeneous turbulent slab with arbitrary generalized Reynolds number R and Kubo number K. In the basic version of the method, the steady-state energy balance is used as a constraint; the resulting bound, though rigorous, is independent of K. A pedagogical reference model (one dimension, K = ∞) is described in detail; the bound compares favorably with the exact solution. The direct-interaction approximation is also worked out for this model; it is somewhat more accurate than the bound, but requires considerably more labor to solve. For the basic bound, a general formalism is presented for several dimensions, finite correlation length, and reasonably general boundary conditions. Part of the general method, in which a Green's function technique is employed, applies to self-consistent as well as to passive problems, and thereby generalizes previous results in the fluid literature. The formalism is extended for the first time to include time-dependent constraints, and a bound is deduced which explicitly depends on K and has the correct physical scalings in all regimes of R and K. Two applications from the theory of turbulent plasmas are described: flux in velocity space, and test particle transport in stochastic magnetic fields. For the velocity space problem the simplest bound reproduces Dupree's original scaling for the strong turbulence diffusion coefficient. For the case of stochastic magnetic fields, the scaling of the bounds is described for the magnetic diffusion coefficient as well as for the particle diffusion coefficient in the so-called collisionless, fluid, and double-streaming regimes.

  19. Bringing scientific rigor to community-developed programs in Hong Kong.

    Science.gov (United States)

    Fabrizio, Cecilia S; Hirschmann, Malia R; Lam, Tai Hing; Cheung, Teresa; Pang, Irene; Chan, Sophia; Stewart, Sunita M

    2012-12-31

    This paper describes efforts to generate evidence for community-developed programs to enhance family relationships in the Chinese culture of Hong Kong, within the framework of community-based participatory research (CBPR). The CBPR framework was applied to help maximize the development of the intervention and the public health impact of the studies, while enhancing the capabilities of the social service sector partners. Four academic-community research teams explored the process of designing and implementing randomized controlled trials in the community. In addition to the expected cultural barriers between teams of academics and community practitioners, with their different outlooks, concerns and languages, the team navigated issues in utilizing the principles of CBPR unique to this Chinese culture. Eventually the team developed tools for adaptation, such as an emphasis on building the relationship while respecting role delineation and an iterative process of defining the non-negotiable parameters of research design while maintaining scientific rigor. Lessons learned include the risk of underemphasizing the size of the operational and skills shift between usual agency practices and research studies, the importance of minimizing non-negotiable parameters in implementing rigorous research designs in the community, and the need to view community capacity enhancement as a long term process. The four pilot studies under the FAMILY Project demonstrated that nuanced design adaptations, such as wait list controls and shorter assessments, better served the needs of the community and led to the successful development and vigorous evaluation of a series of preventive, family-oriented interventions in the Chinese culture of Hong Kong.

  20. Rigor mortis development at elevated temperatures induces pale exudative turkey meat characteristics.

    Science.gov (United States)

    McKee, S R; Sams, A R

    1998-01-01

    Development of rigor mortis at elevated post-mortem temperatures may contribute to turkey meat characteristics similar to those found in pale, soft, exudative pork. To evaluate this effect, 36 Nicholas tom turkeys were processed at 19 wk of age and placed in water at 40, 20, or 0 C immediately after evisceration. Pectoralis muscle samples were taken at 15 min, 30 min, 1 h, 2 h, and 4 h post-mortem and analyzed for R-value (an indirect measure of adenosine triphosphate), glycogen, pH, color, and sarcomere length. At 4 h, the remaining intact Pectoralis muscle was harvested, aged on ice for 23 h, and analyzed for drip loss, cook loss, shear values, and sarcomere length. By 15 min post-mortem, the 40 C treatment had higher R-values, which persisted through 4 h. By 1 h, the pH and glycogen levels of the 40 C treatment were lower than those of the 0 C treatment but did not differ from those of the 20 C treatment. Increased L* values indicated that color became more pale by 2 h post-mortem in the 40 C treatment compared with the 20 and 0 C treatments. Drip loss, cook loss, and shear values were increased, whereas sarcomere lengths were decreased, as a result of the 40 C treatment. These findings suggest that elevated post-mortem temperatures during processing accelerate rigor mortis and produce biochemical changes in the muscle that result in pale, exudative meat characteristics in turkey.

  1. Early rigorous control interventions can largely reduce dengue outbreak magnitude: experience from Chaozhou, China

    Directory of Open Access Journals (Sweden)

    Tao Liu

    2017-08-01

    Full Text Available Abstract Background Dengue fever is a severe public health challenge in south China. A dengue outbreak was reported in Chaozhou city, China in 2015. Intensified interventions were implemented by the government to control the epidemic. However, it remains unknown to what degree the intensified control measures reduced the size of the epidemic, and when such measures should be initiated to reduce the risk of large dengue outbreaks developing. Methods We selected Xiangqiao district as the study setting because the majority of the indigenous cases (90.6%) in Chaozhou city were from this district. The numbers of daily indigenous dengue cases in 2015 were collected through the national infectious diseases and vectors surveillance system, and daily Breteau Index (BI) data were reported by the local public health department. We used a compartmental dynamic SEIR (Susceptible, Exposed, Infected and Removed) model to assess the effectiveness of control interventions and to evaluate the effect of intervention timing on the dengue epidemic. Results A total of 1250 indigenous dengue cases was reported from Xiangqiao district. The results of SEIR modeling using BI as an indicator of actual control interventions showed a total of 1255 dengue cases, which is close to the reported number (n = 1250). The size and duration of the outbreak were highly sensitive to the intensity and timing of interventions: the earlier and more rigorously the control interventions were implemented, the more effective they were. Even if the interventions were initiated several weeks after the onset of the dengue outbreak, the interventions were shown to greatly impact the prevalence and duration of the outbreak. Conclusions This study suggests that early implementation of rigorous dengue interventions can effectively reduce the epidemic size and shorten the epidemic duration.

  2. Standards and Methodological Rigor in Pulmonary Arterial Hypertension Preclinical and Translational Research.

    Science.gov (United States)

    Provencher, Steeve; Archer, Stephen L; Ramirez, F Daniel; Hibbert, Benjamin; Paulin, Roxane; Boucherat, Olivier; Lacasse, Yves; Bonnet, Sébastien

    2018-03-30

    Despite advances in our understanding of the pathophysiology and the management of pulmonary arterial hypertension (PAH), significant therapeutic gaps remain for this devastating disease. Yet, few innovative therapies beyond the traditional pathways of endothelial dysfunction have reached clinical trial phases in PAH. Although there are inherent limitations of the currently available models of PAH, the leaky pipeline of innovative therapies relates, in part, to flawed preclinical research methodology, including lack of rigour in trial design, incomplete invasive hemodynamic assessment, and lack of careful translational studies that replicate randomized controlled trials in humans with attention to adverse effects and benefits. Rigorous methodology should include the use of prespecified eligibility criteria, sample sizes that permit valid statistical analysis, randomization, blinded assessment of standardized outcomes, and transparent reporting of results. Better design and implementation of preclinical studies can minimize inherent flaws in the models of PAH, reduce the risk of bias, and enhance external validity and our ability to distinguish truly promising therapies from many false-positive or overstated leads. Ideally, preclinical studies should use advanced imaging, study several preclinical pulmonary hypertension models, or correlate rodent and human findings, and consider the fate of the right ventricle, which is the major determinant of prognosis in human PAH. Although these principles are widely endorsed, empirical evidence suggests that such rigor is often lacking in pulmonary hypertension preclinical research. The present article discusses the pitfalls in the design of preclinical pulmonary hypertension trials and discusses opportunities to create preclinical trials with improved predictive value in guiding early-phase drug development in patients with PAH, which will need support not only from researchers, peer reviewers, and editors but also from

  3. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Science.gov (United States)

    Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical

  4. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Directory of Open Access Journals (Sweden)

    David C Pavlacky

    Full Text Available Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous

  5. Effects of Pre and Post-Rigor Marinade Injection on Some Quality Parameters of Longissimus Dorsi Muscles

    Science.gov (United States)

    Fadıloğlu, Eylem Ezgi; Serdaroğlu, Meltem

    2018-01-01

    Abstract This study was conducted to evaluate the effects of pre- and post-rigor marinade injection on some quality parameters of Longissimus dorsi (LD) muscles. Three marinade formulations were prepared: 2% NaCl, 2% NaCl + 0.5 M lactic acid, and 2% NaCl + 0.5 M sodium lactate. Marinade uptake, pH, free water, cooking loss, drip loss, and color properties were analyzed. Injection time had a significant effect on the marinade uptake of the samples: regardless of marinade formulation, the marinade uptake of pre-rigor injected samples was higher than that of post-rigor samples. Injection of sodium lactate increased the pH of the samples, whereas lactic acid injection decreased it. Marinade treatment and storage period had significant effects on cooking loss. At each evaluation period, the interaction between marinade treatment and injection time had a different effect on free water content. Storage period and marinade treatment had significant effects on drip loss, which increased in all samples during storage. Throughout storage, the lowest CIE L* value was found in pre-rigor samples injected with sodium lactate. Lactic acid injection caused color fading in both pre-rigor and post-rigor samples. The interaction between marinade treatment and storage period was statistically significant (p<0.05). At days 0 and 3, the lowest CIE b* values were obtained in pre-rigor samples injected with sodium lactate, with no differences among the other samples; at day 6, no significant differences were found in the CIE b* values of any samples. PMID:29805282

  6. The effect of rigor mortis on the passage of erythrocytes and fluid through the myocardium of isolated dog hearts.

    Science.gov (United States)

    Nevalainen, T J; Gavin, J B; Seelye, R N; Whitehouse, S; Donnell, M

    1978-07-01

    The effect of normal and artificially induced rigor mortis on the vascular passage of erythrocytes and fluid through isolated dog hearts was studied. Increased rigidity of 6-mm thick transmural sections through the centre of the posterior papillary muscle was used as an indication of rigor. The perfusibility of the myocardium was tested by injecting 10 ml of 1% sodium fluorescein in Hanks solution into the circumflex branch of the left coronary artery. In prerigor hearts (20 minute incubation) fluorescein perfused the myocardium evenly whether or not it was preceded by an injection of 10 ml of heparinized dog blood. Rigor mortis developed in all hearts after 90 minutes incubation or within 20 minutes of perfusing the heart with 50 ml of 5 mM iodoacetate in Hanks solution. Fluorescein injected into hearts in rigor did not enter the posterior papillary muscle and adjacent subendocardium whether or not it was preceded by heparinized blood. Thus the vascular occlusion caused by rigor in the dog heart appears to be so effective that it prevents flow into the subendocardium of small soluble ions such as fluorescein.

  7. Layout optimization of DRAM cells using rigorous simulation model for NTD

    Science.gov (United States)

    Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe

    2014-03-01

    scanning electron microscope (SEM) measurements. High resist impact and difficult model data acquisition demand a simulation model that is capable of extrapolating reliably beyond its calibration dataset. We use rigorous simulation models to provide that predictive performance. We have discussed the need for a rigorous mask optimization process for DRAM contact cell layouts, yielding mask layouts that are optimal in process performance, mask manufacturability, and accuracy. In this paper, we have shown the step-by-step process, from analytical illumination-source derivation and an NTD- and application-tailored model calibration to layout optimization such as OPC and SRAF placement. Finally, the work has been verified with simulation and experimental results on wafer.

  8. A CUMULATIVE MIGRATION METHOD FOR COMPUTING RIGOROUS TRANSPORT CROSS SECTIONS AND DIFFUSION COEFFICIENTS FOR LWR LATTICES WITH MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi

    2016-05-01

    A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite medium of hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
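
    The “out-scatter” correction that the paper critiques reduces to a one-line formula; comparing it against rigorously computed values is what exposes its inaccuracy for hydrogenous media. A sketch with invented, illustrative numbers (macroscopic cross sections in 1/cm):

        # Out-scatter transport correction: sigma_tr = sigma_t - mu_bar * sigma_s,
        # with mu_bar the average scattering cosine (~2/(3A), so ~0.67 for hydrogen).
        def outscatter_transport(sigma_t, sigma_s, mu_bar):
            return sigma_t - mu_bar * sigma_s

        sigma_t, sigma_s, mu_bar = 1.20, 1.10, 0.67   # assumed values
        sigma_tr = outscatter_transport(sigma_t, sigma_s, mu_bar)
        D = 1.0 / (3.0 * sigma_tr)                    # diffusion coefficient, cm
        print(round(sigma_tr, 4), round(D, 4))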

  9. Building an Evidence Base to Inform Interventions for Pregnant and Parenting Adolescents: A Call for Rigorous Evaluation

    Science.gov (United States)

    Burrus, Barri B.; Scott, Alicia Richmond

    2012-01-01

    Adolescent parents and their children are at increased risk for adverse short- and long-term health and social outcomes. Effective interventions are needed to support these young families. We studied the evidence base and found a dearth of rigorously evaluated programs. Strategies from successful interventions are needed to inform both intervention design and policies affecting these adolescents. The lack of rigorous evaluations may be attributable to inadequate emphasis on and sufficient funding for evaluation, as well as to challenges encountered by program evaluators working with this population. More rigorous program evaluations are urgently needed to provide scientifically sound guidance for programming and policy decisions. Evaluation lessons learned have implications for other vulnerable populations. PMID:22897541

  10. Rigorous classification and carbon accounting principles for low and Zero Carbon Cities

    International Nuclear Information System (INIS)

    Kennedy, Scott; Sgouridis, Sgouris

    2011-01-01

    A large number of communities, new developments, and regions aim to lower their carbon footprint and aspire to become 'zero carbon' or 'Carbon Neutral.' Yet there are neither clear definitions for the scope of emissions that such a label would address on an urban scale, nor is there a process for qualifying the carbon reduction claims. This paper addresses the question of how to define a Zero Carbon, Low Carbon, or Carbon Neutral urban development by proposing hierarchical emissions categories with three levels: internal emissions based on the geographical boundary, external emissions directly caused by core municipal activities, and internal or external emissions due to non-core activities. Each level implies a different carbon management strategy (eliminating, balancing, and minimizing, respectively) needed to meet a Net Zero Carbon designation. The trade-offs, implications, and difficulties of implementing carbon debt accounting based upon these definitions are further analyzed. - Highlights: → A gap exists in comprehensive and standardized accounting methods for urban carbon emissions. → We propose a comprehensive and rigorous City Framework for Carbon Accounting (CiFCA). → CiFCA classifies emissions hierarchically with corresponding carbon management strategies. → Adoption of CiFCA allows for meaningful comparisons of claimed performance of eco-cities.
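
    The three-level hierarchy lends itself to simple ledger-style bookkeeping. The sketch below paraphrases the proposed levels and net-zero test; the level names, the test, and all numbers are our own illustrative assumptions:

        # Toy ledger for a three-level urban carbon hierarchy (values in ktCO2e/yr).
        emissions = {
            "internal": 120.0,        # within the geographic boundary: eliminate
            "external_core": 80.0,    # caused by core municipal activities: balance
            "noncore": 40.0,          # non-core internal/external activities: minimize
        }
        offsets = {"external_core": 80.0}

        def net_zero(e, o):
            internal_eliminated = e["internal"] == 0.0
            core_balanced = e["external_core"] - o.get("external_core", 0.0) <= 0.0
            return internal_eliminated and core_balanced

        print(net_zero(emissions, offsets))   # False: internal emissions remain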

  11. Revisiting the constant growth angle: Estimation and verification via rigorous thermal modeling

    Science.gov (United States)

    Virozub, Alexander; Rasin, Igal G.; Brandon, Simon

    2008-12-01

    Methods for estimating growth angle (θgr) values based on the a posteriori analysis of directionally solidified material (e.g., drops) often assume negligible gravitational effects as well as a planar solid/liquid interface during solidification. We relax both of these assumptions when using experimental drop shapes from the literature to estimate the relevant growth angles at the initial stages of solidification. Assuming these values to be constant, we use them as input to a rigorous heat transfer and solidification model of the growth process. This model, which is shown to reproduce the experimental shape of a solidified sessile water drop using the literature value θgr = 0°, yields excellent agreement with experimental profiles using our estimated values for silicon (θgr = 10°) and germanium (θgr = 14.3°) solidifying on an isotropic crystalline surface. The effect of gravity on the solidified drop shape is found to be significant in the case of germanium, suggesting that gravity should either be included in the analysis or that care should be taken that the relevant Bond number is truly small in each measurement. The planar solidification-interface assumption is found to be unjustified. Although this issue is important when simulating the inflection point in the profile of the solidified water drop, there are indications that solidified drop shapes (at least in the case of silicon) may be fairly insensitive to the shape of this interface.
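
    The gravity caveat can be checked with a one-line Bond number estimate, Bo = rho * g * R^2 / sigma: gravity is negligible only when Bo << 1. The property values for molten germanium below are rough, assumed figures:

        rho, g, sigma = 5600.0, 9.81, 0.6   # kg/m^3, m/s^2, N/m (assumed for molten Ge)
        for R_mm in (0.5, 2.0, 5.0):        # drop radius
            R = R_mm * 1e-3
            print(f"R = {R_mm} mm -> Bo = {rho * g * R**2 / sigma:.3f}")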

  12. EarthLabs Modules: Engaging Students In Extended, Rigorous Investigations Of The Ocean, Climate and Weather

    Science.gov (United States)

    Manley, J.; Chegwidden, D.; Mote, A. S.; Ledley, T. S.; Lynds, S. E.; Haddad, N.; Ellins, K.

    2016-02-01

    EarthLabs, envisioned as a national model for high school Earth or Environmental Science lab courses, is adaptable for both undergraduate and middle school students. The collection includes ten online modules that combine to feature a global view of our planet as a dynamic, interconnected system, engaging learners in extended investigations. EarthLabs supports state and national guidelines, including the NGSS, for science content. Four modules directly guide students to discover vital aspects of the oceans, while five other modules incorporate the ocean sciences to complete an understanding of Earth's climate system. Students gain a broad perspective on the key role oceans play in the fishing industry, droughts, coral reefs, hurricanes, the carbon cycle, and life on land and in the seas, and in driving our changing climate, by interacting with scientific research data, manipulating satellite imagery, numerical data, computer visualizations, experiments, and video tutorials. Students explore Earth system processes and build quantitative skills that enable them to objectively evaluate scientific findings for themselves as they move through ordered sequences that guide the learning. As a robust collection, the EarthLabs modules engage students in extended, rigorous investigations allowing a deeper understanding of the ocean, climate, and weather. This presentation provides an overview of the ten curriculum modules that comprise the EarthLabs collection, developed by TERC and found at http://serc.carleton.edu/earthlabs/index.html. Evaluation data on the collection's effectiveness and use in secondary education classrooms will be summarized.

  13. A Rigorous Investigation on the Ground State of the Penson-Kolb Model

    Science.gov (United States)

    Yang, Kai-Hua; Tian, Guang-Shan; Han, Ru-Qi

    2003-05-01

    By using either numerical calculations or analytical methods, such as the bosonization technique, the ground state of the Penson-Kolb model has been previously studied by several groups. Some physicists argued that, as far as the existence of superconductivity in this model is concerned, it is canonically equivalent to the negative-U Hubbard model. However, others did not agree. In the present paper, we investigate this model by an independent and rigorous approach. We show that the ground state of the Penson-Kolb model is nondegenerate and has a nonvanishing overlap with the ground state of the negative-U Hubbard model. Furthermore, we also show that the ground states of both models have the same good quantum numbers and may have superconducting long-range order at the same momentum q = 0. Our results support the equivalence between these models. The project was partially supported by the Special Funds for Major State Basic Research Projects (G20000365) and by the National Natural Science Foundation of China under Grant No. 10174002.
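
    For orientation, the two models being compared are commonly written in the following standard lattice forms (a sketch using one common convention from the literature; coupling names and signs vary between papers):

        H_{\mathrm{PK}} = -t \sum_{\langle ij \rangle, \sigma} \bigl( c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.} \bigr)
                          - J \sum_{\langle ij \rangle} \bigl( c_{i\uparrow}^{\dagger} c_{i\downarrow}^{\dagger} c_{j\downarrow} c_{j\uparrow} + \mathrm{h.c.} \bigr),

        H_{\mathrm{Hub}} = -t \sum_{\langle ij \rangle, \sigma} \bigl( c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.} \bigr)
                           + U \sum_{i} n_{i\uparrow} n_{i\downarrow}, \qquad U < 0,

    where the pair-hopping coupling J > 0 in the Penson-Kolb model plays the role that the on-site attraction U < 0 plays in the Hubbard model.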

  14. Complexities and Controversies in Himalayan Research: A Call for Collaboration and Rigor for Better Data

    Directory of Open Access Journals (Sweden)

    Surendra P. Singh

    2015-11-01

    The Himalaya range encompasses enormous variation in elevation, precipitation, biodiversity, and patterns of human livelihoods. These mountains modify the regional climate in complex ways; the ecosystem services they provide influence the lives of almost 1 billion people in 8 countries. However, our understanding of these ecosystems remains rudimentary. The 2007 Intergovernmental Panel on Climate Change report that erroneously predicted a date for widespread glacier loss exposed how little was known of Himalayan glaciers. Recent research shows how variably glaciers respond to climate change in different Himalayan regions. Alarmist theories are not new. In the 1980s, the Theory of Himalayan Degradation warned of complete forest loss and devastation of downstream areas, an eventuality that never occurred. More recently, the debate on hydroelectric construction appears driven by passions rather than science. Poor data, hasty conclusions, and bad science plague Himalayan research. Rigorous sampling, involvement of civil society in data collection, and long-term collaborative research involving institutions from across the Himalaya are essential to improve knowledge of this region.

  15. Rigorous Mathematical Thinking Approach to Enhance Students’ Mathematical Creative and Critical Thinking Abilities

    Science.gov (United States)

    Hidayat, D.; Nurlaelah, E.; Dahlan, J. A.

    2017-09-01

    Mathematical creative thinking and critical thinking are two abilities that need to be developed in the learning of mathematics; therefore, efforts need to be made to design learning that is capable of developing both. The purpose of this research is to examine the mathematical creative and critical thinking abilities of students taught with the rigorous mathematical thinking (RMT) approach and of students taught with an expository approach. This research was a quasi-experiment with a control group pretest-posttest design. The population was all 11th-grade students in one senior high school in Bandung. The results showed that the achievement in mathematical creative and critical thinking abilities of students who received RMT was better than that of students who received the expository approach. The use of psychological tools and mediation, with the criteria of intentionality, reciprocity, and mediation of meaning in RMT, helps students develop the conditions for critical and creative processes. This achievement contributes to the development of integrated learning design for students' critical and creative thinking processes.

  16. A Rigorous Theory of Many-Body Prethermalization for Periodically Driven and Closed Quantum Systems

    Science.gov (United States)

    Abanin, Dmitry; De Roeck, Wojciech; Ho, Wen Wei; Huveneers, François

    2017-09-01

    Prethermalization refers to the transient phenomenon where a system thermalizes according to a Hamiltonian that is not the generator of its evolution. We provide here a rigorous framework for quantum spin systems where prethermalization is exhibited for very long times. First, we consider quantum spin systems under periodic driving at high frequency ν. We prove that up to a quasi-exponential time τ* ∼ e^{cν/log³ν}, the system barely absorbs energy. Instead, there is an effective local Hamiltonian D̂ that governs the time evolution up to τ*, and hence this effective Hamiltonian is a conserved quantity up to τ*. Next, we consider systems without driving, but with a separation of energy scales in the Hamiltonian. A prime example is the Fermi-Hubbard model where the interaction U is much larger than the hopping J. Also here we prove the emergence of an effective conserved quantity, different from the Hamiltonian, up to a time τ* that is (almost) exponential in U/J.

  17. Coupling of Rigor Mortis and Intestinal Necrosis during C. elegans Organismal Death

    Directory of Open Access Journals (Sweden)

    Evgeniy R. Galimov

    2018-03-01

    Organismal death is a process of systemic collapse whose mechanisms are less well understood than those of cell death. We previously reported that death in C. elegans is accompanied by a calcium-propagated wave of intestinal necrosis, marked by a wave of blue autofluorescence (death fluorescence). Here, we describe another feature of organismal death, a wave of body wall muscle contraction, or death contraction (DC). This phenomenon is accompanied by a wave of intramuscular Ca2+ release and, subsequently, of intestinal necrosis. Correlation of directions of the DC and intestinal necrosis waves implies coupling of these death processes. Long-lived insulin/IGF-1-signaling mutants show reduced DC and delayed intestinal necrosis, suggesting possible resistance to organismal death. DC resembles mammalian rigor mortis, a postmortem necrosis-related process in which Ca2+ influx promotes muscle hyper-contraction. In contrast to mammals, DC is an early rather than a late event in C. elegans organismal death.

  18. Inosine-5'-monophosphate is a candidate agent to resolve rigor mortis of skeletal muscle.

    Science.gov (United States)

    Matsuishi, Masanori; Tsuji, Mariko; Yamaguchi, Megumi; Kitamura, Natsumi; Tanaka, Sachi; Nakamura, Yukinobu; Okitani, Akihiro

    2016-11-01

    The object of the present study was to reveal the action of inosine-5'-monophosphate (IMP) toward myofibrils in postmortem muscles. IMP solubilized isolated actomyosin within a narrow range of KCl concentration, 0.19-0.20 mol/L, because of the dissociation of actomyosin into actin and myosin, but it did not solubilize the proteins in myofibrils with 0.2 mol/L KCl. However, IMP could solubilize both proteins in myofibrils with 0.2 mol/L KCl in the presence of 1 mmol/L pyrophosphate or 1.0-3.3 mmol/L adenosine-5'-diphosphate (ADP). Thus, we presumed that pyrophosphate and ADP released the thin filaments (composed of actin) and the thick filaments (composed of myosin) from the restraints of the myofibrils, and then both filaments were solubilized through the IMP-induced dissociation of actomyosin. We therefore concluded that IMP is a candidate agent to resolve rigor mortis because of its ability to break the association between thick and thin filaments. © 2016 Japanese Society of Animal Science.

  19. Alternative pre-rigor foreshank positioning can improve beef shoulder muscle tenderness.

    Science.gov (United States)

    Grayson, A L; Lawrence, T E

    2013-09-01

    Thirty beef carcasses were harvested and the foreshank of each side was independently positioned (cranial, natural, parallel, or caudal) 1 h post-mortem to determine the effect of foreshank angle at rigor mortis on the sarcomere length and tenderness of six beef shoulder muscles. The infraspinatus (IS), pectoralis profundus (PP), serratus ventralis (SV), supraspinatus (SS), teres major (TM) and triceps brachii (TB) were excised 48 h post-mortem for Warner-Bratzler shear force (WBSF) and sarcomere length evaluations. All muscles except the SS had altered (P<0.05) sarcomere lengths between positions; the cranial position resulted in the longest sarcomeres for the SV and TB muscles whilst the natural position had longer sarcomeres for the PP and TM muscles. The SV from the cranial position had lower (P<0.05) shear than the caudal position and TB from the natural position had lower (P<0.05) shear than the parallel or caudal positions. Sarcomere length was moderately correlated (r=-0.63; P<0.01) to shear force. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Coupling of Rigor Mortis and Intestinal Necrosis during C. elegans Organismal Death.

    Science.gov (United States)

    Galimov, Evgeniy R; Pryor, Rosina E; Poole, Sarah E; Benedetto, Alexandre; Pincus, Zachary; Gems, David

    2018-03-06

    Organismal death is a process of systemic collapse whose mechanisms are less well understood than those of cell death. We previously reported that death in C. elegans is accompanied by a calcium-propagated wave of intestinal necrosis, marked by a wave of blue autofluorescence (death fluorescence). Here, we describe another feature of organismal death, a wave of body wall muscle contraction, or death contraction (DC). This phenomenon is accompanied by a wave of intramuscular Ca2+ release and, subsequently, of intestinal necrosis. Correlation of directions of the DC and intestinal necrosis waves implies coupling of these death processes. Long-lived insulin/IGF-1-signaling mutants show reduced DC and delayed intestinal necrosis, suggesting possible resistance to organismal death. DC resembles mammalian rigor mortis, a postmortem necrosis-related process in which Ca2+ influx promotes muscle hyper-contraction. In contrast to mammals, DC is an early rather than a late event in C. elegans organismal death. VIDEO ABSTRACT. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  1. Rigor mortis and the epileptology of Charles Bland Radcliffe (1822-1889).

    Science.gov (United States)

    Eadie, M J

    2007-03-01

    Charles Bland Radcliffe (1822-1889) was one of the physicians who made major contributions to the literature on epilepsy in the mid-19th century, when the modern understanding of the disorder was beginning to emerge, particularly in England. His experimental work was concerned with the electrical properties of frog muscle and nerve. Early in his career he related his experimental findings to the phenomenon of rigor mortis and concluded that, contrary to the general belief of the time, muscle contraction depended on the cessation of nerve input, and muscle relaxation on its presence. He adhered to this counter-intuitive interpretation throughout his life and, based on it, produced an epileptology that was very different from those of his contemporaries and successors. His interpretations were ultimately without any direct influence on the advance of knowledge. However, his idea that withdrawal of an inhibitory process released previously suppressed muscular contractile powers, when applied to the brain rather than the periphery of the nervous system, permitted Hughlings Jackson to explain certain psychological phenomena that accompany or follow some epileptic events. As well, Radcliffe was one of the chief early advocates for potassium bromide, the first effective anticonvulsant.

  2. Improving students’ mathematical critical thinking through rigorous teaching and learning model with informal argument

    Science.gov (United States)

    Hamid, H.

    2018-01-01

    The purpose of this study is to analyze the improvement of students' mathematical critical thinking (CT) ability in a Real Analysis course taught using the Rigorous Teaching and Learning (RTL) model with informal argument. In addition, this research also attempted to understand students' CT in relation to their initial mathematical ability (IMA). This study was conducted at a private university in the academic year 2015/2016. The study employed the quasi-experimental method with a pretest-posttest control group design. The participants of the study were 83 students, of whom 43 were in the experimental group and 40 in the control group. The findings showed that students in the experimental group outperformed students in the control group in mathematical CT ability at every IMA level (high, medium, low) in learning Real Analysis. In addition, among students with medium IMA, the improvement in mathematical CT ability of those exposed to the RTL model with informal argument was greater than that of those exposed to conventional instruction (CI). There was no interaction effect between learning model (RTL vs. CI) and IMA level (high, medium, low) on the improvement of mathematical CT ability. Finally, at all IMA levels (high, medium, low) there was a significantly greater improvement in the achievement of all indicators of mathematical CT ability among students exposed to the RTL model with informal argument than among students exposed to CI.

  3. Control group design: enhancing rigor in research of mind-body therapies for depression.

    Science.gov (United States)

    Kinser, Patricia Anne; Robins, Jo Lynne

    2013-01-01

    Although a growing body of research suggests that mind-body therapies may be appropriate to integrate into the treatment of depression, studies consistently lack methodological sophistication particularly in the area of control groups. In order to better understand the relationship between control group selection and methodological rigor, we provide a brief review of the literature on control group design in yoga and tai chi studies for depression, and we discuss challenges we have faced in the design of control groups for our recent clinical trials of these mind-body complementary therapies for women with depression. To address the multiple challenges of research about mind-body therapies, we suggest that researchers should consider 4 key questions: whether the study design matches the research question; whether the control group addresses performance, expectation, and detection bias; whether the control group is ethical, feasible, and attractive; and whether the control group is designed to adequately control for nonspecific intervention effects. Based on these questions, we provide specific recommendations about control group design with the goal of minimizing bias and maximizing validity in future research.

  4. Methodological Challenges in Sustainability Science: A Call for Method Plurality, Procedural Rigor and Longitudinal Research

    Directory of Open Access Journals (Sweden)

    Henrik von Wehrden

    2017-02-01

    Sustainability science encompasses a unique field that is defined through its purpose, the problem it addresses, and its solution-oriented agenda. However, this orientation creates significant methodological challenges. In this discussion paper, we conceptualize sustainability problems as wicked problems to tease out the key challenges that sustainability science is facing if scientists intend to deliver on its solution-oriented agenda. Building on the available literature, we discuss three aspects that demand increased attention for advancing sustainability science: (1) methods with higher diversity and complementarity are needed to increase the chance of deriving solutions to the unique aspects of wicked problems; for instance, mixed methods approaches are potentially better suited to allow for an approximation of solutions, since they cover wider arrays of knowledge; (2) methodologies capable of dealing with wicked problems demand strict procedural and ethical guidelines, in order to ensure their integration potential; for example, learning from solution implementation in different contexts requires increased comparability between research approaches while carefully addressing issues of legitimacy and credibility; and (3) approaches are needed that allow for longitudinal research, since wicked problems are continuous and solutions can only be diagnosed in retrospect; for example, complex dynamics of wicked problems play out across temporal patterns that are not necessarily aligned with the common timeframe of participatory sustainability research. Taken together, we call for plurality in methodologies, emphasizing procedural rigor and the necessity of continuous research, to effectively address wicked problems as well as methodological challenges in sustainability science.

  5. A TRADITIONAL FALSE PROBLEM: THE RIGORISM OF KANTIAN MORAL AND POLITICAL PHILOSOPHY. THE CASE OF VERACITY

    Directory of Open Access Journals (Sweden)

    MIHAI NOVAC

    2012-05-01

    According to many of its traditional critics, the main weakness of Kantian moral-political philosophy resides in its impossibility of admitting exceptions. In nuce, all these critical positions have converged, despite their reciprocal heterogeneity, in the so-called accusation of moral rigorism (unjustly, I would say) directed against Kant's moral and political perspective. As such, I will seek to defend Kant against this type of criticism by showing that any perspective attempting to evaluate Kant's ethics on the grounds of its capacity or incapacity to admit exceptions is a priori doomed to lack of sense, in its two logical alternatives: either as nonsense (predicating about empty notions) or as tautology (formulating ad hoc definitions and criteria with respect to Kant's system and then claiming that it does not hold with respect to them). Essentially, I will try to show that Kantian ethics can organically immunize itself epistemologically against any such so-called antirigorist criticism.

  6. Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.

    Directory of Open Access Journals (Sweden)

    Sophie Marchal

    Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scents presented in the sample is similar to that presented in the lineups, and with specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Our data should also convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately.

  7. Applying rigorous decision analysis methodology to optimization of a tertiary recovery project

    International Nuclear Information System (INIS)

    Wackowski, R.K.; Stevens, C.E.; Masoner, L.O.; Attanucci, V.; Larson, J.L.; Aslesen, K.S.

    1992-01-01

    This paper reports that the intent of this study was to rigorously examine all of the possible expansion, investment, operational, and CO2 purchase/recompression scenarios (over 2500) to yield a strategy that would maximize the net present value of the CO2 project at the Rangely Weber Sand Unit. Traditional methods of project management, which involve analyzing large numbers of single-case economic evaluations, were found to be too cumbersome and inaccurate for an analysis of this scope. The decision analysis methodology utilized a statistical approach which resulted in a range of economic outcomes. Advantages of the decision analysis methodology included: a more organized approach to the classification of decisions and uncertainties; a clear sensitivity method to identify the key uncertainties; an application of probabilistic analysis through the decision tree; and a comprehensive display of the range of possible outcomes for communication to decision makers. This range made it possible to consider the upside and downside potential of the options and to weigh these against the Unit's strategies. Savings in the time and manpower required to complete the study were also realized.
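
    As a toy illustration of the probabilistic decision-tree idea described above (a minimal sketch, not the Rangely study's actual model; the strategy names, NPV outcomes, and probabilities are invented placeholders), outcome probabilities can be propagated through a small tree and strategies compared by expected net present value:

        # Toy decision tree: each strategy maps to (probability, NPV in $MM) outcomes.
        # All values are illustrative placeholders.
        strategies = {
            "expand_co2_purchase": [(0.3, 120.0), (0.5, 60.0), (0.2, -40.0)],
            "status_quo": [(0.6, 30.0), (0.4, 10.0)],
        }

        def expected_npv(outcomes):
            """Probability-weighted net present value over a strategy's outcomes."""
            assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
            return sum(p * npv for p, npv in outcomes)

        for name, outcomes in strategies.items():
            print(f"{name}: E[NPV] = {expected_npv(outcomes):.1f} $MM")
        print("choose:", max(strategies, key=lambda s: expected_npv(strategies[s])))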

  8. Rigorous numerical study of strong microwave photon-magnon coupling in all-dielectric magnetic multilayers

    Energy Technology Data Exchange (ETDEWEB)

    Maksymov, Ivan S., E-mail: ivan.maksymov@uwa.edu.au [School of Physics M013, The University of Western Australia, 35 Stirling Highway, Crawley, WA 6009 (Australia); ARC Centre of Excellence for Nanoscale BioPhotonics, School of Applied Sciences, RMIT University, Melbourne, VIC 3001 (Australia); Hutomo, Jessica; Nam, Donghee; Kostylev, Mikhail [School of Physics M013, The University of Western Australia, 35 Stirling Highway, Crawley, WA 6009 (Australia)

    2015-05-21

    We demonstrate theoretically a ∼350-fold local enhancement of the intensity of the in-plane microwave magnetic field in multilayered structures made from a magneto-insulating yttrium iron garnet (YIG) layer sandwiched between two non-magnetic layers with a high dielectric constant matching that of YIG. The enhancement is predicted for the excitation regime when the microwave magnetic field is induced inside the multilayer by the transducer of a stripline Broadband Ferromagnetic Resonance (BFMR) setup. By means of a rigorous numerical solution of the Landau-Lifshitz-Gilbert equation, consistently with Maxwell's equations, we investigate the magnetisation dynamics in the multilayer. We reveal a strong photon-magnon coupling, which manifests itself as anti-crossing of the ferromagnetic resonance magnon mode supported by the YIG layer and the electromagnetic resonance mode supported by the whole multilayered structure. The frequency of the magnon mode depends on the external static magnetic field, which in our case is applied tangentially to the multilayer in the direction perpendicular to the microwave magnetic field induced by the stripline of the BFMR setup. The frequency of the electromagnetic mode is independent of the static magnetic field. Consequently, the predicted photon-magnon coupling is sensitive to the applied magnetic field and thus can be used in magnetically tuneable metamaterials based on simultaneously negative permittivity and permeability achievable thanks to the YIG layer. We also suggest that the predicted photon-magnon coupling may find applications in microwave quantum information systems.
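
    For reference, the equation solved numerically in such studies is the Landau-Lifshitz-Gilbert (LLG) equation, quoted here in its standard textbook form (a generic statement, not the authors' specific discretization):

        \frac{\partial \mathbf{M}}{\partial t}
          = -\gamma \, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
            + \frac{\alpha}{M_s} \, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t},

    where γ is the gyromagnetic ratio, α the Gilbert damping constant, M_s the saturation magnetisation, and H_eff the effective field; in the setting above, H_eff includes the stripline-induced microwave field, which is what couples the LLG dynamics to Maxwell's equations.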

  9. Rigorous numerical modeling of scattering-type scanning near-field optical microscopy and spectroscopy

    Science.gov (United States)

    Chen, Xinzhong; Lo, Chiu Fan Bowen; Zheng, William; Hu, Hai; Dai, Qing; Liu, Mengkun

    2017-11-01

    Over the last decade, scattering-type scanning near-field optical microscopy and spectroscopy have been widely used in nano-photonics and material research due to their fine spatial resolution and broad spectral range. A number of simplified analytical models have been proposed to quantitatively understand the tip-scattered near-field signal. However, a rigorous interpretation of the experimental results is still lacking at this stage. Numerical modeling, on the other hand, is mostly done by simulating the local electric field slightly above the sample surface, which only qualitatively represents the near-field signal rendered by the tip-sample interaction. In this work, we performed a more comprehensive numerical simulation which is based on realistic experimental parameters and signal extraction procedures. By directly comparing to the experiments as well as other simulation efforts, our methods offer a more accurate quantitative description of the near-field signal, paving the way for future studies of complex systems at the nanoscale.
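
    The signal-extraction step referred to above is, at its core, lock-in demodulation of the tip-scattered signal at harmonics of the tip tapping frequency. The toy Python sketch below illustrates only this demodulation idea; the distance dependence in near_field_signal is a hypothetical stand-in, not the full-wave simulation of the paper.

        import numpy as np

        omega = 2 * np.pi * 250e3                      # tapping frequency (illustrative)
        t = np.linspace(0, 40 * 2 * np.pi / omega, 20000, endpoint=False)
        z = 60e-9 * (1 + np.cos(omega * t)) / 2        # tip height sweep, 0..60 nm

        def near_field_signal(z):
            # Hypothetical stand-in: scattering decays nonlinearly with tip height.
            return 1.0 / (1.0 + z / 30e-9) ** 3

        s = near_field_signal(z)
        for n in (1, 2, 3):                            # demodulate at n * omega
            s_n = 2 * np.mean(s * np.exp(-1j * n * omega * t))
            print(f"|S_{n}| = {abs(s_n):.4f}")

    Higher harmonics suppress the background that varies only weakly with tip height, which is why experimental near-field signals are usually reported at the second or third harmonic.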

  10. Rigorous constraints on the matrix elements of the energy–momentum tensor

    Directory of Open Access Journals (Sweden)

    Peter Lowdon

    2017-11-01

    The structure of the matrix elements of the energy–momentum tensor plays an important role in determining the properties of the form factors A(q²), B(q²) and C(q²) which appear in the Lorentz covariant decomposition of the matrix elements. In this paper we apply a rigorous frame-independent distributional-matching approach to the matrix elements of the Poincaré generators in order to derive constraints on these form factors as q→0. In contrast to the literature, we explicitly demonstrate that the vanishing of the anomalous gravitomagnetic moment B(0) and the condition A(0)=1 are independent of one another, and that these constraints are not related to the specific properties or conservation of the individual Poincaré generators themselves, but are in fact a consequence of the physical on-shell requirement of the states in the matrix elements and the manner in which these states transform under Poincaré transformations.
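
    For context, the forward-limit constraints discussed here are often quoted as sum rules (standard statements from the literature, given for orientation; the paper's point is precisely to re-derive such constraints without appealing to generator-specific properties):

        A(0) = 1 \quad \text{(momentum sum rule)}, \qquad
        J = \tfrac{1}{2}\,\bigl[A(0) + B(0)\bigr] \;\Rightarrow\; B(0) = 0
        \quad \text{(vanishing anomalous gravitomagnetic moment)}.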

  11. Rigorous, robust and systematic: Qualitative research and its contribution to burn care. An integrative review.

    Science.gov (United States)

    Kornhaber, Rachel Anne; de Jong, A E E; McLean, L

    2015-12-01

    Qualitative methods are progressively being implemented by researchers for exploration within healthcare. However, there has been a longstanding and wide-ranging debate concerning the relative merits of qualitative research within the health care literature. This integrative review aimed to examine the contribution of qualitative research to burn care and subsequent rehabilitation. Studies were identified using an electronic search strategy across the databases PubMed, Cumulative Index of Nursing and Allied Health Literature (CINAHL), Excerpta Medica database (EMBASE) and Scopus for peer-reviewed primary research in English published from 2009 to April 2014, using Whittemore and Knafl's integrative review method as a guide for analysis. Of the 298 papers identified, 26 research papers met the inclusion criteria. Across all studies there was an average of 22 participants per study (range 6-53), conducted across 12 nations, focusing on burns prevention, paediatric burns, appropriate acquisition and delivery of burns care, pain, and the psychosocial implications of burns trauma. Careful and rigorous application of qualitative methodologies promotes and enriches the development of burns knowledge. In particular, the key elements of qualitative methodological process and its publication are critical in disseminating credible and methodologically sound qualitative research. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.

  12. Rigorous analysis of image force barrier lowering in bounded geometries: application to semiconducting nanowires

    International Nuclear Information System (INIS)

    Calahorra, Yonatan; Mendels, Dan; Epstein, Ariel

    2014-01-01

    Bounded geometries introduce a fundamental problem in calculating the image force barrier lowering of metal-wrapped semiconductor systems. In bounded geometries, the derivation of the barrier lowering requires calculating the reference energy of the system, when the charge is at the geometry center. In the following, we formulate and rigorously solve this problem; this allows combining the image force electrostatic potential with the band diagram of the bounded geometry. The suggested approach is applied to spheres as well as cylinders. Furthermore, although the expressions governing cylindrical systems are complex and can only be evaluated numerically, we present analytical approximations for the solution, which allow easy implementation in calculated band diagrams. The results are further used to calculate the image force barrier lowering of metal-wrapped cylindrical nanowires; calculations show that although the image force potential is stronger than that of planar systems, taking the complete band-structure into account results in a weaker effect of barrier lowering. Moreover, when considering small diameter nanowires, we find that the electrostatic effects of the image force exceed the barrier region, and influence the electronic properties of the nanowire core. This study is of interest to the nanowire community, and in particular for the analysis of nanowire I−V measurements where wrapped or omega-shaped metallic contacts are used. (paper)
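
    The planar textbook result that this work generalizes may help fix ideas. For a charge q at distance x from a metal plane in a medium of permittivity ε, the image potential and the resulting Schottky barrier lowering under an applied field E are (standard expressions, not the paper's bounded-geometry solution):

        V_{\mathrm{im}}(x) = -\frac{q^2}{16 \pi \varepsilon x}, \qquad
        \Delta\phi = \sqrt{\frac{qE}{4\pi\varepsilon}}.

    The paper's contribution is the analogue of these expressions for spheres and cylinders, where the reference energy with the charge at the geometry centre must be computed explicitly.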

  13. Rigorous Statistical Bounds in Uncertainty Quantification for One-Layer Turbulent Geophysical Flows

    Science.gov (United States)

    Qi, Di; Majda, Andrew J.

    2018-04-01

    Statistical bounds controlling the total fluctuations in mean and variance about a basic steady-state solution are developed for the truncated barotropic flow over topography. Statistical ensemble prediction is an important topic in weather and climate research. Here, the evolution of an ensemble of trajectories is considered using statistical instability analysis and is compared and contrasted with the classical deterministic instability for the growth of perturbations in one pointwise trajectory. The maximum growth of the total statistics in fluctuations is derived relying on the statistical conservation principle of the pseudo-energy. The saturation bound of the statistical mean fluctuation and variance in the unstable regimes with non-positive-definite pseudo-energy is achieved by linking with a class of stable reference states and minimizing the stable statistical energy. Two cases with dependence on initial statistical uncertainty and on external forcing and dissipation are compared and unified under a consistent statistical stability framework. The flow structures and statistical stability bounds are illustrated and verified by numerical simulations among a wide range of dynamical regimes, where subtle transient statistical instability exists in general with positive short-time exponential growth in the covariance even when the pseudo-energy is positive-definite. Among the various scenarios in this paper, there exist strong forward and backward energy exchanges between different scales which are estimated by the rigorous statistical bounds.

  14. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors

    Directory of Open Access Journals (Sweden)

    Spiros Pagiatakis

    2009-10-01

    In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using the Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at −40 °C, −20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain an optimal navigation solution for MEMS-based INS/GPS integration.
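
    A first-order Gauss-Markov process of the kind underlying such AR-based error models is straightforward to simulate. The Python sketch below uses the generic textbook discretization; the correlation times and noise level are illustrative placeholders, not the temperature-dependent ADIS16364 values identified in the paper.

        import numpy as np

        def simulate_gm(n, dt, tau, sigma, rng):
            """First-order Gauss-Markov: x[k] = exp(-dt/tau) * x[k-1] + w[k],
            with w scaled so the stationary standard deviation is sigma."""
            phi = np.exp(-dt / tau)
            w_std = sigma * np.sqrt(1.0 - phi ** 2)
            x = np.zeros(n)
            for k in range(1, n):
                x[k] = phi * x[k - 1] + rng.normal(0.0, w_std)
            return x

        rng = np.random.default_rng(0)
        for tau in (50.0, 200.0):  # correlation times in seconds (placeholders)
            x = simulate_gm(n=100000, dt=0.01, tau=tau, sigma=0.02, rng=rng)
            print(f"tau = {tau:>5.0f} s -> sample std = {x.std():.4f}")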

  15. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    Science.gov (United States)

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain optimal navigation solution for MEMS-based INS/GPS integration.

  16. Diffraction-based overlay measurement on dedicated mark using rigorous modeling method

    Science.gov (United States)

    Lu, Hailiang; Wang, Fan; Zhang, Qingyun; Chen, Yonghui; Zhou, Chang

    2012-03-01

    Diffraction Based Overlay (DBO) has been widely evaluated by numerous authors; results show DBO can provide better performance than Imaging Based Overlay (IBO). However, DBO has its own problems. As is well known, modeling-based DBO (mDBO) faces challenges of low measurement sensitivity and crosstalk between various structure parameters, which may result in poor accuracy and precision. Meanwhile, the main obstacle encountered by empirical DBO (eDBO) is that several pads must be employed to gain sufficient information on overlay-induced diffraction signature variations, which consumes more wafer space and more measuring time. Also, eDBO may suffer from mark profile asymmetry caused by processes. In this paper, we propose an alternative DBO technology that employs a dedicated overlay mark and takes a rigorous modeling approach. This technology needs only two or three pads for each direction, which is economical and time saving. While reducing the overlay measurement error induced by mark profile asymmetry, this technology is expected to be as accurate and precise as scatterometry technologies.

  17. Estimation of the convergence order of rigorous coupled-wave analysis for OCD metrology

    Science.gov (United States)

    Ma, Yuan; Liu, Shiyuan; Chen, Xiuguo; Zhang, Chuanwei

    2011-12-01

    In most cases of optical critical dimension (OCD) metrology, when applying rigorous coupled-wave analysis (RCWA) to optical modeling, a high order of Fourier harmonics is usually set to guarantee the convergence of the final results. However, the total number of floating point operations grows dramatically as the truncation order increases. Therefore, it is critical to choose an appropriate order that achieves high computational efficiency without losing much accuracy. In this paper, the convergence order associated with the structural and optical parameters has been estimated through simulation. The results indicate that the convergence order is linear in the period of the sample when the other parameters are fixed, both for planar diffraction and conical diffraction. The illumination wavelength also affects the convergence of the final result. Further investigation concentrating on the ratio of illumination wavelength to period shows that the convergence order decreases as the ratio grows, and that when the ratio is fixed, the convergence order varies only slightly, especially within a specific range of wavelengths. This characteristic can be applied to estimate the optimum convergence order for given samples to obtain high computational efficiency.
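
    In practice, an appropriate truncation order can be found empirically by increasing the order until the quantity of interest stabilizes. A minimal sketch of such a convergence check follows; rcwa_solve is a hypothetical solver returning, say, a zeroth-order diffraction efficiency (no specific RCWA library is assumed), and the dummy solver in the demo merely mimics 1/order² convergence.

        def converged_order(rcwa_solve, tol=1e-4, max_order=60):
            """Raise the Fourier truncation order until successive results
            differ by less than tol; return the order and the value."""
            prev = rcwa_solve(order=1)
            for order in range(2, max_order + 1):
                cur = rcwa_solve(order=order)
                if abs(cur - prev) < tol:
                    return order, cur
                prev = cur
            raise RuntimeError("no convergence up to max_order")

        # Demo with a dummy solver converging like 1/order**2.
        print(converged_order(lambda order: 1.0 - 1.0 / order ** 2))

    Consistent with the trends reported above, the order returned by such a search should grow roughly linearly with the grating period and shrink as the wavelength-to-period ratio grows.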

  18. Conformational distributions and proximity relationships in the rigor complex of actin and myosin subfragment-1.

    Science.gov (United States)

    Nyitrai, M; Hild, G; Lukács, A; Bódis, E; Somogyi, B

    2000-01-28

    Cyclic conformational changes in the myosin head are considered essential for muscle contraction. We hereby show that the extension of the fluorescence resonance energy transfer method described originally by Taylor et al. (Taylor, D. L., Reidler, J., Spudich, J. A., and Stryer, L. (1981) J. Cell Biol. 89, 362-367) allows determination of the position of a labeled point outside the actin filament in supramolecular complexes and also characterization of the conformational heterogeneity of an actin-binding protein while considering donor-acceptor distance distributions. Using this method we analyzed proximity relationships between two labeled points of S1 and the actin filament in the acto-S1 rigor complex. The donor (N-[[(iodoacetyl)amino]ethyl]-5-naphthylamine-1-sulfonate) was attached to either the catalytic domain (Cys-707) or the essential light chain (Cys-177) of S1, whereas the acceptor (5-(iodoacetamido)fluorescein) was attached to the actin filament (Cys-374). In contrast to the narrow positional distribution (assumed as being Gaussian) of Cys-707 (5 +/- 3 A), the positional distribution of Cys-177 was found to be broad (102 +/- 4 A). Such a broad positional distribution of the label on the essential light chain of S1 may be important in accommodating the helically arranged acto-myosin binding relative to the filament axis.
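
    The distance dependence underlying these measurements is the standard Förster relation (quoted for orientation; R_0 is the Förster radius of the donor-acceptor pair). The distance-distribution analysis described above amounts to averaging the transfer efficiency over a distribution p(r) rather than assuming a single distance:

        E(r) = \frac{R_0^6}{R_0^6 + r^6}, \qquad
        \langle E \rangle = \int_0^{\infty} p(r)\, \frac{R_0^6}{R_0^6 + r^6}\, dr,

    with a narrow p(r) (as found for Cys-707) collapsing to the single-distance formula and a broad p(r) (as found for Cys-177) washing out the sharp r⁻⁶ dependence.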

  19. RIGOROUS PHOTOGRAMMETRIC PROCESSING OF CHANG'E-1 AND CHANG'E-2 STEREO IMAGERY FOR LUNAR TOPOGRAPHIC MAPPING

    OpenAIRE

    K. Di; Y. Liu; B. Liu; M. Peng

    2012-01-01

    Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of the landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D c...

  20. Jackknife Variance Estimator for Two Sample Linear Rank Statistics

    Science.gov (United States)

    1988-11-01

    Keywords: strong consistency; linear rank test; influence function.
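
    The delete-one jackknife variance estimator at the heart of such results is short to state in code. The sketch below is the generic textbook form for an i.i.d. sample, not the two-sample linear rank statistic of this report:

        import numpy as np

        def jackknife_variance(x, statistic):
            """Delete-one jackknife estimate of Var[statistic(x)]."""
            x = np.asarray(x)
            n = len(x)
            theta = np.array([statistic(np.delete(x, i)) for i in range(n)])
            return (n - 1) / n * np.sum((theta - theta.mean()) ** 2)

        rng = np.random.default_rng(1)
        sample = rng.normal(size=200)
        # For the sample mean, this reproduces the usual estimator s**2 / n.
        print(jackknife_variance(sample, np.mean))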

  1. Robust Trypsin Coating on Electrospun Polymer Nanofibers in Rigorous Conditions and Its Uses for Protein Digestion

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Hye-Kyung; Kim, Byoung Chan; Jun, Seung-Hyun; Chang, Mun Seock; Lopez-Ferrer, Daniel; Smith, Richard D.; Gu, Man Bock; Lee, Sang-Won; Kim, Beom S.; Kim, Jungbae

    2010-12-15

    Efficient protein digestion in proteomic analysis requires the stabilization of proteases such as trypsin. In the present work, trypsin was stabilized in the form of an enzyme coating on electrospun polymer nanofibers (EC-TR), which crosslinks additional trypsin molecules onto covalently-attached trypsin (CA-TR). EC-TR showed better stability than CA-TR under rigorous conditions, such as at high temperatures of 40 °C and 50 °C, in the presence of organic co-solvents, and at various pH values. For example, the half-lives of CA-TR and EC-TR were 0.24 and 163.20 hours at 40 °C, respectively. The improved stability of EC-TR can be explained by the covalent linkages on the surface of the trypsin molecules, which effectively inhibit the denaturation, autolysis, and leaching of trypsin. Protein digestion was performed at 40 °C by using both CA-TR and EC-TR to digest a model protein, enolase. EC-TR showed better performance and stability than CA-TR, maintaining good enolase digestion under recycled use for a period of one week. Under the same conditions, CA-TR showed poor performance from the beginning and could not be used for digestion at all after a few uses. The enzyme coating approach is anticipated to be successfully employed not only for protein digestion in proteomic analysis, but also in various other fields where poor enzyme stability presently hampers the practical application of enzymes.

  2. Rigorous construction and Hadamard property of the Unruh state in Schwarzschild spacetime

    International Nuclear Information System (INIS)

    Dappiaggi, Claudio; Pinamonti, Nicola

    2009-07-01

    The discovery of the radiation properties of black holes prompted the search for a natural candidate quantum ground state for a massless scalar field theory on Schwarzschild spacetime, here considered in the Eddington-Finkelstein representation. Among the several available proposals in the literature, an important physical role is played by the so-called Unruh state, which is supposed to be appropriate to capture the physics of a black hole formed by spherically symmetric collapsing matter. In this respect, we shall consider a massless Klein-Gordon field and we shall rigorously and globally construct such a state, that is, on the algebra of Weyl observables localised in the union of the static external region, the future event horizon and the non-static black hole region. Eventually, out of a careful use of microlocal techniques, we prove that the constructed state fulfils, where defined, the so-called Hadamard condition; hence, it is perturbatively stable, in other words realizing the natural candidate with which one could study purely quantum phenomena such as the role of the back reaction of Hawking's radiation. From a geometrical point of view, we shall make profitable use of a bulk-to-boundary reconstruction technique which carefully exploits the Killing horizon structure as well as the conformal asymptotic behaviour of the underlying background. From an analytical point of view, our tools will range from Hörmander's theorem on propagation of singularities to results on the role of passive states and a detailed use of the recently discovered peeling behaviour of the solutions of the wave equation in Schwarzschild spacetime. (orig.)

  3. Rigor mortis at the myocardium investigated by post-mortem magnetic resonance imaging.

    Science.gov (United States)

    Bonzon, Jérôme; Schön, Corinna A; Schwendener, Nicole; Zech, Wolf-Dieter; Kara, Levent; Persson, Anders; Jackowski, Christian

    2015-12-01

    Post-mortem cardiac MR exams present with different contraction appearances of the left ventricle in cardiac short axis images. It was hypothesized that the grade of post-mortem contraction may be related to the post-mortem interval (PMI) or cause of death, and that, as a phenomenon caused by internal rigor mortis, it may give further insights into the circumstances of death. The cardiac contraction grade was investigated in 71 post-mortem cardiac MR exams (mean age at death 52 y, range 12-89 y; 48 males, 23 females). In cardiac short axis images the left ventricular lumen volume as well as the left ventricular myocardial volume were assessed by manual segmentation. The quotient of the two (LVQ) represents the grade of myocardial contraction. LVQ was correlated with the PMI, sex, age, cardiac weight, body mass and height, cause of death, and pericardial tamponade when present. For cardiac causes of death, a separate correlation was investigated for acute myocardial infarction cases and arrhythmic deaths. LVQ values ranged from 1.99 (maximum dilatation) to 42.91 (maximum contraction) with a mean of 15.13. LVQ decreased slightly with increasing PMI, however without significant correlation. Pericardial tamponade positively correlated with higher LVQ values. Variables such as sex, age, body mass and height, cardiac weight and cause of death did not correlate with LVQ values. There was no difference in LVQ values between myocardial infarction without tamponade and arrhythmic deaths. Based on the observations in our investigated cases, the phenomenon of post-mortem myocardial contraction cannot be explained by the influence of the investigated variables, except in pericardial tamponade cases. Further research addressing post-mortem myocardial contraction has to focus on other, less obvious factors, which may also influence the early post-mortem phase. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. A Generic Model for Relative Adjustment Between Optical Sensors Using Rigorous Orbit Mechanics

    Directory of Open Access Journals (Sweden)

    B. Islam

    2008-06-01

    Classical calibration, or space resection, is the fundamental task in photogrammetry. Insufficient knowledge of the interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. One of the earliest approaches used in photogrammetry was the plumb-line calibration method. This method is suitable for recovering the radial and decentering lens distortion coefficients, while the remaining interior (focal length and principal point coordinates) and exterior orientation parameters have to be determined by a complementary method. As the lens distortion remains very small, it is not considered among the interior orientation parameters in the present rigorous sensor model. There are several other available methods based on the photogrammetric collinearity equations, which consider the determination of exterior orientation parameters, with no mention of the simultaneous determination of interior orientation parameters. Normal space resection methods solve the problem using control points whose coordinates are known in both the image and object reference systems. The non-linearity of the model, the difficulty of point location in digital images, and the difficulty of identifying a sufficient number of GPS-measured control points are the main drawbacks of the classical approaches. This paper addresses a mathematical model based on the fundamental assumption of collinearity of three points of two along-track stereo imagery sensors and an independent object point. Assuming this condition, it is possible to extract the exterior orientation (EO) parameters for a long strip and a single image together, both without and with the use of control points. Moreover, after extracting the EO parameters, the accuracy of the satellite data products is compared using single and no control points.
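
    For reference, the collinearity condition underlying such models takes the standard photogrammetric form (textbook statement, not the paper's specific along-track formulation; conventions for defining the rotation matrix vary between texts):

        x - x_0 = -f\,\frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
                          {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}, \qquad
        y - y_0 = -f\,\frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
                          {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)},

    where (x_0, y_0, f) are the interior orientation parameters, (X_S, Y_S, Z_S) is the perspective centre, and the r_ij are the elements of the rotation matrix encoding the exterior orientation angles.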

  5. Rigorous Performance Evaluation of Smartphone GNSS/IMU Sensors for ITS Applications

    Directory of Open Access Journals (Sweden)

    Vassilis Gikas

    2016-08-01

    With the rapid growth in smartphone technologies and improvement in their navigation sensors, an increasing amount of location information is now available, opening the road to the provision of new Intelligent Transportation System (ITS) services. Current smartphone devices embody miniaturized Global Navigation Satellite System (GNSS), Inertial Measurement Unit (IMU) and other sensors capable of providing user position, velocity and attitude. However, it is hard to characterize their actual positioning and navigation performance capabilities due to the disparate sensor and software technologies adopted among manufacturers and the high influence of environmental conditions, and therefore a unified certification process is missing. This paper presents the analysis results obtained from the assessment of two modern smartphones regarding their positioning accuracy (i.e., precision and trueness) capabilities (i.e., potential and limitations) based on a practical but rigorous methodological approach. Our investigation relies on the results of several vehicle tracking (i.e., cruising and maneuvering) tests realized through comparing smartphone-obtained trajectories and kinematic parameters to those derived using a high-end GNSS/IMU system and advanced filtering techniques. Performance testing is undertaken for the HTC One S (Android) and iPhone 5s (iOS). Our findings indicate that the deviation of the smartphone locations from ground truth (trueness) deteriorates by a factor of two in obscured environments compared to those derived in open sky conditions. Moreover, it appears that the iPhone 5s produces relatively smaller and less dispersed error values compared to those computed for the HTC One S. Also, the navigation solution of the HTC One S appears to adapt faster to changes in environmental conditions, suggesting a somewhat different data filtering approach for the iPhone 5s. Testing the accuracy of the accelerometer and gyroscope sensors for a number of

  6. Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System

    Science.gov (United States)

    Goluskin, David

    2018-04-01

    We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) ↦ (−x, −y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r−1)^3 at the nonzero equilibria, and the mean of xy^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.
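
    The bounding principle behind these computations can be stated compactly (a generic statement of the auxiliary-function method for an ODE dx/dt = f(x), given for orientation). If a differentiable function V satisfies, for all states x,

        \Phi(x) + f(x) \cdot \nabla V(x) \le U,

    then every bounded trajectory obeys limsup_{T→∞} (1/T) ∫₀ᵀ Φ(x(t)) dt ≤ U, since the time average of f·∇V = dV/dt vanishes on bounded trajectories. Requiring the polynomial U − Φ − f·∇V to be a sum of squares is what turns the search for V into the semidefinite program described above.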

  7. Analysis of specular resonance in dielectric bispheres using rigorous and geometrical-optics theories.

    Science.gov (United States)

    Miyazaki, Hideki T; Miyazaki, Hiroshi; Miyano, Kenjiro

    2003-09-01

    We have recently identified the resonant scattering from dielectric bispheres in the specular direction, which has long been known as the specular resonance, to be a type of rainbow (a caustic) and a general phenomenon for bispheres. We discuss the details of the specular resonance on the basis of systematic calculations. In addition to the rigorous theory, which precisely describes the scattering even in the resonance regime, the ray-tracing method, which gives the scattering in the geometrical-optics limit, is used. Specular resonance is explicitly defined as strong scattering in the direction of the specular reflection from the symmetrical axis of the bisphere whose intensity exceeds that of the scattering from noninteracting bispheres. Then the range of parameters for computing a particular specular resonance is specified. This resonance becomes prominent in a wide range of refractive indices (from 1.2 to 2.2) in a wide range of size parameters (from five to infinity) and for an arbitrarily polarized light incident within an angle of 40 degrees to the symmetrical axis. This particular scattering can stay evident even when the spheres are not in contact or the sizes of the spheres are different. Thus specular resonance is a common and robust phenomenon in dielectric bispheres. Furthermore, we demonstrate that various characteristic features in the scattering from bispheres can be explained successfully by using intuitive and simple representations. Most of the significant scatterings other than the specular resonance are also understandable as caustics in geometrical-optics theory. The specular resonance becomes striking at the smallest size parameter among these caustics because its optical trajectory is composed of only the refractions at the surfaces and has an exceptionally large intensity. However, some characteristics are not accounted for by geometrical optics. In particular, the oscillatory behaviors of their scattering intensity are well described by

  8. Unforgivable Sinners? Epistemological and Psychological Naturalism in Husserl’s Philosophy as a Rigorous Science

    Directory of Open Access Journals (Sweden)

    Andrea Sebastiano Staiti

    2012-01-01

    In this paper I present and assess Husserl's arguments against epistemological and psychological naturalism in his essay Philosophy as a Rigorous Science. I show that his critique is directed against positions that are generally more extreme than most currently debated variants of naturalism. However, Husserl has interesting thoughts to contribute to philosophy today. First, he shows that there is an important connection between naturalism in epistemology (which in his view amounts to the position that the validity of logic can be reduced to the validity of natural laws of thinking) and naturalism in psychology (which in his view amounts to the position that all psychic occurrences are merely parallel accompaniments of physiological occurrences). Second, he shows that a strong version of epistemological naturalism is self-undermining and fails to translate the cogency of logic into psychological terms. Third, and most importantly for current debates, he attacks Cartesianism as a form of psychological naturalism because of its construal of the psyche as a substance. Against this position, Husserl defends the necessity of formulating new epistemic aims for the investigation of consciousness. He contends that what is most interesting about consciousness is not its empirical fact but its transcendental function of granting cognitive access to all kinds of objects (both empirical and ideal). The study of this function requires a specific method (eidetics) that cannot be conflated with empirical methods. I conclude that Husserl's analyses offer much-needed insight into the fabric of consciousness and compelling arguments against unwarranted metaphysical speculations about the relationship between mind and body.

  9. A Development of Advanced Rigorous 2 Step System for the High Resolution Residual Dose Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Do Hyun; Kim, Jong Woo; Kim, Jea Hyun; Lee, Jae Yong; Shin, Chang Ho [Hanyang Univ., Seoul (Korea, Republic of); Kim, Song Hyun [Kyoto University, Sennan (Japan)

    2016-10-15

    Nowadays, activation problems such as residual radiation are among the important issues. Activated devices and structures can emit residual radiation; therefore, activation should be properly analyzed to support the design, operation, and decontamination of nuclear facilities. For activation calculation, the Rigorous 2 Step (R2S) method is introduced with the following strategy: (1) a particle transport calculation is performed for the object geometry to obtain particle spectra and total fluxes; (2) the inventories of each cell are calculated by using the flux information according to the irradiation and decay history; (3) the residual gamma distribution is evaluated by a transport code, if needed. This scheme is based on cell-wise calculation over the geometry used. In this method, the particle spectra and total fluxes are obtained by mesh tally for the activation calculation, which is useful for reducing the effects of flux gradients. Nevertheless, several limitations are known: first, high relative errors in the spectra when many meshes are used; second, flux information in a mesh tally that differs from the spectrum in void regions. To calculate high-resolution residual dose, several methods have been developed, such as R2Smesh and MCR2S unstructured mesh. The R2Smesh method provides better efficiency for obtaining neutron spectra by using fine/coarse meshes. Also, MCR2S unstructured mesh can effectively separate the void spectrum. In this study, the AR2S system was developed to combine the features of these mesh-based R2S methods. To confirm the AR2S system, a simple activation problem was evaluated and compared with the R2S method using the same division; the results agree within 0.83%. Therefore, it is expected that the AR2S system can properly estimate activation problems.
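
    The three-step coupling described above can be summarized in a schematic workflow. The sketch below is an outline of the generic R2S flow only; the three helper functions are hypothetical stand-ins for the actual transport and inventory codes (trivial stubs here so the sketch runs end to end).

        # Schematic R2S workflow; helpers are hypothetical stand-ins.
        def transport_neutrons(geometry):
            # Step 1 stand-in: per-cell neutron spectra and total fluxes.
            return {cell: {"spectrum": [1.0], "total_flux": 1.0} for cell in geometry}

        def activate_inventory(spectra, history, decay_time):
            # Step 2 stand-in: decay-gamma source strength per cell.
            return {cell: 1.0 / (1.0 + decay_time) for cell in spectra}

        def transport_photons(geometry, gamma_source):
            # Step 3 stand-in: residual dose from the gamma source.
            return sum(gamma_source.values())

        def rigorous_two_step(geometry, history, decay_times):
            spectra = transport_neutrons(geometry)              # step 1: n transport
            doses = {}
            for t in decay_times:                               # step 2: inventories
                source = activate_inventory(spectra, history, t)
                doses[t] = transport_photons(geometry, source)  # step 3: gamma dose
            return doses

        print(rigorous_two_step(["cell_1", "cell_2"], "1y_full_power", [0.0, 1.0]))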

  10. Rigorous construction and Hadamard property of the Unruh state in Schwarzschild spacetime

    Energy Technology Data Exchange (ETDEWEB)

    Dappiaggi, Claudio; Pinamonti, Nicola [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik; Moretti, Valter [Trento Univ., Povo (Italy). Dipt. di Matematica; Istituto Nazionale di Fisica Nucleare, Povo (Italy); Istituto Nazionale di Alta Matematica ' ' F. Severi' ' , GNFM, Sesto Fiorentino (Italy)

    2009-07-15

    The discovery of the radiation properties of black holes prompted the search for a natural candidate quantum ground state for a massless scalar field theory on Schwarzschild spacetime, here considered in the Eddington-Finkelstein representation. Among the several proposals available in the literature, an important physical role is played by the so-called Unruh state, which is supposed to capture the physics of a black hole formed by spherically symmetric collapsing matter. In this respect, we consider a massless Klein-Gordon field and rigorously construct such a state globally, that is, on the algebra of Weyl observables localised in the union of the static external region, the future event horizon and the non-static black hole region. Eventually, through a careful use of microlocal techniques, we prove that the state so built fulfils, where defined, the so-called Hadamard condition; hence, it is perturbatively stable, in other words realizing the natural candidate with which one could study purely quantum phenomena such as the role of the back reaction of Hawking's radiation. From a geometrical point of view, we make profitable use of a bulk-to-boundary reconstruction technique which carefully exploits the Killing horizon structure as well as the conformal asymptotic behaviour of the underlying background. From an analytical point of view, our tools range from Hoermander's theorem on the propagation of singularities, through results on the role of passive states, to a detailed use of the recently discovered peeling behaviour of the solutions of the wave equation in Schwarzschild spacetime. (orig.)

  11. Atlantic salmon skin and fillet color changes effected by perimortem handling stress, rigor mortis, and ice storage.

    Science.gov (United States)

    Erikson, U; Misimi, E

    2008-03-01

    The changes in skin and fillet color of anesthetized and exhausted Atlantic salmon were determined immediately after killing, during rigor mortis, and after ice storage for 7 d. Skin color (CIE L*, a*, b*, and related values) was determined by a Minolta Chroma Meter. Roche SalmoFan Lineal and Roche Color Card values were determined by a computer vision method and a sensory panel. Before color assessment, the stress levels of the 2 fish groups were characterized in terms of white muscle parameters (pH, rigor mortis, and core temperature). The results showed that perimortem handling stress initially significantly affected several color parameters of skin and fillets. Significant transient fillet color changes also occurred in the prerigor phase and during the development of rigor mortis. Our results suggested that fillet color was affected by postmortem glycolysis (pH drop, particularly in anesthetized fillets), then by onset and development of rigor mortis. The color change patterns during storage were different for the 2 groups of fish. The computer vision method was considered suitable for automated (online) quality control and grading of salmonid fillets according to color.

  12. Post mortem rigor development in the Egyptian goose (Alopochen aegyptiacus) breast muscle (pectoralis): factors which may affect the tenderness.

    Science.gov (United States)

    Geldenhuys, Greta; Muller, Nina; Frylinck, Lorinda; Hoffman, Louwrens C

    2016-01-15

    Baseline research on the toughness of Egyptian goose meat is required. This study therefore investigates the post mortem pH and temperature decline (15 min-4 h 15 min post mortem) in the pectoralis muscle (breast portion) of this gamebird species. It also explores the enzyme activity of the Ca(2+)-dependent proteases (the calpain system) and the lysosomal cathepsins during the rigor mortis period. No differences were found for any of the variables between genders. The pH decline in the pectoralis muscle occurs quite rapidly (c = -0.806; ultimate pH ∼ 5.86) compared with other species, and it is speculated that the high rigor temperature (>20 °C) may contribute to the increased toughness. No calpain I was found in Egyptian goose meat and the µ/m-calpain activity remained constant during the rigor period, while a decrease in calpastatin activity was observed. The cathepsin B, B & L and H activity increased over the rigor period. Further research into the connective tissue content and myofibrillar breakdown during aging is required to establish whether the proteolytic enzymes do in fact contribute to tenderisation. © 2015 Society of Chemical Industry.

  13. Is Collaborative, Community-Engaged Scholarship More Rigorous than Traditional Scholarship? On Advocacy, Bias, and Social Science Research

    Science.gov (United States)

    Warren, Mark R.; Calderón, José; Kupscznk, Luke Aubry; Squires, Gregory; Su, Celina

    2018-01-01

    Contrary to the charge that advocacy-oriented research cannot meet social science research standards because it is inherently biased, the authors of this article argue that collaborative, community-engaged scholarship (CCES) must meet high standards of rigor if it is to be useful to support equity-oriented, social justice agendas. In fact, they…

  14. Rigorous Line-Based Transformation Model Using the Generalized Point Strategy for the Rectification of High Resolution Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Kun Hu

    2016-09-01

    Full Text Available High precision geometric rectification of High Resolution Satellite Imagery (HRSI) is the basis of digital mapping and Three-Dimensional (3D) modeling. By taking advantage of line features as basic geometric control conditions instead of control points, the Line-Based Transformation Model (LBTM) provides a practical and efficient way of rectifying imagery. It can accurately build the mathematical relationship between image space and the corresponding object space, while dramatically reducing the workload of ground control and feature recognition. Based on a generalization and analysis of existing LBTMs, a novel rigorous LBTM is proposed in this paper, which can further eliminate the geometric deformation caused by sensor inclination and terrain variation. This improved nonlinear LBTM is constructed using a generalized point strategy and solved by overall least-squares adjustment. Geo-positioning accuracy experiments with IKONOS, GeoEye-1 and ZiYuan-3 satellite imagery are performed to compare the rigorous LBTM with other relevant line-based and point-based transformation models. Both theoretical analysis and experimental results demonstrate that the rigorous LBTM is more accurate and reliable without adding extra ground control. The geo-positioning accuracy of satellite imagery rectified by the rigorous LBTM can reach about one pixel with eight control lines and can be further improved by optimizing the horizontal and vertical distribution of the control lines.
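
    The rigorous LBTM of the paper is nonlinear and built on a generalized point strategy, which the record does not spell out. As a much simpler editorial illustration of the underlying idea (control lines instead of control points, solved by least squares), the Python sketch below fits a toy affine transformation from line correspondences; the coordinates and the "true" mapping are invented.

      import numpy as np

      # Toy affine LBTM: each object-space control line, mapped into image
      # space, yields linear observation equations in the six affine unknowns.
      obj_lines = [((0, 0), (10, 0)), ((0, 0), (0, 10)),
                   ((10, 0), (0, 10)), ((2, 3), (7, 9))]
      true_A = np.array([[1.02, 0.01], [-0.02, 0.98]])   # assumed "true" mapping
      true_t = np.array([5.0, -3.0])

      def line_through(p, q):
          # Image line a*x + b*y + c = 0 through two image points.
          (x1, y1), (x2, y2) = p, q
          a, b = y1 - y2, x2 - x1
          return a, b, -(a * x1 + b * y1)

      rows, rhs = [], []
      for P, Q in obj_lines:
          a, b, c = line_through(true_A @ P + true_t, true_A @ Q + true_t)
          for X, Y in (P, Q):   # sampled object points must fall on the image line
              rows.append([a * X, a * Y, a, b * X, b * Y, b])
              rhs.append(-c)
      params, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
      print(params.reshape(2, 3))   # rows: [A11 A12 tx], [A21 A22 ty]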

  15. Useful, Used, and Peer Approved: The Importance of Rigor and Accessibility in Postsecondary Research and Evaluation. WISCAPE Viewpoints

    Science.gov (United States)

    Vaade, Elizabeth; McCready, Bo

    2012-01-01

    Traditionally, researchers, policymakers, and practitioners have perceived a tension between rigor and accessibility in quantitative research and evaluation in postsecondary education. However, this study indicates that both producers and consumers of these studies value high-quality work and clear findings that can reach multiple audiences. The…

  16. EVALUACIÓN DE DOS MÉTODOS DE ESTABILIDAD FENOTÍPICA A TRAVÉS DE VALIDACIÓN CRUZADA EVALUATION OF TWO METHODS OF PHENOTYPIC STABILITY THROUGH CROSS-VALIDATION

    Directory of Open Access Journals (Sweden)

    Jairo Alberto Rueda Restrepo

    2009-12-01

    Full Text Available One of the main concerns of plant breeders is the evaluation of phenotypic stability through regional or multi-environment trials. Numerous methods have been proposed for analyzing such trials and estimating phenotypic stability. This paper compares the regression method proposed by Eberhart and Russell with the variance-components method proposed by Shukla, following a cross-validation scheme. Data from 20 multi-environment maize trials, each with nine genotypes, planted under a randomized complete block design with four replications, were used. The best model for predicting the future performance of a genotype in a given environment was found to be the method of Eberhart and Russell, which showed a root mean square error of prediction 2.21% lower than Shukla's method, with a prediction consistency of 90.6%.
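
    A minimal sketch of this kind of cross-validation comparison is given below in Python, on simulated genotype-by-environment data (all numbers invented). It fits the Eberhart-Russell regression on the replications left in and scores the held-out replication by RMSEP; scoring Shukla's method would follow the same leave-one-replication-out scheme.

      import numpy as np

      rng = np.random.default_rng(0)
      G, E, R = 9, 20, 4                       # genotypes, environments, replications
      mu = rng.normal(6.0, 0.5, G)             # toy genotype means
      env = rng.normal(0.0, 1.0, E)            # toy environment effects
      beta = rng.normal(1.0, 0.2, G)           # genotype-specific sensitivity
      y = mu[:, None] + beta[:, None] * env[None, :]
      y = y[:, :, None] + rng.normal(0, 0.4, (G, E, R))   # replicated trials

      def rmsep_er(y):
          # Leave one replication out, fit Eberhart-Russell on the rest,
          # and accumulate squared prediction errors on the held-out data.
          err = []
          for r in range(R):
              train = np.delete(y, r, axis=2).mean(axis=2)   # G x E means
              idx = train.mean(axis=0) - train.mean()        # environment index I_j
              for i in range(G):
                  b, a = np.polyfit(idx, train[i], 1)        # slope beta_i, mean mu_i
                  err.append((a + b * idx - y[i, :, r]) ** 2)
          return float(np.sqrt(np.mean(err)))

      print("RMSEP (Eberhart-Russell):", round(rmsep_er(y), 3))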

  17. Causality as a Rigorous Notion and Quantitative Causality Analysis with Time Series

    Science.gov (United States)

    Liang, X. S.

    2017-12-01

    Given two time series, can one faithfully tell, in a rigorous and quantitative way, the cause and effect between them? Here we show that this important and challenging question (one of the major challenges in the science of big data), which is of interest in a wide variety of disciplines, has a positive answer. In particular, for linear systems, the maximum likelihood estimator of the causality from a series X2 to another series X1, written T2→1, turns out to be concise in form: T2→1 = [C11 C12 C2,d1 − C12² C1,d1] / [C11² C22 − C11 C12²], where Cij (i,j = 1,2) is the sample covariance between Xi and Xj, and Ci,dj the covariance between Xi and ΔXj/Δt, the difference approximation of dXj/dt using the Euler forward scheme. An immediate corollary is that causation implies correlation, but not vice versa, resolving the long-standing debate over causation versus correlation. The above formula has been validated with touchstone series purportedly generated with one-way causality that evade the classical approaches such as the Granger causality test and transfer entropy analysis. It has also been applied successfully to the investigation of many real problems. Through a simple analysis with the stock series of IBM and GE, an unusually strong one-way causality is identified from the former to the latter in their early era, revealing to us an old story, which has almost faded into oblivion, about "Seven Dwarfs" competing with a "Giant" for the computer market. Another example presented here regards the cause-effect relation between the two climate modes, El Niño and the Indian Ocean Dipole (IOD). In general, these modes are mutually causal, but the causality is asymmetric. To El Niño, the information flowing from the IOD manifests itself as a propagation of uncertainty from the Indian Ocean. In the third example, an unambiguous one-way causality is found between CO2 and the global mean temperature anomaly. While it is confirmed that CO2 indeed drives the recent global warming
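
    For concreteness, here is a small Python implementation of the quoted estimator, as reconstructed above from the garbled superscripts; treat the exact form as an editorial reading to be checked against Liang's papers before serious use. The toy system is one-way coupled, so T2→1 should clearly dominate T1→2.

      import numpy as np

      def liang_T21(x1, x2, dt=1.0):
          # Information flow from series x2 to series x1: sample covariances
          # C_ij plus the covariance of each series with the Euler forward
          # difference of x1, plugged into the formula quoted above.
          dx1 = (x1[1:] - x1[:-1]) / dt
          a, b = x1[:-1], x2[:-1]
          C = np.cov(np.vstack([a, b]))
          C11, C12, C22 = C[0, 0], C[0, 1], C[1, 1]
          C1d1 = np.cov(a, dx1)[0, 1]
          C2d1 = np.cov(b, dx1)[0, 1]
          return (C11 * C12 * C2d1 - C12**2 * C1d1) / (C11**2 * C22 - C11 * C12**2)

      rng = np.random.default_rng(1)
      n = 5000
      x1, x2 = np.zeros(n), np.zeros(n)
      for t in range(n - 1):
          x2[t + 1] = 0.7 * x2[t] + rng.normal()
          x1[t + 1] = 0.8 * x1[t] + 0.5 * x2[t] + 0.3 * rng.normal()
      print(liang_T21(x1, x2), liang_T21(x2, x1))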

  18. The Society for Implementation Research Collaboration Instrument Review Project: a methodology to promote rigorous evaluation.

    Science.gov (United States)

    Lewis, Cara C; Stanick, Cameo F; Martinez, Ruben G; Weiner, Bryan J; Kim, Mimi; Barwick, Melanie; Comtois, Katherine A

    2015-01-08

    science. Despite numerous constructs having greater than 20 available instruments, which implies saturation, preliminary results suggest that few instruments stem from gold standard development procedures. We anticipate identifying few high-quality, psychometrically sound instruments once our evidence-based assessment rating criteria have been applied. The results of this methodology may enhance the rigor of implementation science evaluations by systematically facilitating access to psychometrically validated instruments and identifying where further instrument development is needed.

  19. Rigorous study of the gap equation for an inhomogeneous superconducting state near T/sub c/

    International Nuclear Information System (INIS)

    Hu, C.

    1975-01-01

    A rigorous analytic study of the self-consistent gap equation (symbolically Δ = F_T[Δ]), for an inhomogeneous superconducting state, is presented in the Bogoliubov formulation. The gap function Δ(r) is taken to simulate a planar normal-superconducting phase boundary: Δ(r) = Δ∞ tanh(αΔ∞ z/v_F) Θ(z), where Δ∞(T) is the equilibrium gap, v_F is the Fermi velocity, and Θ(z) is a unit step function. First, a special space integral of the gap equation, ∝ ∫₀₊^∞ (F_T − Δ)(dΔ/dz) dz, is evaluated essentially exactly, except for a nonperturbative WKBJ approximation used in solving the Bogoliubov-de Gennes equations. It is then expanded near the transition temperature T_c in powers of Δ∞ ∝ (1 − T/T_c)^(1/2), demonstrating an exact cancellation of a subseries of ''anomalous-order'' terms. The leading surviving term is found to agree in order, but not in magnitude, with the Ginzburg-Landau-Gor'kov (GLG) approximation. The discrepancy is found to be linked to the slope discontinuity in our chosen Δ. A contour-integral technique in a complex-energy plane is then devised to evaluate the local value of F_T − Δ exactly. Our result reveals that near T_c this method can reproduce the GLG result essentially everywhere, except within about a BCS coherence length ξ(T) of a singularity in Δ, where F_T − Δ can have a singular contribution with an ''anomalous'' local magnitude not expected from the GLG approach. This anomalous term precisely accounts for the discrepancy found in the special integral of the gap equation mentioned above, and likely explains the ultimate origin of the anomalous terms found in the free energy of an isolated vortex line by Cleary.

  20. Ar-Ar_Redux: rigorous error propagation of 40Ar/39Ar data, including covariances

    Science.gov (United States)

    Vermeesch, P.

    2015-12-01

    Rigorous data reduction and error propagation algorithms are needed to realise Earthtime's objective to improve the interlaboratory accuracy of 40Ar/39Ar dating to better than 1% and thereby facilitate the comparison and combination of the K-Ar and U-Pb chronometers. Ar-Ar_Redux is a new data reduction protocol and software program for 40Ar/39Ar geochronology which takes into account two previously underappreciated aspects of the method. 1. 40Ar/39Ar measurements are compositional data. In its simplest form, the 40Ar/39Ar age equation can be written as: t = log(1 + J [40Ar/39Ar − 298.56 × 36Ar/39Ar])/λ = log(1 + JR)/λ, where λ is the 40K decay constant and J is the irradiation parameter. The age t does not depend on the absolute abundances of the three argon isotopes but only on their relative ratios. Thus, the 36Ar, 39Ar and 40Ar abundances can be normalised to unity and plotted on a ternary diagram or 'simplex'. Argon isotopic data are therefore subject to the peculiar mathematics of 'compositional data', sensu Aitchison (1986, The Statistical Analysis of Compositional Data, Chapman & Hall). 2. Correlated errors are pervasive throughout the 40Ar/39Ar method. Current data reduction protocols for 40Ar/39Ar geochronology propagate the age uncertainty as follows: σ²(t) = [J² σ²(R) + R² σ²(J)] / [λ² (1 + R J)²], which implies zero covariance between R and J. In reality, however, significant error correlations are found in every step of the 40Ar/39Ar data acquisition and processing, in both single and multi collector instruments, during blank, interference and decay corrections, age calculation etc. Ar-Ar_Redux revisits every aspect of the 40Ar/39Ar method by casting the raw mass spectrometer data into a contingency table of logratios, which automatically keeps track of all covariances in a compositional context. Application of the method to real data reveals strong correlations (r² of up to 0.9) between age measurements within a single irradiation batch. Properly taking
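
    To make the covariance point concrete, the short Python sketch below propagates the uncertainty of t = log(1 + J R)/λ to first order with and without a cov(R, J) term. This is an editorial illustration, not Ar-Ar_Redux's algorithm; the input numbers are invented, and λ is the conventional total 40K decay constant.

      import math

      LAMBDA = 5.543e-10   # total 40K decay constant [1/yr] (Steiger & Jaeger, 1977)

      def age_and_sigma(R, J, sR, sJ, cov_RJ=0.0):
          # First-order (delta-method) propagation for t = log(1 + J*R)/lambda.
          t = math.log(1 + J * R) / LAMBDA
          dtdR = J / (LAMBDA * (1 + J * R))
          dtdJ = R / (LAMBDA * (1 + J * R))
          var = dtdR**2 * sR**2 + dtdJ**2 * sJ**2 + 2 * dtdR * dtdJ * cov_RJ
          return t, math.sqrt(var)

      print(age_and_sigma(50.0, 0.01, 0.05, 1e-4))                            # cov = 0
      print(age_and_sigma(50.0, 0.01, 0.05, 1e-4, cov_RJ=0.9 * 0.05 * 1e-4))  # r = 0.9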

  1. Peridynamics as a rigorous coarse-graining of atomistics for multiscale materials design

    International Nuclear Information System (INIS)

    Lehoucq, Richard B.; Aidun, John Bahram; Silling, Stewart Andrew; Sears, Mark P.; Kamm, James R.; Parks, Michael L.

    2010-01-01

    This report summarizes activities undertaken during FY08-FY10 for the LDRD project 'Peridynamics as a Rigorous Coarse-Graining of Atomistics for Multiscale Materials Design'. The goal of our project was to develop a coarse-graining of finite temperature molecular dynamics (MD) that successfully transitions from statistical mechanics to continuum mechanics. Our coarse-graining overcomes the intrinsic limitation of coupling atomistics with classical continuum mechanics via the FEM (finite element method), SPH (smoothed particle hydrodynamics), or MPM (material point method); namely, that classical continuum mechanics assumes a local force interaction that is incompatible with the nonlocal force model of atomistic methods, a limitation that FEM, SPH, and MPM therefore inherit. This seemingly innocuous dichotomy has far-reaching consequences; for example, classical continuum mechanics cannot resolve the short wavelength behavior associated with atomistics. Other consequences include spurious forces, invalid phonon dispersion relationships, and irreconcilable descriptions/treatments of temperature. We propose a statistically based coarse-graining of atomistics via peridynamics and so develop a first-of-a-kind mesoscopic capability to enable consistent, thermodynamically sound, atomistic-to-continuum (AtC) multiscale material simulation. Peridynamics (PD) is a microcontinuum theory that assumes nonlocal forces for describing long-range material interaction. Force interactions occurring at finite distances are naturally accounted for in PD. Moreover, PD's nonlocal force model is entirely consistent with those used by atomistic methods, in stark contrast to classical continuum mechanics. Hence, PD can be employed for mesoscopic phenomena that are beyond the realms of classical continuum mechanics and

  2. Detrended cross-correlation coefficient: Application to predict apoptosis protein subcellular localization.

    Science.gov (United States)

    Liang, Yunyun; Liu, Sanyang; Zhang, Shengli

    2016-12-01

    Apoptosis, or programmed cell death, plays a central role in the development and homeostasis of an organism. Obtaining information on the subcellular location of apoptosis proteins is very helpful for understanding the apoptosis mechanism. The prediction of the subcellular localization of an apoptosis protein is still a challenging task, and existing methods are mainly based on protein primary sequences. In this paper, we introduce a new position-specific scoring matrix (PSSM)-based method using the detrended cross-correlation (DCCA) coefficient over non-overlapping windows. A 190-dimensional (190D) feature vector is then constructed on two widely used datasets, CL317 and ZD98, and a support vector machine is adopted as the classifier. To evaluate the proposed method, objective and rigorous jackknife cross-validation tests are performed on the two datasets. The results show that our approach offers a novel and reliable PSSM-based tool for the prediction of apoptosis protein subcellular localization. Copyright © 2016 Elsevier Inc. All rights reserved.
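
    The record does not detail the feature construction, but its core quantity, the detrended cross-correlation coefficient over non-overlapping windows, can be sketched in a few lines of Python. In the paper the coefficient is computed between PSSM columns to build the 190D vector, and jackknife cross-validation then leaves out one protein at a time; the sketch below shows only the coefficient itself, for two generic series with arbitrary window length and test data.

      import numpy as np

      def dcca_coefficient(x, y, n):
          # rho_DCCA over non-overlapping windows of length n: detrended
          # covariance of the two integrated profiles, normalised by their
          # detrended variances.
          X = np.cumsum(x - np.mean(x))
          Y = np.cumsum(y - np.mean(y))
          f_xy = f_xx = f_yy = 0.0
          t = np.arange(n)
          for w in range(len(x) // n):
              xs, ys = X[w * n:(w + 1) * n], Y[w * n:(w + 1) * n]
              rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # residuals from local trend
              ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
              f_xy += np.mean(rx * ry)
              f_xx += np.mean(rx**2)
              f_yy += np.mean(ry**2)
          return f_xy / np.sqrt(f_xx * f_yy)

      rng = np.random.default_rng(2)
      a = rng.normal(size=1000)
      b = 0.6 * a + 0.8 * rng.normal(size=1000)
      print(round(dcca_coefficient(a, b, 20), 3))   # positive, well below 1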

  3. Effect of pre-rigor stretch and various constant temperatures on the rate of post-mortem pH fall, rigor mortis and some quality traits of excised porcine biceps femoris muscle strips.

    Science.gov (United States)

    Vada-Kovács, M

    1996-01-01

    Porcine biceps femoris strips of 10 cm original length were stretched by 50% and fixed within 1 hr post mortem, then subjected to temperatures of 4°, 15° or 36 °C until they attained their ultimate pH. Unrestrained control muscle strips, which were left to shorten freely, were treated similarly. Post-mortem metabolism (pH, R-value) and shortening were recorded; thereafter, ultimate meat quality traits (pH, lightness, extraction and swelling of myofibrils) were determined. The rate of pH fall at 36 °C, as well as ATP breakdown at 36 and 4 °C, were significantly reduced by pre-rigor stretch. The relationship between R-value and pH indicated cold shortening at 4 °C. Myofibrils isolated from pre-rigor stretched muscle strips kept at 36 °C showed the most severe reduction of hydration capacity, while paleness remained below extreme values. However, pre-rigor stretched myofibrils, when stored at 4 °C, proved to be superior to shortened ones in their extractability and swelling.

  4. Cross-validated stable-isotope dilution GC-MS and LC-MS/MS assays for monoacylglycerol lipase (MAGL) activity by measuring arachidonic acid released from the endocannabinoid 2-arachidonoyl glycerol.

    Science.gov (United States)

    Kayacelebi, Arslan Arinc; Schauerte, Celina; Kling, Katharina; Herbers, Jan; Beckmann, Bibiana; Engeli, Stefan; Jordan, Jens; Zoerner, Alexander A; Tsikas, Dimitrios

    2017-03-15

    2-Arachidonoyl glycerol (2AG) is an endocannabinoid that activates the cannabinoid (CB) receptors CB1 and CB2. Monoacylglycerol lipase (MAGL) inactivates 2AG through hydrolysis to arachidonic acid (AA) and glycerol, thus modulating the activity at CB receptors. In the brain, AA released from 2AG by the action of MAGL serves as a substrate for cyclooxygenases, which produce pro-inflammatory prostaglandins. Here we report stable-isotope GC-MS and LC-MS/MS assays for the reliable measurement of MAGL activity. The assays utilize deuterium-labeled 2AG (d8-2AG; 10 μM) as the MAGL substrate and measure deuterium-labeled AA (d8-AA; range 0-1 μM) as the MAGL product. Unlabelled AA (d0-AA, 1 μM) serves as the internal standard. d8-AA and d0-AA are extracted from the aqueous buffered incubation mixtures by ethyl acetate. Upon solvent evaporation, the residue is reconstituted in the mobile phase prior to LC-MS/MS analysis or in anhydrous acetonitrile for GC-MS analysis. LC-MS/MS analysis is performed in the negative electrospray ionization mode by selected-reaction monitoring of the mass transitions [M-H]− → [M-H-CO2]−, i.e., m/z 311 → m/z 267 for d8-AA and m/z 303 → m/z 259 for d0-AA. Prior to GC-MS analysis, d8-AA and d0-AA were converted to their pentafluorobenzyl (PFB) esters by means of PFB-Br. GC-MS analysis is performed in the electron-capture negative-ion chemical ionization mode by selected-ion monitoring of the ions [M-PFB]−, i.e., m/z 311 for d8-AA and m/z 303 for d0-AA. The GC-MS and LC-MS/MS assays were cross-validated. Linear regression analysis between the concentration (range, 0-1 μM) of d8-AA measured by LC-MS/MS (y) and that by GC-MS (x) revealed a straight line (r² = 0.9848) with the regression equation y = 0.003 + 0.898x, indicating good agreement. In dog liver, we detected MAGL activity that was inhibitable by the MAGL inhibitor JZL-184. Exogenous eicosatetraynoic acid is suitable as internal standard for the quantitative determination

  5. Reconsideration of the sequence of rigor mortis through postmortem changes in adenosine nucleotides and lactic acid in different rat muscles.

    Science.gov (United States)

    Kobayashi, M; Takatori, T; Iwadate, K; Nakajima, M

    1996-10-25

    We examined the changes in adenosine triphosphate (ATP), lactic acid, adenosine diphosphate (ADP) and adenosine monophosphate (AMP) in five different rat muscles after death. Rigor mortis has been thought to develop uniformly across the muscles of a body and hence to be completed in small muscles sooner than in large muscles. In this study we found that the rate of decrease in ATP was significantly different in each muscle. The greatest drop in ATP was observed in the masseter muscle. These findings contradict the conventional theory of rigor mortis. Similarly, the rates of change in ADP and lactic acid, which are thought to be related to the consumption or production of ATP, were different in each muscle. However, the rate of change of AMP was the same in each muscle.

  6. Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor

    Science.gov (United States)

    Nathues, Christina; Würbel, Hanno

    2016-01-01

    animal experiments are lacking important information about experimental conduct that determines the scientific validity of the findings, which may be critical for the weight attributed to the benefit of the research in the harm–benefit analysis. Similar to manuscripts getting accepted for publication despite poor reporting of measures against bias, applications for animal experiments may often be approved based on implicit confidence rather than explicit evidence of scientific rigor. Our findings shed serious doubt on the current authorization procedure for animal experiments, as well as the peer-review process for scientific publications, which in the long run may undermine the credibility of research. Developing existing authorization procedures that are already in place in many countries towards a preregistration system for animal research is one promising way to reform the system. This would not only benefit the scientific validity of findings from animal experiments but also help to avoid unnecessary harm to animals for inconclusive research. PMID:27911892

  7. Assessment of the Methodological Rigor of Case Studies in the Field of Management Accounting Published in Journals in Brazil

    Directory of Open Access Journals (Sweden)

    Kelly Cristina Mucio Marques

    2015-04-01

    Full Text Available This study aims to assess the methodological rigor of case studies in management accounting published in Brazilian journals. The study is descriptive. The data were collected using documentary research and content analysis, and 180 papers published from 2008 to 2012 in accounting journals rated as A2, B1, and B2 that were classified as case studies were selected. Based on the literature, we established a set of 15 criteria that we expected to be identified (either explicitly or implicitly) in the case studies in order to classify them as appropriate from the standpoint of methodological rigor. These criteria were only partially met by the papers analyzed. The aspects least aligned with those proposed in the literature were the following: little emphasis on justifying the need to understand phenomena in context; lack of explanation of the reason for choosing the case study strategy; the predominant use of questions that do not enable deeper analysis; many studies based on only one source of evidence; little use of data and information triangulation; little emphasis on the data collection method; a high number of cases in which confusion between the case study as a research strategy and as a data collection method was detected; a low number of papers reporting the method of data analysis; few reports on a study's contributions; and a minority highlighting the issues requiring further research. In conclusion, the way case studies are applied in management accounting must be improved, because few studies showed rigorous application of the procedures that this strategy requires.

  8. The rigorous stochastic matrix multiplication scheme for the calculations of reduced equilibrium density matrices of open multilevel quantum systems

    International Nuclear Information System (INIS)

    Chen, Xin

    2014-01-01

    Understanding the roles of the temporal and spatial structures of quantum functional noise in open multilevel quantum molecular systems has attracted considerable theoretical interest. I want to establish a rigorous and general framework for functional quantum noises from the constructive and computational perspectives, i.e., how to generate the random trajectories that reproduce the kernel and path ordering of the influence functional with effective Monte Carlo methods for arbitrary spectral densities. This construction approach aims to unify the existing stochastic models so as to rigorously describe the temporal and spatial structure of Gaussian quantum noises. In this paper, I review the Euclidean imaginary time influence functional and propose the stochastic matrix multiplication scheme to calculate reduced equilibrium density matrices (REDM). In addition, I review and discuss the Feynman-Vernon influence functional according to the Gaussian quadratic integral, particularly its imaginary part, which is critical to the rigorous description of the quantum detailed balance. As a result, I establish the conditions under which the influence functional can be interpreted as the average of an exponential functional operator over real-valued Gaussian processes for open multilevel quantum systems. I also show the difference between local and nonlocal phonons within this framework. With the stochastic matrix multiplication scheme, I compare the normalized REDM with the Boltzmann equilibrium distribution for open multilevel quantum systems.

  9. Effects of well-boat transportation on the muscle pH and onset of rigor mortis in Atlantic salmon.

    Science.gov (United States)

    Gatica, M C; Monti, G; Gallo, C; Knowles, T G; Warriss, P D

    2008-07-26

    During the transport of salmon (Salmo salar) in a well-boat, 10 fish were sampled at each of six stages: in cages after crowding at the farm (stage 1), in the well-boat after loading (stage 2), in the well-boat after eight hours of transport and before unloading (stage 3), in the resting cages immediately after unloading (stage 4), after 24 hours of resting in cages (stage 5), and in the processing plant after pumping from the resting cages (stage 6). The water in the well-boat was at ambient temperature with recirculation to the sea. At each stage the fish were stunned percussively and bled by gill cutting. Immediately after death, and then every three hours for 18 hours, the muscle pH and rigor index of the fish were measured. At successive stages the initial muscle pH of the fish decreased, except for a slight gain at stage 5, after they had been rested for 24 hours. The lowest initial muscle pH was observed at stage 6. The fishes' rigor index showed that rigor developed more quickly at each successive stage, except for a slight decrease in rate at stage 5, attributable to the recovery of muscle reserves.

  10. Seizing the Future: How Ohio's Career-Technical Education Programs Fuse Academic Rigor and Real-World Experiences to Prepare Students for College and Careers

    Science.gov (United States)

    Guarino, Heidi; Yoder, Shaun

    2015-01-01

    "Seizing the Future: How Ohio's Career and Technical Education Programs Fuse Academic Rigor and Real-World Experiences to Prepare Students for College and Work," demonstrates Ohio's progress in developing strong policies for career and technical education (CTE) programs to promote rigor, including college- and career-ready graduation…

  11. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    Science.gov (United States)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

    Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining the EOPs by correcting the attitude angle bias, 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) products are automatically generated.

  12. Muscle pH, rigor mortis and blood variables in Atlantic salmon transported in two types of well-boat.

    Science.gov (United States)

    Gatica, M C; Monti, G E; Knowles, T G; Gallo, C B

    2010-01-09

    Two systems for transporting live salmon (Salmo salar) were compared in terms of their effects on blood variables, muscle pH and rigor index: an 'open system' well-boat with recirculated sea water at 13.5 degrees C and a stocking density of 107 kg/m3 during an eight-hour journey, and a 'closed system' well-boat with water chilled from 16.7 to 2.1 degrees C and a stocking density of 243.7 kg/m3 during a seven-hour journey. Groups of 10 fish were sampled at each of four stages: in cages at the farm, in the well-boat after loading, in the well-boat after the journey and before unloading, and in the processing plant after they were pumped from the resting cages. At each sampling, the fish were stunned and bled by gill cutting. Blood samples were taken to measure lactate, osmolality, chloride, sodium, cortisol and glucose, and their muscle pH and rigor index were measured at death and three hours later. In the open system well-boat, the initial muscle pH of the fish decreased at each successive stage, and at the final stage they had a significantly lower initial muscle pH and more rapid onset of rigor than the fish transported on the closed system well-boat. At the final stage all the blood variables except glucose were significantly affected in the fish transported on both types of well-boat.

  13. RIGOROUS PHOTOGRAMMETRIC PROCESSING OF CHANG'E-1 AND CHANG'E-2 STEREO IMAGERY FOR LUNAR TOPOGRAPHIC MAPPING

    Directory of Open Access Journals (Sweden)

    K. Di

    2012-07-01

    Full Text Available Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining the EOPs by correcting the attitude angle bias, 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) products are automatically generated.

  14. Study designs for identifying risk compensation behavior among users of biomedical HIV prevention technologies: balancing methodological rigor and research ethics.

    Science.gov (United States)

    Underhill, Kristen

    2013-10-01

    The growing evidence base for biomedical HIV prevention interventions - such as oral pre-exposure prophylaxis, microbicides, male circumcision, treatment as prevention, and eventually prevention vaccines - has given rise to concerns about the ways in which users of these biomedical products may adjust their HIV risk behaviors based on the perception that they are prevented from infection. Known as risk compensation, this behavioral adjustment draws on the theory of "risk homeostasis," which has previously been applied to phenomena as diverse as Lyme disease vaccination, insurance mandates, and automobile safety. Little rigorous evidence exists to answer risk compensation concerns in the biomedical HIV prevention literature, in part because the field has not systematically evaluated the study designs available for testing these behaviors. The goals of this Commentary are to explain the origins of risk compensation behavior in risk homeostasis theory, to reframe risk compensation as a testable response to the perception of reduced risk, and to assess the methodological rigor and ethical justification of study designs aiming to isolate risk compensation responses. Although the most rigorous methodological designs for assessing risk compensation behavior may be unavailable due to ethical flaws, several strategies can help investigators identify potential risk compensation behavior during Phase II, Phase III, and Phase IV testing of new technologies. Where concerns arise regarding risk compensation behavior, empirical evidence about the incidence, types, and extent of these behavioral changes can illuminate opportunities to better support the users of new HIV prevention strategies. This Commentary concludes by suggesting a new way to conceptualize risk compensation behavior in the HIV prevention context. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Rigorous lower bounds on the imaginary parts of the scattering amplitudes and the positions of their zeros

    CERN Document Server

    Uchiyama, T

    1974-01-01

    Rigorous lower bounds are derived from axiomatic field theory, by invoking analyticity and unitarity of the S-matrix. The bounds are expressed in terms of the total cross section and the slope parameter, and are found to be compatible with CERN experimental pp scattering data. It is also shown that the calculated lower-bound values imply non-existence of zeros for -t

  16. Why so many "rigorous" evaluations fail to identify unintended consequences of development programs: How mixed methods can contribute.

    Science.gov (United States)

    Bamberger, Michael; Tarsilla, Michele; Hesse-Biber, Sharlene

    2016-04-01

    Many widely-used impact evaluation designs, including randomized control trials (RCTs) and quasi-experimental designs (QEDs), frequently fail to detect what are often quite serious unintended consequences of development programs. This seems surprising, as experienced planners and evaluators are well aware that unintended consequences frequently occur. Most evaluation designs are intended to determine whether there is credible evidence (statistical, theory-based or narrative) that programs have achieved their intended objectives, and the logic of many evaluation designs, even those considered the most "rigorous," does not permit the identification of outcomes that were not specified in the program design. We take the example of RCTs, as they are considered by many to be the most rigorous evaluation designs. We present a number of cases to illustrate how infusing RCTs with a mixed-methods approach (sometimes called an "RCT+" design) can strengthen the credibility of these designs and can also capture important unintended consequences. We provide a Mixed Methods Evaluation Framework that identifies 9 ways in which unintended consequences (UCs) can occur, and we apply this framework to two of the case studies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Changes in the contractile state, fine structure and metabolism of cardiac muscle cells during the development of rigor mortis.

    Science.gov (United States)

    Vanderwee, M A; Humphrey, S M; Gavin, J B; Armiger, L C

    1981-01-01

    Transmural slices from the left anterior papillary muscle of dog hearts were maintained for 120 min in a moist atmosphere at 37 degrees C. At 15-min intervals tissue samples were taken for estimation of adenosine triphosphate (ATP) and glucose-6-phosphate (G6P) and for electron microscopic examination. At the same time the deformability under standard load of comparable regions of an adjacent slice of tissue was measured. ATP levels fell rapidly during the first 45 to 75 min after excision of the heart. During a subsequent further decline in ATP, the mean deformability of myocardium fell from 30 to 12% indicating the development of rigor mortis. Conversely, G6P levels increased during the first decline in adenosine triphosphate but remained relatively steady thereafter. Whereas many of the myocardial cells fixed after 5 min contracted on contact with glutaraldehyde, all cells examined after 15 to 40 min were relaxed. A progressive increase in the proportion of contracted cells was observed during the rapid increase in myocardial rigidity. During this late contraction the cells showed morphological evidence of irreversible injury. These findings suggest that ischaemic myocytes contract just before actin and myosin become strongly linked to maintain the state of rigor mortis.

  18. Improved rigorous upper bounds for transport due to passive advection described by simple models of bounded systems

    International Nuclear Information System (INIS)

    Kim, Chang-Bae; Krommes, J.A.

    1988-08-01

    The work of Krommes and Smith on rigorous upper bounds for the turbulent transport of a passively advected scalar [Ann. Phys. 177:246 (1987)] is extended in two directions: (1) For their ''reference model,'' improved upper bounds are obtained by utilizing more sophisticated two-time constraints which include the effects of cross-correlations up to fourth order. Numerical solutions of the model stochastic differential equation are also obtained; they show that the new bounds compare quite favorably with the exact results, even at large Reynolds and Kubo numbers. (2) The theory is extended to take account of a finite spatial autocorrelation length L_c. As a reasonably generic example, the problem of particle transport due to statistically specified stochastic magnetic fields in a collisionless turbulent plasma is revisited. A bound is obtained which reduces for small L_c to the quasilinear limit and for large L_c to the strong turbulence limit, and which provides a reasonable and rigorous interpolation for intermediate values of L_c. 18 refs., 6 figs

  19. Smoothing of Transport Plans with Fixed Marginals and Rigorous Semiclassical Limit of the Hohenberg-Kohn Functional

    Science.gov (United States)

    Cotar, Codina; Friesecke, Gero; Klüppelberg, Claudia

    2018-06-01

    We prove rigorously that the exact N-electron Hohenberg-Kohn density functional converges in the strongly interacting limit to the strictly correlated electrons (SCE) functional, and that the absolute value squared of the associated constrained-search wavefunction tends weakly, in the sense of probability measures, to a minimizer of the multi-marginal optimal transport problem with Coulomb cost associated to the SCE functional. This extends our previous work for N = 2 (Cotar et al. in Commun. Pure Appl. Math. 66:548-599, 2013). The correct limit problem has been derived in the physics literature by Seidl (Phys. Rev. A 60:4387-4395, 1999) and Seidl, Gori-Giorgi and Savin (Phys. Rev. A 75:042511 1-12, 2007); in these papers the lack of a rigorous proof was pointed out. We also give a mathematical counterexample to this type of result, by replacing the constraint of given one-body density, an infinite-dimensional quadratic expression in the wavefunction, by an infinite-dimensional quadratic expression in the wavefunction and its gradient. Connections with the Lavrentiev phenomenon in the calculus of variations are indicated.
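
    In symbols, the SCE functional that appears as the limit object is the optimal value of a multi-marginal transport problem with Coulomb cost. The LaTeX rendering below follows standard SCE-literature notation and is an editorial gloss, not a formula quoted from the paper:

      \[
        V_{ee}^{\mathrm{SCE}}[\rho] \;=\; \min_{\gamma \in \Pi_N(\rho)}
        \int_{\mathbb{R}^{3N}} \sum_{1 \le i < j \le N} \frac{1}{|r_i - r_j|}\,
        \mathrm{d}\gamma(r_1,\dots,r_N),
      \]

      where \Pi_N(\rho) denotes the set of probability measures on \mathbb{R}^{3N} whose one-body marginals all equal \rho/N.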

  20. The Researchers' View of Scientific Rigor-Survey on the Conduct and Reporting of In Vivo Research.

    Science.gov (United States)

    Reichlin, Thomas S; Vogt, Lucile; Würbel, Hanno

    2016-01-01

    Reproducibility in animal research is alarmingly low, and a lack of scientific rigor has been proposed as a major cause. Systematic reviews found low reporting rates of measures against risks of bias (e.g., randomization, blinding), and a correlation between low reporting rates and overstated treatment effects. Reporting rates of measures against bias are thus used as a proxy measure for scientific rigor, and reporting guidelines (e.g., ARRIVE) have become a major weapon in the fight against risks of bias in animal research. Surprisingly, animal scientists have never been asked about their use of measures against risks of bias and how they report these in publications. Whether poor reporting reflects poor use of such measures, and whether reporting guidelines may effectively reduce risks of bias has therefore remained elusive. To address these questions, we asked in vivo researchers about their use and reporting of measures against risks of bias and examined how self-reports relate to reporting rates obtained through systematic reviews. An online survey was sent out to all registered in vivo researchers in Switzerland (N = 1891) and was complemented by personal interviews with five representative in vivo researchers to facilitate interpretation of the survey results. Return rate was 28% (N = 530), of which 302 participants (16%) returned fully completed questionnaires that were used for further analysis. According to the researchers' self-report, they use measures against risks of bias to a much greater extent than suggested by reporting rates obtained through systematic reviews. However, the researchers' self-reports are likely biased to some extent. Thus, although they claimed to be reporting measures against risks of bias at much lower rates than they claimed to be using these measures, the self-reported reporting rates were considerably higher than reporting rates found by systematic reviews. Furthermore, participants performed rather poorly when asked to

  1. The influence of low temperature, type of muscle and electrical stimulation on the course of rigor mortis, ageing and tenderness of beef muscles.

    Science.gov (United States)

    Olsson, U; Hertzman, C; Tornberg, E

    1994-01-01

    The course of rigor mortis, ageing and tenderness have been evaluated for two beef muscles, M. semimembranosus (SM) and M. longissimus dorsi (LD), when entering rigor at constant temperatures in the cold-shortening region (1, 4, 7 and 10°C). The influence of electrical stimulation (ES) was also examined. Post-mortem changes were registered by shortening and isometric tension and by following the decline of pH, ATP and creatine phosphate. The effect of ageing on tenderness was recorded by measuring shear-force (2, 8 and 15 days post mortem) and the sensory properties were assessed 15 days post mortem. It was found that shortening increased with decreasing temperature, resulting in decreased tenderness. Tenderness for LD, but not for SM, was improved by ES at 1 and 4°C, whereas ES did not give rise to any decrease in the degree of shortening during rigor mortis development. This suggests that ES influences tenderization more than it prevents cold-shortening. The samples with a pre-rigor mortis temperature of 1°C could not be tenderized, when stored up to 15 days, whereas this was the case for the muscles entering rigor mortis at the other higher temperatures. The results show that under the conditions used in this study, the course of rigor mortis is more important for the ultimate tenderness than the course of ageing. Copyright © 1994. Published by Elsevier Ltd.

  2. The rigorous bound on the transmission probability for massless scalar field of non-negative-angular-momentum mode emitted from a Myers-Perry black hole

    Energy Technology Data Exchange (ETDEWEB)

    Ngampitipan, Tritos, E-mail: tritos.ngampitipan@gmail.com [Faculty of Science, Chandrakasem Rajabhat University, Ratchadaphisek Road, Chatuchak, Bangkok 10900 (Thailand); Particle Physics Research Laboratory, Department of Physics, Faculty of Science, Chulalongkorn University, Phayathai Road, Patumwan, Bangkok 10330 (Thailand); Boonserm, Petarpa, E-mail: petarpa.boonserm@gmail.com [Department of Mathematics and Computer Science, Faculty of Science, Chulalongkorn University, Phayathai Road, Patumwan, Bangkok 10330 (Thailand); Chatrabhuti, Auttakit, E-mail: dma3ac2@gmail.com [Particle Physics Research Laboratory, Department of Physics, Faculty of Science, Chulalongkorn University, Phayathai Road, Patumwan, Bangkok 10330 (Thailand); Visser, Matt, E-mail: matt.visser@msor.vuw.ac.nz [School of Mathematics, Statistics, and Operations Research, Victoria University of Wellington, PO Box 600, Wellington 6140 (New Zealand)

    2016-06-02

    Hawking radiation is evidence for the existence of black holes, and what an observer can measure through Hawking radiation is the transmission probability. In the laboratory, miniature black holes could conceivably be generated; such generated black holes would, most commonly, be Myers-Perry black holes. In this paper, we derive rigorous bounds on the transmission probabilities for massless scalar fields of non-negative-angular-momentum modes emitted from a generated Myers-Perry black hole in six, seven, and eight dimensions. The results show that at low energy the rigorous bounds increase with the energy of the emitted particles, whereas at high energy the bounds decrease with increasing energy. When the black holes spin faster, the rigorous bounds decrease. As for dimension dependence, the rigorous bounds also decrease with an increasing number of extra dimensions. Furthermore, in comparison with the approximate transmission probability, the rigorous bound proves to be useful.

  3. The rigorous bound on the transmission probability for massless scalar field of non-negative-angular-momentum mode emitted from a Myers-Perry black hole

    International Nuclear Information System (INIS)

    Ngampitipan, Tritos; Boonserm, Petarpa; Chatrabhuti, Auttakit; Visser, Matt

    2016-01-01

    Hawking radiation is evidence for the existence of black holes, and what an observer can measure through Hawking radiation is the transmission probability. In the laboratory, miniature black holes could conceivably be generated; such generated black holes would, most commonly, be Myers-Perry black holes. In this paper, we derive rigorous bounds on the transmission probabilities for massless scalar fields of non-negative-angular-momentum modes emitted from a generated Myers-Perry black hole in six, seven, and eight dimensions. The results show that at low energy the rigorous bounds increase with the energy of the emitted particles, whereas at high energy the bounds decrease with increasing energy. When the black holes spin faster, the rigorous bounds decrease. As for dimension dependence, the rigorous bounds also decrease with an increasing number of extra dimensions. Furthermore, in comparison with the approximate transmission probability, the rigorous bound proves to be useful.

  4. A rigorous approach to investigating common assumptions about disease transmission: Process algebra as an emerging modelling methodology for epidemiology.

    Science.gov (United States)

    McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron

    2011-03-01

    Changing scale, for example, the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra as a novel solution to this problem in the context of models of infectious disease spread. Process algebra allows us to describe a system in terms of the stochastic behaviour of individuals, and is a technique from computer science. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.

  5. pd Scattering Using a Rigorous Coulomb Treatment: Reliability of the Renormalization Method for Screened-Coulomb Potentials

    International Nuclear Information System (INIS)

    Hiratsuka, Y.; Oryu, S.; Gojuki, S.

    2011-01-01

    The reliability of the screened-Coulomb renormalization method, proposed in an elegant way by Alt-Sandhas-Zankel-Ziegelmann (ASZZ), is discussed on the basis of 'two-potential theory' for the three-body AGS equations with the Coulomb potential. In order to obtain ASZZ's formula, we define the on-shell Moller function and calculate it by using the Haeringen criterion, i.e., 'the half-shell Coulomb amplitude is zero'. By these two steps, we can finally obtain the ASZZ formula for a small Coulomb phase shift. Furthermore, the reliability of the Haeringen criterion is thoroughly checked by a numerically rigorous calculation for the Coulomb LS-type equation. We find that the Haeringen criterion can be satisfied only in the higher energy region. We conclude that the ASZZ method can be verified in the case that the on-shell approximation to the Moller function is reasonable and the Haeringen criterion is reliable. (author)

  6. Forced oral opening for cadavers with rigor mortis: two approaches for the myotomy on the temporal muscles.

    Science.gov (United States)

    Nakayama, Y; Aoki, Y; Niitsu, H; Saigusa, K

    2001-04-15

    Forensic dentistry plays an essential role in personal identification procedures. An adequate interincisal space of cadavers with rigor mortis is required to obtain detailed dental findings. We have developed intraoral and two directional approaches, for myotomy of the temporal muscles. The intraoral approach, in which the temporalis was dissected with scissors inserted via an intraoral incision, was adopted for elderly cadavers, females and emaciated or exhausted bodies, and had a merit of no incision on the face. The two directional approach, in which myotomy was performed with thread-wire saw from behind and with scissors via the intraoral incision, was designed for male muscular youths. Both approaches were effective to obtain a desired degree of an interincisal opening without facial damage.

  7. Optical Properties of Complex Plasmonic Materials Studied with Extended Effective Medium Theories Combined with Rigorous Coupled Wave Analysis

    Directory of Open Access Journals (Sweden)

    Elie Nadal

    2018-02-01

    Full Text Available In this study we fabricate gold nanocomposites and model their optical properties. The nanocomposites are either homogeneous films or gratings containing gold nanoparticles embedded in a polymer matrix. The samples are fabricated using a recently developed technique making use of laser interferometry. The gratings present original plasmon-enhanced diffraction properties. In this work, we develop a new approach to model the optical properties of our composites. We combine the extended Maxwell–Garnett model of effective media with the Rigorous Coupled Wave Analysis (RCWA) method and compute both the absorption spectra and the diffraction efficiency spectra of the gratings. We show that such a semi-analytical approach allows us to reproduce the original plasmonic features of the composites and can provide us with details about their inner structure. Such an approach, considering reasonably high particle concentrations, could be a simple and efficient tool to study complex micro-structured systems based on plasmonic components, such as metamaterials.
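
    For orientation, the classic (non-extended) Maxwell–Garnett mixing rule for spherical inclusions can be written down directly. The Python sketch below covers only this textbook starting point, not the extended model or the RCWA coupling used in the paper, and the permittivity values are assumed, not taken from the study.

      def maxwell_garnett(eps_i, eps_m, f):
          # Effective permittivity of a volume fraction f of spherical
          # inclusions eps_i dispersed in a host medium eps_m.
          num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
          den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
          return eps_m * num / den

      # Gold-like inclusions (assumed complex permittivity near 600 nm) at 5%
      # volume fraction in a polymer host (eps ~ 2.25).
      print(maxwell_garnett(complex(-9.4, 1.5), 2.25, 0.05))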

  8. Rigorous project for existing houses. Energy conservation requires evolution; Rigoureus project voor bestaande woningen. Evolutie voor energiebesparing nodig

    Energy Technology Data Exchange (ETDEWEB)

    Clocquet, R. [DHV, Amersfoort (Netherlands); Koene, F. [ECN Efficiency and Infrastructure, Petten (Netherlands)

    2010-05-15

    How can existing terraced houses be renovated in such a way that their total energy use decreases by 75 percent? The Rigorous ('Rigoureus') project of the Energy research Centre of the Netherlands (ECN), TNO, Delft University of Technology and DHV developed innovative renovation concepts that make such savings feasible by combining constructional measures with installation concepts. On top of that, it proved essential to also address consumer behavior.

  9. Rigorous derivation of the mean-field green functions of the two-band Hubbard model of superconductivity

    International Nuclear Information System (INIS)

    Adam, G.; Adam, S.

    2007-01-01

    The Green function (GF) equation-of-motion technique for solving the effective two-band Hubbard model of high-Tc superconductivity in cuprates rests on the Hubbard operator (HO) algebra. We show that, if we take into account the invariance to translations and spin reversal, the HO algebra results in invariance properties of several specific correlation functions. The use of these properties allows rigorous derivation and simplification of the expressions of the frequency matrix (FM) and of the generalized mean-field approximation (GMFA) Green functions (GFs) of the model. For the normal singlet hopping and anomalous exchange pairing correlation functions which enter the FM and GMFA-GFs, the use of spectral representations allows the identification and elimination of exponentially small quantities. This procedure secures the reduction of the correlation order in the GMFA-GF expressions.

  10. Importance of All-in-one (MCNPX2.7.0+CINDER2008) Code for Rigorous Transmutation Study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Oyeon [Institute for Modeling and Simulation Convergence, Daegu (Korea, Republic of); Kim, Kwanghyun [RadTek Co. Ltd., Daejeon (Korea, Republic of)

    2015-10-15

    Transmutation can be utilized as a mechanism for reducing the volume and hazard of radioactive waste by transforming hazardous radioactive elements with long half-lives into less hazardous elements with short half-lives. Understanding the transmutation mechanism, and the associated machinery design technologies, is therefore important and useful. Although the term transmutation goes back to the alchemy of the Middle Ages, which sought to transform base metals into gold, Rutherford and Soddy were the first to observe natural transmutation, as part of alpha-type radioactive decay, in the early 20th century. Along with the development of computing technology, analysis software such as CINDER was developed for rigorous transmutation studies. The code has a long history of development from the original work of T. England at Bettis Atomic Power Laboratory (BAPL) in the early 1960s, and it has been used to calculate the inventory of nuclides in an irradiated material. The more recently released CINDER'90 upgraded the code to allow spontaneous tracking of chains based upon the significant density or pass-by of a nuclide, where pass-by represents the density of a nuclide transforming to other nuclides. The nuclear transmutation process is governed by highly non-linear differential equations, and the sensitive nature of these equations underscores the importance of accurate input data (i.e., the number of significant digits carried). Reducing human intervention is therefore very important for rigorous transmutation studies, and an 'all-in-one' code structure is desired. Note that the non-linearity of the transmutation equations caused by flux changes due to number-density changes during a given time interval (an intrinsic physical phenomenon) is not considered in this study; we emphasize only the effects of human intervention in the computing process used to solve the non-linear differential equations, as shown in
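
    To make the governing system concrete, the sketch below solves a toy three-nuclide chain A -> B -> C with a matrix exponential; inventory codes of the CINDER class solve far larger, flux-coupled versions of the same linear system at each time step, and all constants here are illustrative only.

        # Toy Bateman chain A -> B -> C (C stable); illustrative constants.
        import numpy as np
        from scipy.linalg import expm

        lam_a, lam_b = 1e-3, 5e-4          # decay constants, 1/s
        A = np.array([[-lam_a,  0.0,   0.0],
                      [ lam_a, -lam_b, 0.0],
                      [ 0.0,    lam_b, 0.0]])
        n0 = np.array([1e20, 0.0, 0.0])    # initial number densities
        print(expm(A * 3600.0) @ n0)       # densities after one hour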

  11. The Challenge of Timely, Responsive and Rigorous Ethics Review of Disaster Research: Views of Research Ethics Committee Members.

    Directory of Open Access Journals (Sweden)

    Matthew Hunt

    Research conducted following natural disasters such as earthquakes, floods or hurricanes is crucial for improving relief interventions. Such research, however, poses ethical, methodological and logistical challenges for researchers. Oversight of disaster research also poses challenges for research ethics committees (RECs), in part due to the rapid turnaround needed to initiate research after a disaster. Currently, there is limited knowledge available about how RECs respond to and appraise disaster research. To address this knowledge gap, we investigated the experiences of REC members who had reviewed disaster research conducted in low- or middle-income countries. We used interpretive description methodology and conducted in-depth interviews with 15 respondents. Respondents were chairs, members, advisors, or coordinators from 13 RECs, including RECs affiliated with universities, governments, international organizations, a for-profit REC, and an ad hoc committee established during a disaster. Interviews were analyzed inductively using constant comparative techniques. Through this process, three elements were identified as characterizing effective and high-quality review: timeliness, responsiveness and rigorousness. To ensure timeliness, many RECs rely on adaptations of review procedures for urgent protocols. Respondents emphasized that responsive review requires awareness of and sensitivity to the particularities of disaster settings and disaster research. Rigorous review was linked with providing careful assessment of ethical considerations related to the research, as well as ensuring independence of the review process. Both the frequency of disasters and the conduct of disaster research are on the rise. Ensuring effective and high quality review of disaster research is crucial, yet challenges, including time pressures for urgent protocols, exist for achieving this goal. Adapting standard REC procedures may be necessary. However, steps should be

  12. A rigorous approach to facilitate and guarantee the correctness of the genetic testing management in human genome information systems.

    Science.gov (United States)

    Araújo, Luciano V; Malkowski, Simon; Braghetto, Kelly R; Passos-Bueno, Maria R; Zatz, Mayana; Pu, Calton; Ferreira, João E

    2011-12-22

    Recent medical and biological technology advances have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) correctness of processes, ensured using process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have proved the feasibility and shown the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing using easy end-user interfaces.

  13. Meat quality and rigor mortis development in broiler chickens with gas-induced anoxia and postmortem electrical stimulation.

    Science.gov (United States)

    Sams, A R; Dzuik, C S

    1999-10-01

    This study was conducted to evaluate the combined rigor-accelerating effects of postmortem electrical stimulation (ES) and argon-induced anoxia (Ar) in broiler chickens. One hundred broilers were processed in the following treatments: untreated controls, ES, Ar, or Ar with ES (Ar + ES). Breast fillets were harvested at 1 h postmortem for all treatments or at 1 and 6 h postmortem for the control carcasses. Fillets were sampled for pH and ratio of inosine to adenosine (R-value) and were then individually quick frozen (IQF) or aged on ice (AOI) until 24 h postmortem. Color was measured in the AOI fillets at 24 h postmortem. All fillets were then cooked and evaluated for Allo-Kramer shear value. The Ar treatment accelerated the normal pH decline, whereas the ES and Ar + ES treatments yielded even lower pH values at 1 h postmortem. The Ar + ES treatment had a greater R-value than the ES treatment, which was greater than either the Ar or 1-h controls, which, in turn, were not different from each other. The ES treatment had the lowest L* value, and ES, Ar, and Ar + ES produced significantly higher a* values than the 1-h controls. For the IQF fillets, the ES and Ar + ES treatments were not different in shear value but were lower than Ar, which was lower than the 1-h controls. The same was true for the AOI fillets, except that the ES and Ar treatments were not different. These results indicated that although ES and Ar had rigor-accelerating and tenderizing effects, ES seemed to be more effective than Ar; there was little enhancement when Ar was added to the ES treatment and fillets were deboned at 1 h postmortem.

  14. Biclustering via optimal re-ordering of data matrices in systems biology: rigorous methods and comparative studies

    Directory of Open Access Journals (Sweden)

    Feng Xiao-Jiang

    2008-10-01

    Background: The analysis of large-scale data sets via clustering techniques is utilized in a number of applications. Biclustering in particular has emerged as an important problem in the analysis of gene expression data, since genes may only jointly respond over a subset of conditions. Biclustering algorithms also have important applications in sample classification where, for instance, tissue samples can be classified as cancerous or normal. Many of the methods for biclustering, and clustering algorithms in general, utilize simplified models or heuristic strategies for identifying the "best" grouping of elements according to some metric and cluster definition and thus result in suboptimal clusters. Results: In this article, we present a rigorous approach to biclustering, OREO, which is based on the Optimal RE-Ordering of the rows and columns of a data matrix so as to globally minimize the dissimilarity metric. The physical permutations of the rows and columns of the data matrix can be modeled as either a network flow problem or a traveling salesman problem. Cluster boundaries in one dimension are used to partition and re-order the other dimensions of the corresponding submatrices to generate biclusters. The performance of OREO is tested on (a) metabolite concentration data, (b) an image reconstruction matrix, (c) synthetic data with implanted biclusters, and gene expression data for (d) colon cancer data, (e) breast cancer data, as well as (f) yeast segregant data to validate the ability of the proposed method and compare it to existing biclustering and clustering methods. Conclusion: We demonstrate that this rigorous global optimization method for biclustering produces clusters with more insightful groupings of similar entities, such as genes or metabolites sharing common functions, than other clustering and biclustering algorithms and can reconstruct underlying fundamental patterns in the data for several distinct sets of data matrices arising
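
    As a rough illustration of the re-ordering idea, the sketch below orders rows greedily so that adjacent rows are as similar as possible; this is only a nearest-neighbour heuristic stand-in for OREO's global network-flow/TSP optimization, with random data in place of real measurements.

        # Greedy nearest-neighbour row re-ordering: a heuristic illustration of
        # ordering rows so adjacent rows minimize pairwise dissimilarity.
        import numpy as np

        def reorder_rows(X):
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
            order, left = [0], set(range(1, X.shape[0]))
            while left:
                nxt = min(left, key=lambda j: d[order[-1], j])
                order.append(nxt)
                left.remove(nxt)
            return order

        X = np.random.rand(10, 5)   # stand-in data matrix
        print(reorder_rows(X))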

  15. Heterogeneous nucleation on convex spherical substrate surfaces: A rigorous thermodynamic formulation of Fletcher's classical model and the new perspectives derived.

    Science.gov (United States)

    Qian, Ma; Ma, Jie

    2009-06-07

    Fletcher's spherical substrate model [J. Chem. Phys. 29, 572 (1958)] is a basic model for understanding heterogeneous nucleation phenomena in nature. However, a rigorous thermodynamic formulation of the model has been missing due to the significant complexities involved. This has not only left the classical model deficient but also likely obscured its other important features, which would otherwise have helped to better understand and control heterogeneous nucleation on spherical substrates. This work presents a rigorous thermodynamic formulation of Fletcher's model using a novel analytical approach and discusses the new perspectives derived. In particular, it is shown that the use of an intermediate variable, a selected geometrical angle or pseudocontact angle between the embryo and the spherical substrate, revealed extraordinary similarities between the first derivatives of the free energy change with respect to embryo radius for nucleation on spherical and flat substrates. Enlightened by this discovery, it was found that there exists a local maximum in the difference between the equivalent contact angles for nucleation on spherical and flat substrates, due to the existence of a local maximum in the difference between the shape factors for nucleation on spherical and flat substrate surfaces. This helps in understanding the complexity of heterogeneous nucleation phenomena in practical systems. Also, it was found that the unfavorable size effect occurs primarily when R < 5r* (R: radius of substrate; r*: critical embryo radius) and diminishes rapidly with increasing R/r* beyond R/r* = 5. This finding provides a baseline for controlling size effects in heterogeneous nucleation.
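
    For orientation, classical nucleation theory writes the heterogeneous barrier as the homogeneous one scaled by a shape factor; on a flat substrate the standard background result is

        \Delta G^{*}_{\mathrm{het}} = f(\theta)\,\Delta G^{*}_{\mathrm{hom}},
        \qquad
        f(\theta) = \frac{(2+\cos\theta)(1-\cos\theta)^{2}}{4},

    and Fletcher's model replaces $f(\theta)$ by a two-argument shape factor $f(m,x)$ with $m=\cos\theta$ and $x=R/r^{*}$, which reduces to the flat-substrate factor as $x\to\infty$.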

  16. Interval-parameter chance-constraint programming model for end-of-life vehicles management under rigorous environmental regulations.

    Science.gov (United States)

    Simic, Vladimir

    2016-06-01

    As the number of end-of-life vehicles (ELVs) is estimated to increase to 79.3 million units per year by 2020 (40 million units were generated in 2010), there is strong motivation to manage this fast-growing waste flow effectively. Intensive work on the management of ELVs is necessary in order to tackle this important environmental challenge more successfully. This paper proposes an interval-parameter chance-constraint programming model for end-of-life vehicle management under rigorous environmental regulations. The proposed model can incorporate various kinds of uncertainty information in the modeling process, and the complex relationships between different ELV management sub-systems are successfully addressed. In particular, the formulated model can help identify optimal patterns of procurement from multiple sources of ELV supply, production and inventory planning in multiple vehicle recycling factories, and allocation of sorted material flows to multiple final destinations under rigorous environmental regulations. A case study is conducted in order to demonstrate the potential and applicability of the proposed model. Various constraint-violation probability levels are examined in detail, and the influence of parameter uncertainty on model solutions is thoroughly investigated. Useful solutions for the management of ELVs are obtained under different probabilities of violating the system constraints. The formulated model is able to tackle a hard ELV management problem with pervasive uncertainty, and it has advantages in providing a basis for determining long-term ELV management plans with desired compromises between the economic efficiency of the vehicle recycling system and system-reliability considerations. The results are helpful for supporting the generation and improvement of ELV management plans.
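
    The core of a chance-constrained formulation is replacing a hard constraint by one that may be violated with probability at most p. A minimal sketch of the standard deterministic equivalent for a normally distributed random quantity (all names and numbers are illustrative, not the paper's model):

        # Capacity must cover random annual ELV arisings with probability 1 - p.
        from scipy.stats import norm

        mu, sigma = 40.0e6, 5.0e6   # mean / std of ELV arisings (illustrative)
        for p in (0.10, 0.05, 0.01):            # allowed violation probabilities
            capacity = norm.ppf(1.0 - p, loc=mu, scale=sigma)
            print(f"p = {p:.2f}: required capacity >= {capacity:.3e}")

    Tighter reliability (smaller p) pushes the required capacity further into the upper tail, which is exactly the economy-versus-reliability trade-off the model explores.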

  17. Statistically rigorous calculations do not support common input and long-term synchronization of motor-unit firings

    Science.gov (United States)

    Kline, Joshua C.

    2014-01-01

    Over the past four decades, various methods have been implemented to measure synchronization of motor-unit firings. In this work, we provide evidence that prior reports of the existence of universal common inputs to all motoneurons and the presence of long-term synchronization are misleading, because they did not use sufficiently rigorous statistical tests to detect synchronization. We developed a statistically based method (SigMax) for computing synchronization and tested it with data from 17,736 motor-unit pairs containing 1,035,225 firing instances from the first dorsal interosseous and vastus lateralis muscles—a data set one order of magnitude greater than that reported in previous studies. Only firing data, obtained from surface electromyographic signal decomposition with >95% accuracy, were used in the study. The data were not subjectively selected in any manner. Because of the size of our data set and the statistical rigor inherent to SigMax, we have confidence that the synchronization values that we calculated provide an improved estimate of physiologically driven synchronization. Compared with three other commonly used techniques, ours revealed three types of discrepancies that result from failing to use sufficient statistical tests necessary to detect synchronization. 1) On average, the z-score method falsely detected synchronization at 16 separate latencies in each motor-unit pair. 2) The cumulative sum method missed one out of every four synchronization identifications found by SigMax. 3) The common input assumption method identified synchronization from 100% of motor-unit pairs studied. SigMax revealed that only 50% of motor-unit pairs actually manifested synchronization.
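
    The statistical point at issue can be illustrated with a toy surrogate test: count near-coincident firings of two spike trains and ask whether the count exceeds what jittered surrogates produce. This is only a generic illustration of significance testing for synchronization, not the SigMax algorithm.

        # Surrogate-based significance test for firing coincidences (illustrative).
        import numpy as np

        rng = np.random.default_rng(0)

        def coincidences(a, b, tol=0.005):
            return sum(np.any(np.abs(b - t) < tol) for t in a)

        a = np.sort(rng.uniform(0, 10, 120))   # firing times in seconds (toy data)
        b = np.sort(rng.uniform(0, 10, 110))

        obs = coincidences(a, b)
        null = [coincidences(a + rng.uniform(-0.05, 0.05, a.size), b)
                for _ in range(999)]           # jittered surrogates
        p = (1 + sum(n >= obs for n in null)) / 1000.0
        print(f"observed = {obs}, p = {p:.3f}")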

  18. A new look at the statistical assessment of approximate and rigorous methods for the estimation of stabilized formation temperatures in geothermal and petroleum wells

    International Nuclear Information System (INIS)

    Espinoza-Ojeda, O M; Santoyo, E; Andaverde, J

    2011-01-01

    Approximate and rigorous solutions of seven heat transfer models were statistically examined, for the first time, to estimate stabilized formation temperatures (SFT) of geothermal and petroleum boreholes. Constant linear and cylindrical heat source models were used to describe the heat flow (either conductive or conductive/convective) involved during borehole drilling. A comprehensive statistical assessment of the major error sources associated with the use of these models was carried out. The mathematical methods (based on approximate and rigorous solutions of heat transfer models) were thoroughly examined by using four statistical analyses: (i) the use of linear and quadratic regression models to infer the SFT; (ii) the application of statistical tests of linearity to evaluate the actual relationship between bottom-hole temperatures and time function data for each selected method; (iii) the comparative analysis of SFT estimates between the approximate and rigorous predictions of each analytical method, using a β ratio parameter to evaluate the similarity of both solutions; and (iv) the evaluation of accuracy of each method using statistical tests of significance and deviation percentages between 'true' formation temperatures and SFT estimates (predicted from approximate and rigorous solutions). The present study also enabled us to determine the sensitivity parameters that should be considered for a reliable calculation of SFT, as well as to define the main physical and mathematical constraints under which the approximate and rigorous methods could provide consistent SFT estimates.
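
    One classical member of this family regresses bottom-hole temperature on a time function and reads the SFT off the intercept. A minimal sketch of a Horner-type linear fit (illustrative numbers; the paper assesses seven such models with both approximate and rigorous solutions):

        # Horner-type extrapolation: BHT = SFT + m * ln((tc + dt) / dt).
        import numpy as np

        tc = 5.0                                      # circulation time, h
        dt = np.array([6.0, 12.0, 18.0, 24.0])        # shut-in times, h
        bht = np.array([95.0, 105.0, 110.0, 113.0])   # measured temperatures, deg C

        x = np.log((tc + dt) / dt)                    # Horner time function
        m, sft = np.polyfit(x, bht, 1)                # slope, intercept
        print(f"SFT estimate (x -> 0): {sft:.1f} deg C")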

  19. The influence of postmortem electrical stimulation on rigor mortis development, calpastatin activity, and tenderness in broiler and duck pectoralis.

    Science.gov (United States)

    Alvarado, C Z; Sams, A R

    2000-09-01

    This study was conducted to evaluate the effects of electrical stimulation (ES) on rigor mortis development, calpastatin activity, and tenderness in anatomically similar avian muscles composed primarily of either red or white muscle fibers. A total of 72 broilers and 72 White Pekin ducks were either treated with postmortem (PM) ES (450 mA) at the neck in a 1% NaCl solution for 2 s on and 1 s off for a total of 15 s or were used as nonstimulated controls. Both pectoralis muscles were harvested from the carcasses after 0.25, 1.25, and 24 h PM and analyzed for pH, inosine:adenosine ratio (R-value), sarcomere length, gravimetric fragmentation index, calpastatin activity, shear value, and cook loss. All data were analyzed within species for the effects of ES. Electrically stimulated ducks had a lower muscle pH at 0.25 and 1.25 h PM and higher R-values at 0.25 h PM compared with controls. Electrically stimulated broilers had a lower muscle pH at 1.25 h and higher R-values at 0.25 and 1.25 h PM compared with controls. Muscles of electrically stimulated broilers exhibited increased myofibrillar fragmentation at 0.25 and 1.25 h PM, whereas there was no such difference over PM time in the duck muscle. Electrical stimulation did not affect calpastatin activity in either broilers or ducks; however, the calpastatin activity of the broilers did decrease over the aging time period, whereas that of the ducks did not. Electrical stimulation decreased shear values in broilers at 1.25 h PM compared with controls; however, there was no difference in shear values of duck muscle due to ES at any sampling time. Cook loss was lower for electrically stimulated broilers at 0.25 and 1.25 h PM compared with the controls, but had no effect in the ducks. These results suggest that the red fibers of the duck pectoralis have less potential for rigor mortis acceleration and tenderization due to ES than do the white fibers of the broiler pectoralis.

  20. Implementation of rigorous renormalization group method for ground space and low-energy states of local Hamiltonians

    Science.gov (United States)

    Roberts, Brenden; Vidick, Thomas; Motrunich, Olexei I.

    2017-12-01

    The success of polynomial-time tensor network methods for computing ground states of certain quantum local Hamiltonians has recently been given a sound theoretical basis by Arad et al. [Commun. Math. Phys. 356, 65 (2017), 10.1007/s00220-017-2973-z]. The convergence proof, however, relies on "rigorous renormalization group" (RRG) techniques which differ fundamentally from existing algorithms. We introduce a practical adaptation of the RRG procedure which, while no longer theoretically guaranteed to converge, finds matrix product state ansatz approximations to the ground spaces and low-lying excited spectra of local Hamiltonians in realistic situations. In contrast to other schemes, RRG does not utilize variational methods on tensor networks. Rather, it operates on subsets of the system Hilbert space by constructing approximations to the global ground space in a treelike manner. We evaluate the algorithm numerically, finding similar performance to the density matrix renormalization group (DMRG) in the case of a gapped nondegenerate Hamiltonian. Even in challenging situations of criticality, large ground-state degeneracy, or long-range entanglement, RRG remains able to identify candidate states having large overlap with ground and low-energy eigenstates, outperforming DMRG in some cases.

  1. From everyday communicative figurations to rigorous audience news repertoires: A mixed method approach to cross-media news consumption

    Directory of Open Access Journals (Sweden)

    Christian Kobbernagel

    2016-06-01

    In the last couple of decades there has been an unprecedented explosion of news media platforms and formats, as a succession of digital and social media have joined the ranks of legacy media. We live in a 'hybrid media system' (Chadwick, 2013), in which people build their cross-media news repertoires from the ensemble of old and new media available. This article presents an innovative mixed-method approach, with considerable explanatory power, to the exploration of patterns of news media consumption. The approach tailors Q-methodology in the direction of a qualitative study of news consumption, in which a card-sorting exercise serves to translate the participants' news media preferences into a form that enables the researcher to undertake a rigorous factor-analytical construction of their news consumption repertoires. This interpretive, factor-analytical procedure, which results in the construction of six audience news repertoires in Denmark, also preserves the qualitative thickness of the participants' verbal accounts of the communicative figurations of their day-in-the-life with the news media.
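
    In Q-methodology the participants, not the items, are the variables: their sorts are inter-correlated and then factor-analyzed. A minimal sketch of that by-person step (random stand-in data; the study's actual extraction and rotation choices are not reproduced here):

        # By-person factor analysis in the spirit of Q-methodology.
        import numpy as np

        rng = np.random.default_rng(1)
        sorts = rng.integers(-4, 5, size=(20, 36))    # 20 participants x 36 news cards

        R = np.corrcoef(sorts)                        # person-by-person correlations
        eigval, eigvec = np.linalg.eigh(R)
        idx = np.argsort(eigval)[::-1][:6]            # retain six factors, as in the study
        loadings = eigvec[:, idx] * np.sqrt(eigval[idx])
        print(loadings.shape)                         # (participants, factors)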

  2. Finite temperature Casimir energy in closed rectangular cavities: a rigorous derivation based on a zeta function technique

    International Nuclear Information System (INIS)

    Lim, S C; Teo, L P

    2007-01-01

    We rigorously derive explicit formulae for the Casimir free energy at finite temperature for a massless scalar field and the electromagnetic field confined in a closed rectangular cavity with different boundary conditions, by a zeta regularization method. We study both the low and high temperature expansions of the free energy. In each case, we write the free energy as a sum of a polynomial in temperature plus exponentially decaying terms, and we show that the free energy is always a decreasing function of temperature. In the cases of a massless scalar field with the Dirichlet boundary condition and of the electromagnetic field, the zero temperature Casimir free energy may be positive; in each of these cases, there is a unique transition temperature (as a function of the side lengths of the cavity) where the Casimir energy changes from positive to negative. When the space dimension is equal to two or three, we show graphically the dependence of this transition temperature on the side lengths of the cavity. Finally, we show that the results for a non-closed rectangular cavity can be obtained by letting the size of some directions of a closed cavity go to infinity, and we find that these results agree with the usual integration prescription adopted by other authors.
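
    Schematically, zeta regularization assigns the divergent zero-point sum a finite value through analytic continuation; in one common convention (a generic statement of the prescription, not the paper's full temperature-dependent formulae),

        E_{0} \;=\; \frac{\hbar}{2}\sum_{k}\omega_{k}
        \;\longrightarrow\;
        \frac{\hbar}{2}\,\zeta_{\omega}(-1),
        \qquad
        \zeta_{\omega}(s)=\sum_{k}\omega_{k}^{-s},

    where $\zeta_{\omega}(s)$ converges for sufficiently large $\operatorname{Re} s$ and is continued analytically to $s=-1$.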

  3. Statistical mechanics of fluids under internal constraints: Rigorous results for the one-dimensional hard rod fluid

    International Nuclear Information System (INIS)

    Corti, D.S.; Debenedetti, P.G.

    1998-01-01

    The rigorous statistical mechanics of metastability requires the imposition of internal constraints that prevent access to regions of phase space corresponding to inhomogeneous states. We derive exactly the Helmholtz energy and equation of state of the one-dimensional hard rod fluid under the influence of an internal constraint that places an upper bound on the distance between nearest-neighbor rods. This type of constraint is relevant to the suppression of boiling in a superheated liquid. We determine the effects of this constraint upon the thermophysical properties and internal structure of the hard rod fluid. By adding an infinitely weak and infinitely long-ranged attractive potential to the hard core, the fluid exhibits a first-order vapor-liquid transition. We determine exactly the equation of state of the one-dimensional superheated liquid and show that it exhibits metastable phase equilibrium. We also derive statistical mechanical relations for the equation of state of a fluid under the action of arbitrary constraints, and show the connection between the statistical mechanics of constrained and unconstrained ensembles.
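
    For reference, the unconstrained one-dimensional hard rod (Tonks) fluid has the exact equation of state

        \frac{P}{k_{B}T} \;=\; \frac{\rho}{1-\rho\sigma},

    where $\rho = N/L$ is the number density and $\sigma$ the rod length; the constrained fluid studied here modifies this result through the upper bound on nearest-neighbor separations.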

  4. Rigorous bounds on survival times in circular accelerators and efficient computation of fringe-field transfer maps

    International Nuclear Information System (INIS)

    Hoffstaetter, G.H.

    1994-12-01

    Analyzing the stability of particle motion in storage rings contributes to the general field of stability analysis in weakly nonlinear motion. A method which we call pseudo-invariant estimation (PIE) is used to compute lower bounds on the survival time in circular accelerators. The pseudo-invariants needed for this approach are computed via nonlinear perturbative normal form theory, and the required global maxima of the highly complicated multivariate functions could only be rigorously bounded with an extension of interval arithmetic. The bounds on the survival times are large enough to be relevant; the same is true for the lower bounds on dynamic apertures, which can also be computed. The PIE method can lead to novel design criteria with the objective of maximizing the survival time. A major effort in the direction of rigorous predictions only makes sense if accurate models of accelerators are available. Fringe fields often have a significant influence on optical properties, but the computation of fringe-field maps by DA-based integration is slower by several orders of magnitude than DA evaluation of the propagator for main-field maps. A novel computation of fringe-field effects called symplectic scaling (SYSCA) is introduced. It exploits the advantages of Lie transformations, generating functions, and scaling properties and is extremely accurate. The computation of fringe-field maps is typically made nearly two orders of magnitude faster. (orig.)
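
    Interval arithmetic obtains rigorous enclosures by propagating lower and upper bounds through every operation. A minimal sketch of the idea (a toy class; rigorous implementations additionally control floating-point rounding, which is omitted here):

        # Toy interval arithmetic: enclosures for sums and products.
        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
            def __add__(self, o):
                return Interval(self.lo + o.lo, self.hi + o.hi)
            def __mul__(self, o):
                p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
                return Interval(min(p), max(p))
            def __repr__(self):
                return f"[{self.lo}, {self.hi}]"

        x = Interval(-0.1, 0.1)
        y = Interval(-0.2, 0.2)
        print(x * x + x * y)   # rigorous enclosure of x^2 + x*y over the box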

  5. Development of interface between MCNP-FISPACT-MCNP (IPR-MFM) based on rigorous two step method

    International Nuclear Information System (INIS)

    Shaw, A.K.; Swami, H.L.; Danani, C.

    2015-01-01

    In this work we present the development of an interface tool between MCNP, FISPACT and MCNP (MFM), based on the Rigorous Two Step method, for shutdown dose rate (SDDR) calculations. The MFM links the MCNP radiation transport code and the FISPACT inventory code through a coupling scheme with three steps. In the first step, it extracts the neutron spectrum and total flux from the MCNP output file for use as input parameters for FISPACT. In the second step, it prepares the FISPACT input files using the irradiation history, neutron flux and neutron spectrum, and then executes them. The third step extracts the decay gammas from the FISPACT output file, prepares the MCNP input file for decay gamma transport, executes it, and estimates the SDDR. The MFM methodology and flow scheme are detailed here. The programming language PYTHON was chosen for the development of the coupling scheme, and a complete MCNP-FISPACT-MCNP loop has been developed to handle simplified geometrical problems. For validation of the MFM interface, a manual cross-check was performed, showing good agreement. The MFM interface has also been validated against the existing MCNP-D1S method for a simple geometry with a 14 MeV cylindrical neutron source. (author)
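
    A skeleton of such a three-step loop might look as follows in Python; the executable names, file names and stub parsers/writers are hypothetical placeholders, not the actual IPR-MFM code.

        # Hypothetical skeleton of a rigorous-two-step coupling loop.
        import subprocess

        def parse_flux_spectrum(path):                       # stub: read MCNP tallies
            raise NotImplementedError
        def write_fispact_input(path, flux, spectrum, hist): # stub: build FISPACT input
            raise NotImplementedError
        def parse_decay_gammas(path):                        # stub: read FISPACT output
            raise NotImplementedError
        def write_gamma_source(path, gammas):                # stub: build gamma-source input
            raise NotImplementedError

        def run_rigorous_two_step(history):
            subprocess.run(["mcnp6", "i=neutron.i"], check=True)    # step 1: neutron transport
            flux, spectrum = parse_flux_spectrum("mctal")
            write_fispact_input("fisp.i", flux, spectrum, history)  # step 2: activation
            subprocess.run(["fispact", "fisp"], check=True)
            gammas = parse_decay_gammas("fisp.out")
            write_gamma_source("gamma.i", gammas)                   # step 3: decay-gamma transport
            subprocess.run(["mcnp6", "i=gamma.i"], check=True)      # tallies give the SDDR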

  6. Application of Asymptotic and Rigorous Techniques for the Characterization of Interferences Caused by a Wind Turbine in Its Neighborhood

    Directory of Open Access Journals (Sweden)

    Maria Jesús Algar

    2013-01-01

    This paper presents a complete assessment of the interference caused in nearby radio systems by wind turbines. Three different parameters have been considered: the scattered field of a wind turbine, its radar cross-section (RCS), and the Doppler shift generated by the rotating movement of the blades. These predictions are very useful for studying the influence of wind farms on radio systems. To achieve this, both high-frequency techniques, such as the Geometrical Theory of Diffraction/Uniform Theory of Diffraction (GTD/UTD) and Physical Optics (PO), and rigorous techniques, such as the Method of Moments (MoM), have been used. In the analysis of the scattered field, both conductor and dielectric models of the wind turbine have been analyzed, so that realistic results can be obtained. For all cases under analysis, the wind turbine has been modeled with NURBS (Non-Uniform Rational B-Spline) surfaces, since they allow the real shape of the object to be accurately replicated with very little information.

  7. Cost evaluation of cellulase enzyme for industrial-scale cellulosic ethanol production based on rigorous Aspen Plus modeling.

    Science.gov (United States)

    Liu, Gang; Zhang, Jian; Bao, Jie

    2016-01-01

    Cost reduction in cellulase enzyme usage has been the central effort in the commercialization of fuel ethanol production from lignocellulosic biomass. Establishing an accurate method for evaluating cellulase enzyme cost is therefore crucially important to support the healthy development of the future biorefinery industry. Current cellulase cost evaluation methods are complicated, and various controversial or even conflicting results have been presented. To give a reliable evaluation of this important topic, a rigorous analysis based on Aspen Plus flowsheet simulation of a commercial-scale ethanol plant is proposed in this study. The minimum ethanol selling price (MESP) was used as the indicator of the impact of varying enzyme supply modes, enzyme prices, process parameters, and enzyme loading on the enzyme cost. The results reveal that the enzyme cost pushes the cellulosic ethanol price below the minimum profit point when the enzyme is purchased from the current industrial enzyme market. Innovative modes of cellulase production, such as on-site enzyme production, should be explored and tested at industrial scale to yield an economically sound enzyme supply for future cellulosic ethanol production.
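
    The arithmetic behind such an indicator can be illustrated with a toy per-gallon enzyme-cost contribution; all figures below are illustrative placeholders, not the paper's Aspen Plus results.

        # Back-of-envelope enzyme-cost contribution to the ethanol selling price.
        enzyme_loading = 0.015    # kg enzyme protein per kg cellulose (illustrative)
        cellulose_per_gal = 5.0   # kg cellulose consumed per gallon ethanol (illustrative)

        for price in (2.0, 5.0, 10.0):            # USD per kg enzyme protein
            cost = price * enzyme_loading * cellulose_per_gal
            print(f"enzyme at ${price:.0f}/kg -> ${cost:.2f}/gal added to MESP")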

  8. Semi-physical Simulation of the Airborne InSAR based on Rigorous Geometric Model and Real Navigation Data

    Science.gov (United States)

    Changyong, Dou; Huadong, Guo; Chunming, Han; yuquan, Liu; Xijuan, Yue; Yinghui, Zhao

    2014-03-01

    Raw signal simulation is a useful tool for system design, mission planning, processing-algorithm testing, and inversion-algorithm design for Synthetic Aperture Radar (SAR). Because of the wide and rapid variation of an aircraft's trajectory and attitude, and the low accuracy of the data recorded by the Position and Orientation System (POS), it is difficult to quantitatively study the sensitivity of the key parameters of an airborne interferometric SAR (InSAR) system, such as baseline length and inclination, absolute phase, and antenna orientation, which poses challenges for its applications. Furthermore, the imprecise estimation of the installation offset between the Global Positioning System (GPS), the Inertial Measurement Unit (IMU), and the InSAR antennas compounds the issue. This paper proposes an airborne InSAR simulation based on a rigorous geometric model and real navigation data, providing a way to study the key parameters quantitatively and to evaluate their effect on airborne InSAR applications such as photogrammetric mapping, high-resolution Digital Elevation Model (DEM) generation, and surface-deformation measurement by differential InSAR. The simulation can also serve as a reference for the optimal design of InSAR systems and for the improvement of InSAR data processing technologies such as motion compensation, imaging, image co-registration, and application parameter retrieval.

  10. A rigorous mechanistic model for predicting gas hydrate formation kinetics: The case of CO2 recovery and sequestration

    International Nuclear Information System (INIS)

    ZareNezhad, Bahman; Mottahedin, Mona

    2012-01-01

    Highlights: A mechanistic model for predicting gas hydrate formation kinetics is presented. A secondary nucleation rate model is proposed for the first time. Crystal–crystal and crystal–impeller collisions are distinguished. Simultaneous determination of nucleation and growth kinetics is established. The model is important for the design of gas-hydrate-based energy storage and CO2 recovery systems. Abstract: A rigorous mechanistic model for predicting gas hydrate formation crystallization kinetics is presented, and the special case of CO2 hydrate formation, relevant to CO2 recovery and sequestration processes, is investigated using the proposed model. A physical model for prediction of the secondary nucleation rate is proposed for the first time, and the formation rates of secondary nuclei by crystal–crystal and crystal–impeller collisions are formulated. The objective functions for simultaneous determination of nucleation and growth kinetics are presented, together with a theoretical framework for predicting the dynamic behavior of gas hydrate formation. Predicted time variations of the CO2 content and of the total number and surface area of produced hydrate crystals are in good agreement with the available experimental data. The proposed approach can find considerable application in the design of gas hydrate converters for energy storage and CO2 recovery processes.

  11. Prediction of protein subcellular locations by GO-FunD-PseAA predictor.

    Science.gov (United States)

    Chou, Kuo-Chen; Cai, Yu-Dong

    2004-08-06

    The localization of a protein in a cell is closely correlated with its biological function. With the explosion of protein sequences entering the data banks, it is highly desirable to develop an automated method that can quickly identify their subcellular location. This would expedite the annotation process, providing timely and useful information for both basic research and industrial application. In view of this, a powerful predictor has been developed by hybridizing the gene ontology approach [Nat. Genet. 25 (2000) 25], the functional domain composition approach [J. Biol. Chem. 277 (2002) 45765], and the pseudo-amino acid composition approach [Proteins Struct. Funct. Genet. 43 (2001) 246; Erratum: ibid. 44 (2001) 60]. As a showcase, the recently constructed dataset [Bioinformatics 19 (2003) 1656] was used for demonstration. The dataset contains 7589 proteins classified into 12 subcellular locations: chloroplast, cytoplasmic, cytoskeleton, endoplasmic reticulum, extracellular, Golgi apparatus, lysosomal, mitochondrial, nuclear, peroxisomal, plasma membrane, and vacuolar. The overall success rate of prediction obtained by jackknife cross-validation was 92%. This is so far the highest success rate achieved on this dataset by following an objective and rigorous cross-validation procedure.
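
    Jackknife (leave-one-out) cross-validation holds out each protein in turn, trains on the rest, and predicts the held-out label. A minimal sketch with a nearest-neighbour stand-in model and random features (the actual GO-FunD-PseAA predictor is far more elaborate):

        # Leave-one-out (jackknife) cross-validation with a 1-NN stand-in model.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 20))      # e.g. pseudo-amino-acid features (toy)
        y = rng.integers(0, 12, size=200)   # 12 subcellular locations (toy labels)

        hits = 0
        for i in range(len(y)):
            d = np.linalg.norm(X - X[i], axis=1)
            d[i] = np.inf                   # exclude the held-out sample
            hits += y[d.argmin()] == y[i]   # predict from the remaining samples
        print(f"jackknife success rate: {hits / len(y):.2%}")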

  12. Caracterização do processo de rigor mortis em músculos de eqüinos e maciez da carne [Characterization of the rigor mortis process in horse muscles and meat tenderness]

    Directory of Open Access Journals (Sweden)

    Tatiana Pacheco Rodrigues

    2004-08-01

    This work studied 12 horses of different ages slaughtered in a slaughterhouse (SIF 1803) in Araguari, Minas Gerais State, Brazil, and evaluated temperature, pH and sarcomere length at different times after slaughter (1 h, 5 h, 8 h, 10 h, 12 h, 15 h and 24 h), as well as the shear force (meat tenderness) of the Longissimus dorsi and Semitendinosus muscles, aiming at characterizing the development of rigor mortis during industrial processing. The chill-room temperature varied from 10.2°C to 4.0°C; the mean initial carcass temperature was 35.32°C and the final one 4.15°C. The mean initial pH of the Longissimus dorsi was 6.49 and the final one 5.63; the mean initial pH of the Semitendinosus was 6.44 and the final one 5.70. The smallest sarcomere length in both muscles was observed at 15 hours postmortem, 1.44 µm and 1.41 µm, respectively. The meat from adult horses was tougher than that from young ones (p < 0.05), and the Semitendinosus muscle was tougher than the Longissimus dorsi muscle.

  13. Temperature simulations in hyperthermia treatment planning of the head and neck region. Rigorous optimization of tissue properties

    International Nuclear Information System (INIS)

    Verhaart, Rene F.; Rijnen, Zef; Verduijn, Gerda M.; Paulides, Margarethus M.; Fortunati, Valerio; Walsum, Theo van; Veenland, Jifke F.

    2014-01-01

    Hyperthermia treatment planning (HTP) is used in the head and neck region (H&N) for pretreatment optimization, decision making, and real-time HTP-guided adaptive application of hyperthermia. In current clinical practice, HTP is based on power-absorption predictions, but thermal dose-effect relationships advocate its extension to temperature predictions. Exploitation of temperature simulations requires region- and temperature-specific thermal tissue properties, due to the strong thermoregulatory response of H&N tissues. The purpose of our work was to develop a technique for patient-group-specific optimization of thermal tissue properties based on invasively measured temperatures, and to evaluate the accuracy achievable. Data from 17 treated patients were used to optimize the perfusion and thermal conductivity values for the Pennes bioheat-equation-based thermal model. A leave-one-out approach was applied to accurately assess the difference between measured and simulated temperature (ΔT). The improvement in ΔT for optimized thermal property values was assessed by comparison with the ΔT for values from the literature, i.e., baseline and under thermal stress. The optimized perfusion and conductivity values of tumor, muscle, and fat led to an improvement in simulation accuracy (ΔT: 2.1 ± 1.2 °C) compared with the accuracy for baseline (ΔT: 12.7 ± 11.1 °C) or thermal-stress (ΔT: 4.4 ± 3.5 °C) property values. The presented technique leads to patient-group-specific property values that effectively improve simulation accuracy for the challenging H&N region, thereby making simulations an elegant addition to invasive measurements. The rigorous leave-one-out assessment indicates that improvements in accuracy are still required before clinical HTP can rely on temperature predictions alone. (orig.)
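
    The validation scheme itself is simple to state: fit the property values on all patients but one, then evaluate ΔT on the one left out. A minimal sketch (the scalar "model" below is a placeholder for the full thermal simulation, and the measurements are invented):

        # Leave-one-out assessment of a fitted quantity (illustrative stand-in).
        import numpy as np

        measured = np.array([41.2, 40.8, 41.5, 40.9, 41.1, 41.8, 40.6])  # deg C, toy data

        errors = []
        for i in range(len(measured)):
            train = np.delete(measured, i)
            predicted = train.mean()   # stand-in for a simulation with fitted properties
            errors.append(abs(measured[i] - predicted))
        print(f"mean |dT| = {np.mean(errors):.2f} deg C")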

  14. Rigorous Asymptotics for the Lamé and Mathieu Functions and their Respective Eigenvalues with a Large Parameter

    Science.gov (United States)

    Ogilvie, Karen; Olde Daalhuis, Adri B.

    2015-11-01

    By application of the theory for second-order linear differential equations with two turning points developed in [Olver F.W.J., Philos. Trans. Roy. Soc. London Ser. A 278 (1975), 137-174], uniform asymptotic approximations are obtained in the first part of this paper for the Lamé and Mathieu functions with a large real parameter. These approximations are expressed in terms of parabolic cylinder functions, and are uniformly valid in their respective real open intervals. In all cases explicit bounds are supplied for the error terms associated with the approximations. Approximations are also obtained for the large-order behaviour of the respective eigenvalues. We restrict ourselves to a two-term uniform approximation. Theoretically, more terms in these approximations could be computed, but the coefficients would be very complicated. In the second part of this paper we use a simplified method to obtain uniform asymptotic expansions for these functions. The coefficients are just polynomials and satisfy simple recurrence relations. The price to pay is that these asymptotic expansions hold only in a shrinking interval as their respective parameters become large; this interval, however, encapsulates all the interesting oscillatory behaviour of the functions. This simplified method also gives many terms in asymptotic expansions for these eigenvalues, derived simultaneously with the coefficients in the function expansions. We provide rigorous realistic error bounds for the function expansions when truncated, and order estimates for the error when the eigenvalue expansions are truncated. With this paper we confirm that many of the formal results in the literature are correct.

  15. Measurement and characterization of slippage and slip-law using a rigorous analysis in dynamics of oscillating rheometer: Newtonian fluid

    Science.gov (United States)

    Azese, Martin Ndi

    2018-02-01

    This article presents a rigorous calculation involving velocity slip of a Newtonian fluid, in which we analyze and solve the unsteady Navier-Stokes equation with emphasis on its rheological implications. The goal is to model a simple yet effective non-invasive way of quantifying and characterizing slippage. This contrasts with previous techniques, whose inherent limitations stem from injecting foreign objects that usually alter the flow. The problem is built on a Couette rheological flow system in which μ-Newton forces and μ-stresses are captured and processed to obtain wall slip. Our model leads to a linear partial differential equation, and enforcing linear Navier-slip boundary conditions (BC) yields inhomogeneous and unsteady "Robin-type" BC. A dimensional analysis reveals the salient dimensionless parameters, the Roshko, Strouhal, and Reynolds numbers, while highlighting slip numbers from the BC. We also solve the slip-free case to corroborate and validate our results. Several graphs are generated showing slip effects, in particular how the slip numbers, a key input, differentiate themselves in the outputs. We also confirm this graphically by presenting the flow profile across the channel width, and the velocity and stress at both walls. A perturbation scheme is introduced to calculate the long-time behavior when the system runs for a long time. More importantly, we justify the existence of a reverse mechanism, in which an inverse transformation such as the Fourier transform uses the output data to retrieve the slip numbers and the slip law, thus quantifying and characterizing slip. We thereby substantiate both our analysis and our claim of measurement and characterization, and argue that our proposition is realizable.

  16. The geological record of life 3500 Ma ago: Coping with the rigors of a young earth during late accretion

    Science.gov (United States)

    Lowe, Donald R.

    1989-01-01

    Thin cherty sedimentary layers within the volcanic portions of the 3,500 to 3,300 Ma-old Onverwacht and Fig Tree Groups, Barberton Greenstone belt, South Africa, and Warrawoona Group, eastern Pilbara Block, Western Australia, contain an abundant record of early Archean life. Five principal types of organic and probably biogenic remains and/or structures can be identified: stromatolites, stromatolite detritus, carbonaceous laminite or flat stromatolite, carbonaceous detrital particles, and microfossils. Early Archean stromatolites were reported from both the Barberton and eastern Pilbara greenstone belts. Systematic studies are lacking, but two main morphological types of stromatolites appear to be represented by these occurrences. The morphology of the stromatolites is described. Preserved early Archean stromatolites and carbonaceous matter appear to reflect communities of photosynthetic cyanobacteria inhabiting shallow, probably marine environments developed over the surfaces of low-relief, rapidly subsiding, simatic volcanic platforms. The overall environmental and tectonic conditions were those that probably prevailed at Earth's surface since the simatic crust and oceans formed sometime before 3,800 Ma. Recent studies also suggest that these early Archean sequences contain layers of debris formed by large-body impacts on early Earth. If so, then these early bacterial communities had developed strategies for coping with the disruptive effects of possibly globe-encircling high-temperature impact vapor clouds, dust blankets, and impact-generated tsunamis. It is probable that these early Archean biogenic materials represent organic communities that evolved long before the beginning of the preserved geological record and were well adapted to the rigors of life on a young, volcanically active Earth during late bombardment. These conditions may have had parallels on Mars during its early evolution.

  17. Comparison of rigorous modelling of different structure profiles on photomasks for quantitative linewidth measurements by means of UV- or DUV-optical microscopy

    Science.gov (United States)

    Ehret, Gerd; Bodermann, Bernd; Woehler, Martin

    2007-06-01

    Optical microscopy is an important instrument for the dimensional characterisation or calibration of micro- and nanostructures, e.g. chrome structures on photomasks. In comparison to scanning electron microscopy (possible contamination of the sample) and atomic force microscopy (slow, risk of damage), optical microscopy is a fast and non-destructive metrology method. The precise quantitative determination of the linewidth from the microscope image is, however, only possible with knowledge of the geometry of the structures and its consideration in the optical modelling. We compared two different rigorous modelling approaches, Rigorous Coupled Wave Analysis (RCWA) and the Finite Element Method (FEM), for structures with different edge angles, linewidths, line-to-space ratios and polarisations. The RCWA method can accommodate inclined edge profiles only by a staircase approximation, leading to increased modelling errors. Even today's sophisticated rigorous methods still show problems with TM polarisation; therefore both rigorous methods are compared in terms of their convergence for TE and TM polarisation. Beyond that, the influence of typical illumination wavelengths (365 nm, 248 nm and 193 nm) on the microscope images and their contribution to the measurement uncertainty budget is discussed.

  18. Rigorous derivation of the perimeter generating functions for the mean-squared radius of gyration of rectangular, Ferrers and pyramid polygons

    International Nuclear Information System (INIS)

    Lin, Keh Ying

    2006-01-01

    We have rigorously derived the perimeter generating functions for the mean-squared radius of gyration of rectangular, Ferrers and pyramid polygons. These functions were recently found by Jensen, whose nonrigorous results are based on the analysis of long series expansions. (comment)

  19. Which Interventions Have the Greatest Effect on Student Learning in Sub-Saharan Africa? "A Meta-Analysis of Rigorous Impact Evaluations"

    Science.gov (United States)

    Conn, Katharine

    2014-01-01

    In the last three decades, there has been a large increase in the number of rigorous experimental and quasi-experimental evaluations of education programs in developing countries. These impact evaluations have taken place all over the globe, including a large number in Sub-Saharan Africa (SSA). The fact that the developing world is socially and…

  20. Rigor index, fillet yield and proximate composition of cultured striped catfish (Pangasianodon hypophthalmus for its suitability in processing industries in Bangladesh

    Directory of Open Access Journals (Sweden)

    Salma Noor-E Islami

    2014-12-01

    The rigor index of market-size striped catfish (Pangasianodon hypophthalmus), locally called Thai-Pangas, was determined to assess fillet yield for the production of value-added products. In whole fish, rigor started within 1 hr after death under both iced and room-temperature conditions, while the rigor index reached a maximum of 72.23% within 8 hr at room temperature and 85.5% within 5 hr in iced condition; rigor was fully resolved after 22 hr under both storage conditions. Post-mortem muscle pH decreased to 6.8 after 2 hr and 6.2 after 8 hr, then increased sharply to 6.9 after 9 hr. There was a positive correlation between rigor progress and pH shift in fish fillets. Hand filleting was done post-rigor, and the fillet-yield experiment showed that 50.4±2.1% fillet, 8.0±0.2% viscera, 8.0±1.3% skin and 32.0±3.2% carcass could be obtained from Thai-Pangas. Proximate composition analysis of four regions of Thai-Pangas, viz. head region, middle region, tail region and viscera, revealed moisture 78.36%, 81.14%, 81.45% and 57.33%; protein 15.83%, 15.97%, 16.14% and 17.20%; lipid 4.61%, 1.82%, 1.32% and 24.31%; and ash 1.09%, 0.96%, 0.95% and 0.86%, respectively, indicating the suitability of Thai-Pangas for the production of value-added products such as fish fillets.
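
    For background (the abstract does not restate it), a common definition of the rigor index in fish studies measures the drooping of the tail when the anterior half of the body is held horizontal:

        \mathrm{RI}(\%) \;=\; \frac{L_{0}-L_{t}}{L_{0}}\times 100,

    where $L_{0}$ is the vertical droop of the tail immediately after death and $L_{t}$ the droop at time $t$; full rigor corresponds to values approaching 100%.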

  1. rigor mortis encodes a novel nuclear receptor interacting protein required for ecdysone signaling during Drosophila larval development.

    Science.gov (United States)

    Gates, Julie; Lam, Geanette; Ortiz, José A; Losson, Régine; Thummel, Carl S

    2004-01-01

    Pulses of the steroid hormone ecdysone trigger the major developmental transitions in Drosophila, including molting and puparium formation. The ecdysone signal is transduced by the EcR/USP nuclear receptor heterodimer that binds to specific response elements in the genome and directly regulates target gene transcription. We describe a novel nuclear receptor interacting protein encoded by rigor mortis (rig) that is required for ecdysone responses during larval development. rig mutants display defects in molting, delayed larval development, larval lethality, duplicated mouth parts, and defects in puparium formation--phenotypes that resemble those seen in EcR, usp, E75A and betaFTZ-F1 mutants. Although the expression of these nuclear receptor genes is essentially normal in rig mutant larvae, the ecdysone-triggered switch in E74 isoform expression is defective. rig encodes a protein with multiple WD-40 repeats and an LXXLL motif, sequences that act as specific protein-protein interaction domains. Consistent with the presence of these elements and the lethal phenotypes of rig mutants, Rig protein interacts with several Drosophila nuclear receptors in GST pull-down experiments, including EcR, USP, DHR3, SVP and betaFTZ-F1. The ligand binding domain of betaFTZ-F1 is sufficient for this interaction, which can occur in an AF-2-independent manner. Antibody stains reveal that Rig protein is present in the brain and imaginal discs of second and third instar larvae, where it is restricted to the cytoplasm. In larval salivary gland and midgut cells, however, Rig shuttles between the cytoplasm and nucleus in a spatially and temporally regulated manner, at times that correlate with the major lethal phase of rig mutants and major switches in ecdysone-regulated gene expression. Taken together, these data indicate that rig exerts essential functions during larval development through gene-specific effects on ecdysone-regulated transcription, most likely as a cofactor for one or more

  2. Cross-Validation of Indicators of Cognitive Workload

    National Research Council Canada - National Science Library

    Marshall, Sandra P; Bartels, Mike

    2005-01-01

    .... The current study replicated the human performance findings of the previous phase of AMBR and added eye tracking analyses to enhance understanding of participants' behavior and to compare NASA TLX...

  3. Dioscorea deltoidea in Nepal: Cross Validating Uses and Ethnopharmacological Relevance

    Czech Academy of Sciences Publication Activity Database

    Rokaya, Maan Bahadur; Sharma, L.

    2016-01-01

    Roč. 2, č. 2 (2016), s. 17-26 ISSN 2455-4812 R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:67179843 Keywords : food * poisoning * herbarium specimen * identification Subject RIV: EH - Ecology, Behaviour

  4. Cross Validated Temperament Scale Validities Computed Using Profile Similarity Metrics

    Science.gov (United States)

    2017-04-27

    …a respondent’s scale score is equal to the mean of the non-reversed and recoded-reversed items. Table 1 portrays the conventional scoring algorithm on
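
    The scoring rule recoverable from this fragment (reverse-keyed items are recoded, then all items averaged) is easy to make concrete. A minimal sketch, assuming a hypothetical 6-item, 5-point Likert scale; the item values and reversed positions are invented for illustration:

    ```python
    import numpy as np

    # One respondent's answers on a hypothetical 6-item, 5-point Likert scale;
    # items at positions 2 and 5 are reverse-keyed.
    responses = np.array([4, 5, 2, 3, 4, 1], dtype=float)
    reversed_items = [2, 5]

    # Recode reversed items (on a 1-5 scale, x becomes 6 - x), then average.
    responses[reversed_items] = 6 - responses[reversed_items]
    scale_score = responses.mean()
    print(scale_score)   # mean of non-reversed and recoded-reversed items
    ```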

  5. Internet Attack Traceback: Cross-Validation and Pebble-Trace

    Science.gov (United States)

    2013-02-28

    stolen-cyber-attack. [3] Hacked: Data breach costly for Ohio State, victims of compromised info, http://www.thelantern.com/campus/hacked-data-breach-costly-for-ohio-state-victims-of-compromised-info-1.1831311. [4] S. C. Lee and C. Shields, “Tracing the Source of Network Attack: A Technical

  6. Influence of stress caused by transport and slaughter method on rigor mortis of tambaqui (Colossoma macropomum) [Influência do estresse causado pelo transporte e método de abate sobre o rigor mortis do tambaqui (Colossoma macropomum)]

    Directory of Open Access Journals (Sweden)

    Joana Maia Mendes

    2015-06-01

    Full Text Available Abstract: This study evaluated the influence of pre-slaughter stress and slaughter method on the rigor mortis of tambaqui during storage in ice. Physiological responses of tambaqui to stress were studied during the pre-slaughter period, which was divided into four stages: harvest, transport, and recovery for 24 h and for 48 h. At the end of each stage, fish were sampled to characterize pre-slaughter stress through analyses of the plasma parameters glucose, lactate and ammonia, and the fish were then slaughtered by hypothermia or by asphyxia with carbon dioxide for the rigor mortis study. The physiological stress state of the fish was most acute immediately after transport, leading to a faster onset of rigor mortis: 60 minutes for tambaqui slaughtered by hypothermia and 120 minutes for tambaqui slaughtered by carbon dioxide asphyxia. In the ponds, fish slaughtered immediately after harvest showed an intermediate stress state, with no difference in time of onset of rigor mortis between slaughter methods (135 minutes). Fish that recovered from transport stress under simulated industry conditions showed a later onset of rigor mortis: 225 minutes (with 24 h of recovery) and 255 minutes (with 48 h of recovery), likewise with no difference between the slaughter methods tested. The resolution of rigor mortis was fastest in fish slaughtered after transport, at 12 days. In fish slaughtered immediately after harvest, resolution occurred at 16 days, and in fish slaughtered after recovery, at 20 days for 24 h of recovery from pre-slaughter stress and 24 days for 48 h of recovery, with no influence of slaughter method on the resolution of rigor mortis. Thus, it is desirable that tambaqui destined for industry be slaughtered after a period of stress recovery, with a view to increasing its

  7. Temperature simulations in hyperthermia treatment planning of the head and neck region. Rigorous optimization of tissue properties

    Energy Technology Data Exchange (ETDEWEB)

    Verhaart, Rene F.; Rijnen, Zef; Verduijn, Gerda M.; Paulides, Margarethus M. [Erasmus MC - Cancer Institute, Department of Radiation Oncology, Hyperthermia Unit, Rotterdam (Netherlands); Fortunati, Valerio; Walsum, Theo van; Veenland, Jifke F. [Erasmus MC, Departments of Medical Informatics and Radiology, Biomedical Imaging Group Rotterdam, Rotterdam (Netherlands)

    2014-12-15

    Hyperthermia treatment planning (HTP) is used in the head and neck region (H and N) for pretreatment optimization, decision making, and real-time HTP-guided adaptive application of hyperthermia. In current clinical practice, HTP is based on power-absorption predictions, but thermal dose-effect relationships advocate its extension to temperature predictions. Exploitation of temperature simulations requires region- and temperature-specific thermal tissue properties due to the strong thermoregulatory response of H and N tissues. The purpose of our work was to develop a technique for patient group-specific optimization of thermal tissue properties based on invasively measured temperatures, and to evaluate the accuracy achievable. Data from 17 treated patients were used to optimize the perfusion and thermal conductivity values for the Pennes bioheat equation-based thermal model. A leave-one-out approach was applied to accurately assess the difference between measured and simulated temperature (∇T). The improvement in ∇T for optimized thermal property values was assessed by comparison with the ∇T for values from the literature, i.e., baseline and under thermal stress. The optimized perfusion and conductivity values of tumor, muscle, and fat led to an improvement in simulation accuracy (∇T: 2.1 ± 1.2 °C) compared with the accuracy for baseline (∇T: 12.7 ± 11.1 °C) or thermal stress (∇T: 4.4 ± 3.5 °C) property values. The presented technique leads to patient group-specific temperature property values that effectively improve simulation accuracy for the challenging H and N region, thereby making simulations an elegant addition to invasive measurements. The rigorous leave-one-out assessment indicates that improvements in accuracy are required to rely only on temperature-based HTP in the clinic. (orig.)

  8. Scattering of atoms by a stationary sinusoidal hard wall: Rigorous treatment in (n+1) dimensions and comparison with the Rayleigh method

    International Nuclear Information System (INIS)

    Goodman, F.O.

    1977-01-01

    A rigorous treatment of the scattering of atoms by a stationary sinusoidal hard wall in (n+1) dimensions is presented, a previous treatment by Masel, Merrill, and Miller for n=1 being contained as a special case. Numerical comparisons are made with the GR method of Garcia, which incorporates the Rayleigh hypothesis. Advantages and disadvantages of both methods are discussed, and it is concluded that the Rayleigh GR method, if handled properly, will probably work satisfactorily in physically realistic cases

  9. Fast rigorous numerical method for the solution of the anisotropic neutron transport problem and the NITRAN system for fusion neutronics application. Pt. 1

    International Nuclear Information System (INIS)

    Takahashi, A.; Rusch, D.

    1979-07-01

    Some recent neutronics experiments for fusion reactor blankets show that the precise treatment of anisotropic secondary emissions for all types of neutron scattering is needed for neutron transport calculations. In the present work new rigorous methods, i.e. based on non-approximative microscopic neutron balance equations, are applied to treat the anisotropic collision source term in transport equations. The collision source calculation is free from approximations except for the discretization of energy, angle and space variables and includes the rigorous treatment of nonelastic collisions, as far as nuclear data are given. Two methods are presented: first the Ii-method, which relies on existing nuclear data files and then, as an ultimate goal, the I*-method, which aims at the use of future double-differential cross section data, but which is also applicable to the present single-differential data basis to allow a smooth transition to the new data type. An application of the Ii-method is given in the code system NITRAN which employs the S_N method to solve the transport equations. Both rigorous methods, the Ii- and the I*-method, are applicable to all radiation transport problems and they can be used also in the Monte-Carlo-method to solve the transport problem. (orig./RW)

  10. Relationships between storage protein composition, protein content, growing season and flour quality of bread wheat

    DEFF Research Database (Denmark)

    Faergestad, E.M.; Flaete, N.E.S.; Magnus, E.M.

    2004-01-01

    …f alleles appear similar on one-dimensional gels; two-dimensional separation of selected samples may suggest that the f components in these alleles are different proteins. Cross-validated partial least squares regression combined with empirical uncertainty estimates (jack-knifing) of the parameters...

  11. A Cross-Validation Study of the Other Customers Perceptions Scale in the Context of Sport and Fitness Centres. [Un estudio de validación cruzada sobre la escala de percepción de otros consumidores en el contexto de centros deportivos y de fitness].

    Directory of Open Access Journals (Sweden)

    Nicholas D Theodorakis

    2014-01-01

    Full Text Available This study aimed to extend the use of the Other Customers Perceptions (OCP) scale by testing its psychometric properties and its generalizability in the context of sport and fitness centres. 360 members of three fitness clubs in Greece participated in the study. They were randomly divided into two subsamples (a calibration and a validation sample). Using confirmatory factor analysis and composite reliability estimates, the construct validity of the OCP was supported. A cross-validation approach using invariance testing procedures across the two samples further supported the validity and generalizability of the OCP in sport and fitness settings. The OCP was found to be a reliable and valid scale for assessing the role of other customers in the service experience.

  12. Effects of Chilling and Partial Freezing on Rigor Mortis Changes of Bighead Carp (Aristichthys nobilis) Fillets: Cathepsin Activity, Protein Degradation and Microstructure of Myofibrils.

    Science.gov (United States)

    Lu, Han; Liu, Xiaochang; Zhang, Yuemei; Wang, Hang; Luo, Yongkang

    2015-12-01

    To investigate the effects of chilling and partial freezing on rigor mortis changes in bighead carp (Aristichthys nobilis), pH, cathepsin B, cathepsin B+L activities, SDS-PAGE of sarcoplasmic and myofibrillar proteins, texture, and changes in microstructure of fillets at 4 °C and -3 °C were determined at 0, 2, 4, 8, 12, 24, 48, and 72 h after slaughter. The results indicated that pH of fillets (6.50 to 6.80) was appropriate for cathepsin function during the rigor mortis. For fillets that were chilled and partially frozen, the cathepsin activity in lysosome increased consistently during the first 12 h, followed by a decrease from the 12 to 24 h, which paralleled an increase in activity in heavy mitochondria, myofibrils and sarcoplasm. There was no significant difference in cathepsin activity in lysosomes between fillets at 4 °C and -3 °C (P > 0.05). Partially frozen fillets had greater cathepsin activity in heavy mitochondria than chilled samples from the 48 to 72 h. In addition, partially frozen fillets showed higher cathepsin activity in sarcoplasm and lower cathepsin activity in myofibrils compared with chilled fillets. Correspondingly, we observed degradation of α-actinin (105 kDa) by cathepsin L in chilled fillets and degradation of creatine kinase (41 kDa) by cathepsin B in partially frozen fillets during the rigor mortis. The decline of hardness for both fillets might be attributed to the accumulation of cathepsin in myofibrils from the 8 to 24 h. The lower cathepsin activity in myofibrils for fillets that were partially frozen might induce a more intact cytoskeletal structure than fillets that were chilled. © 2015 Institute of Food Technologists®

  13. The neutron's Dirac-equation: Its rigorous solution at slab-like magnetic fields, non-relativistic approximation, energy spectra and statistical characteristics

    International Nuclear Information System (INIS)

    Zhang Yongde.

    1987-03-01

    In this paper, the neutron Dirac equation is presented. After decoupling it into two equations of simple spinors, the rigorous solution of this equation is obtained for slab-like uniform magnetic fields at perpendicular incidence. In the non-relativistic approximation and the first-order approximation of weak field (NRWFA), our results include all results that have been obtained in the references for this case up to now. The corresponding transformations of the neutron's spin vectors are given. The single-particle spectrum and its approximate expression are obtained. The characteristics of quantum statistics with the approximate expression of the energy spectrum are studied. (author). 15 refs

  14. Calcium-dependence of Donnan potentials in glycerinated rabbit psoas muscle in rigor, at and beyond filament overlap; a role for titin in the contractile process

    DEFF Research Database (Denmark)

    Coomber, S J; Bartels, E M; Elliott, G F

    2011-01-01

    contracts and breaks the microelectrode. Therefore the rigor state was studied. There is no reason to suppose a priori that a similar voltage switch does not occur during contraction, however. Calcium dependence is still apparent in muscles stretched beyond overlap (sarcomere length>3.8 μm) and is also seen...... in the gap filaments between the A- and I-band ends; further stretching abolishes the dependence. These experiments strongly suggest that calcium dependence is controlled initially by the titin component, and that this control is lost when titin filaments break. We suppose that that effect is mediated...

  15. Rigorous control of logarithmic corrections in four-dimensional phi4 spin systems. II. Critical behavior of susceptibility and correlation length

    International Nuclear Information System (INIS)

    Hara, T.; Tasaki, H.

    1987-01-01

    Continuing the analysis started in Part I of this work, they investigate critical phenomena in weakly coupled phi4 spin systems in four dimensions. Concerning the critical behavior of the susceptibility and the correlation length (in the high-temperature phase), the existence of logarithmic corrections to their mean-field-type behavior is rigorously shown (i.e., they prove χ(t) ∼ t^{-1} |ln t|^{1/3} and ξ(t) ∼ t^{-1/2} |ln t|^{1/6})

  16. A Rigorous Test of the Fit of the Circumplex Model to Big Five Personality Data: Theoretical and Methodological Issues and Two Large Sample Empirical Tests.

    Science.gov (United States)

    DeGeest, David Scott; Schmidt, Frank

    2015-01-01

    Our objective was to apply the rigorous test developed by Browne (1992) to determine whether the circumplex model fits Big Five personality data. This test has yet to be applied to personality data. Another objective was to determine whether blended items explained correlations among the Big Five traits. We used two working adult samples, the Eugene-Springfield Community Sample and the Professional Worker Career Experience Survey. Fit to the circumplex was tested via Browne's (1992) procedure. Circumplexes were graphed to identify items with loadings on multiple traits (blended items), and to determine whether removing these items changed five-factor model (FFM) trait intercorrelations. In both samples, the circumplex structure fit the FFM traits well. Each sample had items with dual-factor loadings (8 items in the first sample, 21 in the second). Removing blended items had little effect on construct-level intercorrelations among FFM traits. We conclude that rigorous tests show that the fit of personality data to the circumplex model is good. This finding means the circumplex model is competitive with the factor model in understanding the organization of personality traits. The circumplex structure also provides a theoretically and empirically sound rationale for evaluating intercorrelations among FFM traits. Even after eliminating blended items, FFM personality traits remained correlated.

  17. Impact of post-rigor high pressure processing on the physicochemical and microbial shelf-life of cultured red abalone (Haliotis rufescens).

    Science.gov (United States)

    Hughes, Brianna H; Perkins, L Brian; Yang, Tom C; Skonberg, Denise I

    2016-03-01

    High pressure processing (HPP) of post-rigor abalone at 300 MPa for 10 min extended the refrigerated shelf-life to four times that of unprocessed controls. Shucked abalone meats were processed at 100 or 300 MPa for 5 or 10 min, and stored at 2 °C for 35 days. Treatments were analyzed for aerobic plate count (APC), total volatile base nitrogen (TVBN), K-value, biogenic amines, color, and texture. APC did not exceed 10^6 and TVBN levels remained below 35 mg/100 g for 35 days for the 300 MPa treatments. No biogenic amines were detected in the 300 MPa treatments, but putrescine and cadaverine were detected in the control and 100 MPa treatments. Color and texture were not affected by HPP or storage time. These results indicate that post-rigor processing at 300 MPa for 10 min can significantly increase refrigerated shelf-life of abalone without affecting chemical or physical quality characteristics important to consumers. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Does the sequence of onset of rigor mortis depend on the proportion of muscle fibre types and on intra-muscular glycogen content?

    Science.gov (United States)

    Kobayashi, M; Takatori, T; Nakajima, M; Saka, K; Iwase, H; Nagao, M; Niijima, H; Matsuda, Y

    1999-01-01

    We examined the postmortem changes in the levels of ATP, glycogen and lactic acid in two masticatory muscles and three leg muscles of rats. The proportion of fibre types of the muscles was determined with NIH image software. The ATP levels in the white muscles did not decrease up to 1 h after death, and the ATP levels 1 and 2 h after death in the white muscles were higher than those in the red muscles with a single exception. The glycogen level at death and 1 h after death and the lactic acid level 1 h after death in masticatory muscles were lower than in the leg muscles. It is possible that the differences in the proportion of muscle fibre types and in glycogen level in muscles influences the postmortem change in ATP and lactic acid, which would accelerate or retard rigor mortis of the muscles.

  19. Influence of stress caused by transport and slaughter method on rigor mortis of tambaqui (Colossoma macropomum) [Influência do estresse causado pelo transporte e método de abate sobre o rigor mortis do tambaqui (Colossoma macropomum)]

    OpenAIRE

    Mendes, Joana Maia; Inoue, Luis Antonio Kioshi Aoki; Jesus, Rogério Souza

    2015-01-01

    Abstract: This study evaluated the influence of pre-slaughter stress and slaughter method on the rigor mortis of tambaqui during storage in ice. Physiological responses of tambaqui to stress were studied during the pre-slaughter period, which was divided into four stages: harvest, transport, and recovery for 24 h and for 48 h. At the end of each stage, fish were sampled to characterize pre-slaughter stress through analyses of the plasma parameters glucose, lactate and ammonia...

  20. The alterations in adenosine nucleotides and lactic acid in striated muscles of rats during Rigor mortis following death with drowning or cervical dislocation.

    Science.gov (United States)

    Pençe, Halime Hanim; Pençe, Sadrettin; Kurtul, Naciye; Yilmaz, Necat; Kocoglu, Hasan; Bakan, Ebubekir

    2003-01-01

    In this study, adenosine triphosphate (ATP), adenosine diphosphate (ADP), adenosine monophosphate (AMP) and lactic acid in the masseter, triceps, and quadriceps muscles obtained from the right and left sides of Sprague-Dawley rats following death were investigated. The samples were taken immediately and 120 minutes after death occurred. The rats were killed either by cervical dislocation or drowning. ATP concentrations in the masseter, triceps, and quadriceps muscles were lower in samples obtained 120 minutes after death than in those obtained immediately after death. ADP, AMP, and lactic acid concentrations in these muscles were higher in samples obtained 120 minutes after death than in those obtained immediately after death. A positive linear correlation was determined between ATP and ADP concentrations in the quadriceps muscles of the rats killed by cervical dislocation and in the triceps muscles of the rats killed by drowning. When rats killed by cervical dislocation and by drowning were compared, ADP, AMP, and lactic acid concentrations were lower in the former than in the latter at both times (immediately and 120 minutes after death occurred). In the case of drowning, ATP is consumed faster because of hard exercise or severe physical activity, resulting in a faster rigor mortis. Higher lactic acid levels were determined in the muscles of the rats killed by drowning than in the other group. In the control group, ATP decreased to different degrees in the three muscle types mentioned above, with the greatest decline in the masseter, followed by the quadriceps. This may be caused by the lower mass and smaller glycogen store of the masseter. No differences in ATP levels were measured among muscle types in the drowning group, possibly because of the severe activity of the triceps and quadriceps and the smaller mass of the masseter. One can conclude that the occurrence of rigor mortis is closely related to the mode of death.

  1. The 1,5-H-shift in 1-butoxy: A case study in the rigorous implementation of transition state theory for a multirotamer system

    Science.gov (United States)

    Vereecken, Luc; Peeters, Jozef

    2003-09-01

    The rigorous implementation of transition state theory (TST) for a reaction system with multiple reactant rotamers and multiple transition state conformers is discussed by way of a statistical rate analysis of the 1,5-H-shift in 1-butoxy radicals, a prototype reaction for the important class of H-shift reactions in atmospheric chemistry. Several approaches for deriving a multirotamer TST expression are treated: oscillator versus (hindered) internal rotor models; distinguishable versus indistinguishable atoms; and direct count methods versus degeneracy factors calculated by (simplified) direct count methods or from symmetry numbers and number of enantiomers, where applicable. It is shown that the various treatments are fully consistent, even if the TST expressions themselves appear different. The 1-butoxy H-shift reaction is characterized quantum chemically using B3LYP-DFT; the performance of this level of theory is compared to other methods. Rigorous application of the multirotamer TST methodology in a harmonic oscillator approximation based on these data yields a rate coefficient of k(298 K, 1 atm) = 1.4×10^5 s^-1, and an Arrhenius expression k(T, 1 atm) = 1.43×10^11 exp(-8.17 kcal mol^-1/RT) s^-1, which both closely match the experimental recommendations in the literature. The T-dependence is substantially influenced by the multirotamer treatment, as well as by the tunneling and fall-off corrections. The present results are compared to those of simplified TST calculations based solely on the properties of the lowest energy 1-butoxy rotamer.
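
    As a quick numeric check of the Arrhenius expression quoted above, the short sketch below evaluates k(T) at 298 K with the gas constant in kcal units and recovers the reported value of roughly 1.4×10^5 s^-1. The function name and layout are ours, not the authors':

    ```python
    import math

    # Quick check of the Arrhenius expression quoted in the abstract.
    R = 1.987e-3   # gas constant, kcal mol^-1 K^-1
    A = 1.43e11    # pre-exponential factor, s^-1
    Ea = 8.17      # activation energy, kcal mol^-1

    def k(T):
        """k(T) = A * exp(-Ea / (R*T)), in s^-1."""
        return A * math.exp(-Ea / (R * T))

    print(f"k(298 K) = {k(298):.2e} s^-1")  # ~1.4e5 s^-1, matching the abstract
    ```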

  2. Analysis of designed experiments by stabilised PLS Regression and jack-knifing

    DEFF Research Database (Denmark)

    Martens, Harald; Høy, M.; Westad, F.

    2001-01-01

    Pragmatical, visually oriented methods for assessing and optimising bi-linear regression models are described, and applied to PLS Regression (PLSR) analysis of multi-response data from controlled experiments. The paper outlines some ways to stabilise the PLSR method to extend its range...... the reliability of the linear and bi-linear model parameter estimates. The paper illustrates how the obtained PLSR "significance" probabilities are similar to those from conventional factorial ANOVA, but the PLSR is shown to give important additional overview plots of the main relevant structures in the multi....... An Introduction, Wiley, Chichester, UK, 2001]....
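
    To make the combination of PLS regression and jack-knifing concrete, here is a minimal sketch (synthetic data and scikit-learn's PLSRegression, not the stabilised PLSR of the paper) that refits the model with each sample left out and uses the spread of the refitted coefficients as an empirical uncertainty estimate:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 8))     # 30 samples, 8 predictors (synthetic)
    y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.5, size=30)

    n, p = X.shape
    coefs = np.empty((n, p))

    # Leave-one-out (jackknife) refits: the spread of the refitted
    # coefficients gives an empirical uncertainty for each estimate.
    for i in range(n):
        mask = np.arange(n) != i
        coefs[i] = PLSRegression(n_components=2).fit(X[mask], y[mask]).coef_.ravel()

    full = PLSRegression(n_components=2).fit(X, y).coef_.ravel()
    se = np.sqrt((n - 1) / n * ((coefs - coefs.mean(axis=0)) ** 2).sum(axis=0))
    for j in range(p):
        print(f"b[{j}] = {full[j]:+.3f} +/- {se[j]:.3f}")
    ```

    Coefficients that are small relative to their jackknife standard error would be judged unreliable, which mirrors the basis of the "significance" probabilities mentioned in the abstract.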

  3. ROCView: prototype software for data collection in jackknife alternative free-response receiver operating characteristic analysis

    Science.gov (United States)

    Thompson, J; Hogg, P; Thompson, S; Manning, D; Szczepura, K

    2012-01-01

    ROCView has been developed as an image display and response capture (IDRC) solution for consistent recording of reader responses in relation to the free-response receiver operating characteristic paradigm. A web-based solution to IDRC for observer response studies allows observations to be completed from any location, assuming that display performance and viewing conditions are consistent with the study being completed. The simplistic functionality of the software allows observations to be completed without supervision. ROCView can display images from multiple modalities, in a randomised order if required. Following registration, observers are prompted to begin their image evaluation. All data are recorded via mouse clicks, one to localise (mark) and one to score confidence (rate) using either an ordinal or continuous rating scale. Up to nine “mark-rating” pairs can be made per image. Unmarked images are given a default score of zero. Upon completion of the study, both true-positive and false-positive reports can be downloaded and adapted for analysis. ROCView has the potential to be a useful tool in the assessment of modality performance difference for a range of imaging methods. PMID:22573294

  4. The rigors of aligning performance

    OpenAIRE

    Hart, Andrew; Lucas, James

    2015-01-01

    Approved for public release; distribution is unlimited This Joint Applied Project addresses what can be done within the Naval Facilities Engineering Command Northwest community to better align its goals among competing interests from various stakeholders, while balancing the operational and regulatory constraints that often conflict with stakeholder goals and objectives. As a cross-functional organization, competing interests among the various business lines, support lines, and other stake...

  5. The Rigors of Aligning Performance

    Science.gov (United States)

    2015-06-01

    …importance of the organization’s goals. To better align the command’s goals with departmental goals, setting and continuously communicating goals and goal… As a result of the 2004 organizational restructure, and as defined in the CONOPS, NAVFAC now operates as a matrix organization with integrated “vertical

  6. Maintaining rigor in research: flaws in a recent study and a reanalysis of the relationship between state abortion laws and maternal mortality in Mexico.

    Science.gov (United States)

    Darney, Blair G; Saavedra-Avendano, Biani; Lozano, Rafael

    2017-01-01

    A recent publication [Koch E, Chireau M, Pliego F, Stanford J, Haddad S, Calhoun B, Aracena P, Bravo M, Gatica S, Thorp J. Abortion legislation, maternal healthcare, fertility, female literacy, sanitation, violence against women and maternal deaths: a natural experiment in 32 Mexican states. BMJ Open 2015;5(2):e006013] claimed that Mexican states with more restrictive abortion laws had lower levels of maternal mortality. Our objectives are to replicate the analysis, reanalyze the data and offer a critique of the key flaws of the Koch study. We used corrected maternal mortality data (2006-2013), live births, and state-level indicators of poverty. We replicate the published analysis. We then reclassified state-level exposure to abortion on demand based on actual availability of abortion (Mexico City versus the other 31 states) and test the association of abortion access and the maternal mortality ratio (MMR) using descriptives over time, pooled chi-square tests and regression models. We included 256 state-year observations. We did not find significant differences in MMR between Mexico City (MMR=49.1) and the 31 states (MMR=44.6; p=.44). Using Koch's classification of states, we replicated published differences of higher MMR where abortion is more available. We found a significant, negative association between MMR and availability of abortion in the same multivariable models as Koch, but using our state classification (beta=-22.49, 95% CI=-38.9; -5.99). State-level poverty remains highly correlated with MMR. Koch makes errors in methodology and interpretation, making false causal claims about abortion law and MMR. MMR is falling most rapidly in Mexico City, but our main study limitation is an inability to draw causal inference about abortion law or access and maternal mortality. We need rigorous evidence about the health impacts of increasing access to safe abortion worldwide. Transparency and integrity in research is crucial, as well as perhaps even more in

  7. Prediction of protein structural classes by Chou's pseudo amino acid composition: approached using continuous wavelet transform and principal component analysis.

    Science.gov (United States)

    Li, Zhan-Chao; Zhou, Xi-Bin; Dai, Zong; Zou, Xiao-Yong

    2009-07-01

    A prior knowledge of protein structural classes can provide useful information about a protein's overall structure, so quick and accurate computational determination of protein structural class is very important in protein science. One key requirement for such computational methods is an accurate representation of protein samples. Here, based on the concept of Chou's pseudo-amino acid composition (AAC; Chou, Proteins: Structure, Function, and Genetics, 43:246-255, 2001), a novel feature extraction method that combines continuous wavelet transform (CWT) with principal component analysis (PCA) was introduced for the prediction of protein structural classes. Firstly, a digital signal was obtained by mapping each amino acid according to various physicochemical properties. Secondly, CWT was utilized to extract a new feature vector based on the wavelet power spectrum (WPS), which contains more abundant sequence-order information in the frequency and time domains, and PCA was then used to reorganize the feature vector to decrease information redundancy and computational complexity. Finally, a pseudo-amino acid composition feature vector was formed to represent the primary sequence by coupling the AAC vector with a set of new WPS feature vectors in an orthogonal space by PCA. As a showcase, the rigorous jackknife cross-validation test was performed on the working datasets. The results indicated that prediction quality was improved, and the current approach to protein representation may serve as a useful complementary vehicle in classifying other attributes of proteins, such as enzyme family class, subcellular localization, membrane protein types and protein secondary structure, etc.
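
    The "rigorous jackknife cross-validation test" referred to here and throughout these records is leave-one-out cross-validation: each protein is predicted by a model trained on all remaining proteins. A minimal sketch on synthetic stand-in data (the SVC classifier and dataset sizes are placeholders, not the authors' setup):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.svm import SVC

    # Stand-in data: 120 "proteins", 20 features, 4 structural classes.
    X, y = make_classification(n_samples=120, n_features=20, n_informative=10,
                               n_classes=4, random_state=0)

    # Jackknife (leave-one-out) test: each sample is predicted by a model
    # trained on all remaining samples; the mean score is the success rate.
    scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=LeaveOneOut())
    print(f"jackknife success rate: {scores.mean():.3f}")
    ```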

  8. Prediction of Protein Structural Classes for Low-Similarity Sequences Based on Consensus Sequence and Segmented PSSM

    Directory of Open Access Journals (Sweden)

    Yunyun Liang

    2015-01-01

    Full Text Available Prediction of protein structural classes for low-similarity sequences is useful for understanding fold patterns, regulation, functions, and interactions of proteins. It is well known that feature extraction is significant for prediction of protein structural class, and it mainly uses the protein primary sequence, the predicted secondary structure sequence, and the position-specific scoring matrix (PSSM). Currently, prediction based solely on the PSSM has played a key role in improving prediction accuracy. In this paper, we propose a novel method called CSP-SegPseP-SegACP by fusing consensus sequence (CS), segmented PsePSSM, and segmented autocovariance transformation (ACT) based on PSSM. Three widely used low-similarity datasets (1189, 25PDB, and 640) are adopted in this paper. Then a 700-dimensional (700D) feature vector is constructed, and the dimension is decreased to 224D by using principal component analysis (PCA). To verify the performance of our method, rigorous jackknife cross-validation tests are performed on the 1189, 25PDB, and 640 datasets. Comparison of our results with existing PSSM-based methods demonstrates that our method achieves favorable and competitive performance. This will offer an important complement to other PSSM-based methods for prediction of protein structural classes for low-similarity sequences.
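
    The 700D-to-224D reduction step described above is a standard PCA projection. A minimal sketch with random stand-in features of the stated sizes (not the actual fused CS/PsePSSM/ACT features):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    features = rng.normal(size=(1189, 700))  # one 700D vector per sequence

    pca = PCA(n_components=224)              # keep 224 principal components
    reduced = pca.fit_transform(features)

    print(reduced.shape)                                         # (1189, 224)
    print(f"variance retained: {pca.explained_variance_ratio_.sum():.2%}")
    ```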

  9. GANTRAS - a system of codes for the solution of the multigroup transport equation with a rigorous treatment of anisotropic neutron scattering

    International Nuclear Information System (INIS)

    Schwenk-Ferrero, A.

    1986-11-01

    GANTRAS is a system of codes for neutron transport calculations in which the anisotropy of elastic and inelastic (including (n,n'x)-reactions) scattering is fully taken into account. This is achieved by employing a rigorous method, the so-called I*-method, to represent the scattering term of the transport equation, and by using double-differential cross-sections to describe the emission of secondary neutrons. The I*-method was incorporated into the conventional transport code ONETRAN, whose subroutines were modified for the new purpose. An implementation of the updated version ANTRA1 was accomplished for plane and spherical geometry. ANTRA1 was included in GANTRAS and linked to other modules that prepare angle-dependent transfer matrices. The GANTRAS code consists of three modules: 1. The CROMIX code, which calculates the macroscopic transfer matrices for mixtures on the basis of microscopic nuclide-dependent data. 2. The ATP code, which generates discretized angular transfer probabilities (i.e., discretizes the I*-function). 3. The ANTRA1 code, which performs S_N transport calculations in one-dimensional plane and spherical geometries. This structure allows GANTRAS to be adapted to various transport problems. (orig.)

  10. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations

    Directory of Open Access Journals (Sweden)

    Grimshaw Jeremy M

    2010-02-01

    Full Text Available Abstract. Background: There is growing interest in the use of cognitive, behavioural, and organisational theories in implementation research. However, the extent of use of theory in implementation research is uncertain. Methods: We conducted a systematic review of use of theory in 235 rigorous evaluations of guideline dissemination and implementation studies published between 1966 and 1998. Use of theory was classified according to type of use (explicitly theory based, some conceptual basis, and theoretical construct used) and stage of use (choice/design of intervention, process/mediators/moderators, and post hoc/explanation). Results: Fifty-three of 235 studies (22.5%) were judged to have employed theories, including 14 studies that explicitly used theory. The majority of studies (n = 42) used only one theory; the maximum number of theories employed by any study was three. Twenty-five different theories were used. A small number of theories accounted for the majority of theory use, including PRECEDE (Predisposing, Reinforcing, and Enabling Constructs in Educational Diagnosis and Evaluation), diffusion of innovations, information overload and social marketing (academic detailing). Conclusions: There was poor justification of choice of intervention and use of theory in implementation research in the identified studies until at least 1998. Future research should explicitly identify the justification for the interventions. Greater use of explicit theory to understand barriers, design interventions, and explore mediating pathways and moderators is needed to advance the science of implementation research.

  11. The place of reference and the rigor of method in journalism: some considerations [O lugar de referência e o rigor do método no Jornalismo: algumas considerações]

    Directory of Open Access Journals (Sweden)

    Alfredo Eurico Vizeu

    2010-07-01

    Full Text Available The aim of this article is to propose some questions regarding journalism as a place of reference and the consequent need for rigor in the method of investigation. We analyse journalism as one of the central institutions orienting modern man, and we problematize the question of journalistic investigation. Based on the theoretical frameworks of cordiality, of the pedagogical function of journalism, and of the method of news gathering, selection and production, we point out some clues that indicate the changes taking place in the journalistic field in this already nearly complete first decade of the twenty-first century.

  12. High survival rates of Campylobacter coli under different stress conditions suggest that more rigorous food control measures might be needed in Brazil.

    Science.gov (United States)

    Gomes, Carolina N; Passaglia, Jaqueline; Vilela, Felipe P; Pereira da Silva, Fátima M H S; Duque, Sheila S; Falcão, Juliana P

    2018-08-01

    Campylobacter spp. have been the most commonly reported gastrointestinal bacterial pathogen in many countries. Consumption of improperly prepared poultry meat has been the main transmission route of Campylobacter spp. Although Brazil is the largest exporter of poultry meat in the world, campylobacteriosis has been a neglected disease in the country. The aim of this study was to characterize 50 Campylobacter coli strains isolated from different sources in Brazil regarding the frequency of 16 virulence genes and their survival capability under five different stress conditions. All strains studied presented the cadF, flaA, and sodB genes that are considered essential for colonization. All strains grew at 4 °C and 37 °C after 24 h. High survival rates were observed when the strains were incubated in BHI with 7.5% NaCl and exposed to acid and oxidative stress. In conclusion, the pathogenic potential of the strains studied was reinforced by the presence of several important virulence genes and by the high growth and survival rates of the majority of those strains under different stress conditions. The results enabled a better understanding of strains circulating in Brazil and suggest that more rigorous control measures may be needed, given the importance of contaminated food as vehicles for Campylobacter coli. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. ASTM Committee C28: International Standards for Properties and Performance of Advanced Ceramics-Three Decades of High-Quality, Technically-Rigorous Normalization

    Science.gov (United States)

    Jenkins, Michael G.; Salem, Jonathan A.

    2016-01-01

    Physical and mechanical properties and performance of advanced ceramics and glasses are difficult to measure correctly without the proper techniques. For over three decades, ASTM Committee C28 on Advanced Ceramics has developed high-quality, technically-rigorous, full-consensus standards (e.g., test methods, practices, guides, terminology) to measure properties and performance of monolithic and composite ceramics that may be applied to glasses in some cases. These standards contain testing particulars for many mechanical, physical, and thermal properties and for the performance of these materials. As a result, these standards are used to generate accurate, reliable, repeatable and complete data. Within Committee C28, users, producers, researchers, designers, academicians, etc. have written, continually updated, and validated through round-robin test programs 50 standards since the Committee's founding in 1986. This paper provides a detailed retrospective of the 30 years of ASTM Committee C28, including a graphical pictogram listing of C28 standards along with examples of the tangible benefits of standards for advanced ceramics to demonstrate their practical applications.

  14. ASTM Committee C28: International Standards for Properties and Performance of Advanced Ceramics, Three Decades of High-quality, Technically-rigorous Normalization

    Science.gov (United States)

    Jenkins, Michael G.; Salem, Jonathan A.

    2016-01-01

    Physical and mechanical properties and performance of advanced ceramics and glasses are difficult to measure correctly without the proper techniques. For over three decades, ASTM Committee C28 on Advanced Ceramics has developed high-quality, rigorous, full-consensus standards (e.g., test methods, practices, guides, terminology) to measure properties and performance of monolithic and composite ceramics that may be applied to glasses in some cases. These standards contain testing particulars for many mechanical, physical, and thermal properties and for the performance of these materials. As a result, these standards provide accurate, reliable, repeatable and complete data. Within Committee C28, users, producers, researchers, designers, academicians, etc. have written, continually updated, and validated through round-robin test programs nearly 50 standards since the Committee's founding in 1986. This paper provides a retrospective review of the 30 years of ASTM Committee C28, including a graphical pictogram listing of C28 standards along with examples of the tangible benefits of advanced ceramics standards to demonstrate their practical applications.

  15. Rigor mortis and livor mortis in a living patient: A fatal case of acute total occlusion of the infrarenal abdominal aorta following renal surgery

    Directory of Open Access Journals (Sweden)

    Høyer Christian Bjerre

    2016-06-01

    Full Text Available A 63-year-old woman underwent a nephrectomy on the right side for renal cancer. Postoperatively she developed abdominal and lower back pain, which was treated with an injection of analgesics in an epidural catheter. The following morning it was discovered that the patient had cold legs with pallor and no palpable femoral pulse. Rigor mortis and livor mortis were diagnosed in both legs, even though the patient was still alive and awake. Doppler ultrasound examination revealed the absence of blood flow in the lower part of the abdominal aorta and distally. A cross disciplinary conference including specialists in urology, orthopaedics, vascular surgery, anaesthesiology, internal medicine, and intensive care concluded that no lifesaving treatment was possible, and the patient died the following day. A forensic autopsy revealed severe atherosclerosis with thrombosis and dissection of the abdominal aorta. This case clearly demonstrates that a vascular emergency should be considered when patients complain about pain in the lower back, abdomen or limbs. Clinicians should be especially aware of symptoms of tissue death that can be masked by epidural analgesia.

  16. Carbon monoxide stunning of Atlantic salmon (Salmo salar L.) modifies rigor mortis and sensory traits as revealed by NIRS and other instruments.

    Science.gov (United States)

    Concollato, Anna; Parisi, Giuliana; Masoero, Giorgio; Romvàri, Robert; Olsen, Rolf-Erik; Dalle Zotte, Antonella

    2016-08-01

    Methods of stunning used in salmon slaughter are still the subject of research. Fish quality can be influenced by pre-, ante- and post-mortem conditions, including handling before slaughter, slaughter methods and storage conditions. Carbon monoxide (CO) is known to improve colour stability in red muscle and to reduce microbial growth and lipid oxidation in live fish exposed to CO. Quality differences in Atlantic salmon, Salmo salar L., stunned by CO or percussion were evaluated and compared by different techniques [near infrared reflectance spectroscopy (NIRS), electronic nose (EN), electronic tongue (ET)] and sensory analysis. Thawed samples, freeze-dried preparations and NIRS devices proved to be the most efficient combinations for discriminating the treatments applied to salmon, i.e., first discriminating the stunning methods adopted, then back-predicting the maximum time to reach rigor mortis, and finally correlating some sensory attributes. A trained panel found significant differences between control and CO-stunned salmon: reduced tactile crumbliness, reduced odour and aroma intensities, and reduced tenderness of CO-treated fillets. CO stunning reduced radiation absorbance in spectra of thawed and freeze-dried fillets, but not in fillet samples stored in ethanol, where it may have interacted with myoglobin and myosin. The good results in rapid discrimination of thawed samples by NIRS suggest suitable applications in the fish industry. CO treatment could mitigate sensory perception, but consumer tests are needed to confirm our findings. © 2015 Society of Chemical Industry.

  17. Explaining the variation in the shear force of lamb meat using sarcomere length, the rate of rigor onset and pH.

    Science.gov (United States)

    Hopkins, D L; Toohey, E S; Lamb, T A; Kerr, M J; van de Ven, R; Refshauge, G

    2011-08-01

    The temperature at which pH = 6.0 (temp@pH6) affects the tenderness and eating quality of sheep meat. Because measuring sarcomere length is expensive, it is not routinely used as a variable to explain variation in shear force, so whether measures such as temp@pH6 are as useful needs to be established. Measures of rigor onset in 261 carcases, including temp@pH6, were evaluated in this study for their ability to explain some of the variation in shear force. The results show that for product aged 1 day, combinations of temp@pH6, the pH at 18 °C and the pH at 24 h provided a larger reduction (almost double) in total shear-force variation than sarcomere length alone, with pH at 24 h being the single best measure. For product aged 5 days, pH at 18 °C was the single best measure. Inclusion of sarcomere length did give some improvement, but the marginal increase would not be cost effective. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
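
    The analysis described here amounts to comparing the variance in shear force explained (R^2) by different predictor subsets. A minimal sketch on synthetic data (the coefficients and noise level are invented; only the carcase count is taken from the abstract):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n = 261                                  # carcase count from the abstract
    temp_ph6, ph18, ph24 = rng.normal(size=(3, n))
    shear = 2.0 * ph24 + 1.0 * ph18 + rng.normal(scale=2.0, size=n)

    def r2(*predictors):
        """In-sample R^2 of a linear fit of shear force on the predictors."""
        X = np.column_stack(predictors)
        return LinearRegression().fit(X, shear).score(X, shear)

    print(f"pH at 24 h alone:       R^2 = {r2(ph24):.2f}")
    print(f"temp@pH6 + pH18 + pH24: R^2 = {r2(temp_ph6, ph18, ph24):.2f}")
    ```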

  18. A "new" approach to the quantitative statistical dynamics of plasma turbulence: The optimum theory of rigorous bounds on steady-state transport

    International Nuclear Information System (INIS)

    Krommes, J.A.; Kim, C.

    1990-01-01

    The fundamental problem in the theory of turbulent transport is to find the flux Γ of a quantity such as heat. Methods based on statistical closures are mired in conceptual controversies and practical difficulties. However, it is possible to bound Γ by employing constraints derived rigorously from the equations of motion. Brief reviews of the general theory and its application to passive advection are given. Then, a detailed application is made to anomalous resistivity generated by self-consistent turbulence in a reversed-field pinch. A nonlinear variational principle for an upper bound on the turbulent electromotive force for fixed current is formulated from the magnetohydrodynamic equations in cylindrical geometry. Numerical solution of a case constrained solely by energy balance leads to a reasonable bound and nonlinear eigenfunctions that share intriguing features with experimental data: The dominant mode numbers appear to be correct, and field reversal is predicted at reasonable values of the pinch parameter. Although open questions remain, upon considering all bounding calculations to date it can be concluded, remarkably, that global energy balance constrains transport sufficiently so that bounds derived therefrom are not unreasonable and that bounding calculations are feasible even for involved practical problems. The potential of the method has hardly been tapped; it provides a fertile area for future research

  19. Skin Bleaching and Dermatologic Health of African and Afro-Caribbean Populations in the US: New Directions for Methodologically Rigorous, Multidisciplinary, and Culturally Sensitive Research.

    Science.gov (United States)

    Benn, Emma K T; Alexis, Andrew; Mohamed, Nihal; Wang, Yan-Hong; Khan, Ikhlas A; Liu, Bian

    2016-12-01

    Skin-bleaching practices, such as using skin creams and soaps to achieve a lighter skin tone, are common throughout the world and are triggered by cosmetic reasons that oftentimes have deep historical, economic, sociocultural, and psychosocial roots. Exposure to chemicals in the bleaching products, notably, mercury (Hg), hydroquinone, and steroids, has been associated with a variety of adverse health effects, such as Hg poisoning and exogenous ochronosis. In New York City (NYC), skin care product use has been identified as an important route of Hg exposure, especially among Caribbean-born blacks and Dominicans. However, surprisingly sparse information is available on the epidemiology of the health impacts of skin-bleaching practices among these populations. We highlight the dearth of large-scale, comprehensive, community-based, clinical, and translational research in this area, especially the limited skin-bleaching-related research among non-White populations in the US. We offer five new research directions, including investigating the known and under-studied health consequences among populations for which the skin bleach practice is newly emerging at an alarming rate using innovative laboratory and statistical methods. We call for conducting methodologically rigorous, multidisciplinary, and culturally sensitive research in order to provide insights into the root and the epidemiological status of the practice and provide evidence of exposure-outcome associations, with an ultimate goal of developing potential intervention strategies to reduce the health burdens of skin-bleaching practice.

  20. A "new" approach to the quantitative statistical dynamics of plasma turbulence: The optimum theory of rigorous bounds on steady-state transport

    International Nuclear Information System (INIS)

    Krommes, J.A.; Kim, Chang-Bae

    1990-06-01

    The fundamental problem in the theory of turbulent transport is to find the flux Γ of a quantity such as heat. Methods based on statistical closures are mired in conceptual controversies and practical difficulties. However, it is possible to bound Γ by employing constraints derived rigorously from the equations of motion. Brief reviews of the general theory and its application to passive advection are given. Then, a detailed application is made to anomalous resistivity generated by self-consistent turbulence in a reversed-field pinch. A nonlinear variational principle for an upper bound on the turbulent electromotive force for fixed current is formulated from the magnetohydrodynamic equations in cylindrical geometry. Numerical solution of a case constrained solely by energy balance leads to a reasonable bound and nonlinear eigenfunctions that share intriguing features with experimental data: the dominant mode numbers appear to be correct, and field reversal is predicted at reasonable values of the pinch parameter. Although open questions remain, upon considering all bounding calculations to date one can conclude, remarkably, that global energy balance constrains transport sufficiently so that bounds derived therefrom are not unreasonable and that bounding calculations are feasible even for involved practical problems. The potential of the method has hardly been tapped; it provides a fertile area for future research. 29 refs

  1. Multi-platform mass spectrometry analysis of the CSF and plasma metabolomes of rigorously matched amyotrophic lateral sclerosis, Parkinson's disease and control subjects.

    Science.gov (United States)

    Wuolikainen, Anna; Jonsson, Pär; Ahnlund, Maria; Antti, Henrik; Marklund, Stefan L; Moritz, Thomas; Forsgren, Lars; Andersen, Peter M; Trupp, Miles

    2016-04-01

    Amyotrophic lateral sclerosis (ALS) and Parkinson's disease (PD) are protein-aggregation diseases that lack clear molecular etiologies. Biomarkers could aid in diagnosis, prognosis, planning of care, drug target identification and stratification of patients into clinical trials. We sought to characterize shared and unique metabolite perturbations between ALS and PD and matched controls selected from patients with other diagnoses, including differential diagnoses to ALS or PD that visited our clinic for a lumbar puncture. Cerebrospinal fluid (CSF) and plasma from rigorously age-, sex- and sampling-date matched patients were analyzed on multiple platforms using gas chromatography (GC) and liquid chromatography (LC)-mass spectrometry (MS). We applied constrained randomization of run orders and orthogonal partial least squares projection to latent structure-effect projections (OPLS-EP) to capitalize upon the study design. The combined platforms identified 144 CSF and 196 plasma metabolites with diverse molecular properties. Creatine was found to be increased and creatinine decreased in CSF of ALS patients compared to matched controls. Glucose was increased in CSF of ALS patients and α-hydroxybutyrate was increased in CSF and plasma of ALS patients compared to matched controls. Leucine, isoleucine and ketoleucine were increased in CSF of both ALS and PD. Together, these studies, in conjunction with earlier studies, suggest alterations in energy utilization pathways and have identified and further validated perturbed metabolites to be used in panels of biomarkers for the diagnosis of ALS and PD.

  2. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations.

    Science.gov (United States)

    Davies, Philippa; Walker, Anne E; Grimshaw, Jeremy M

    2010-02-09

    There is growing interest in the use of cognitive, behavioural, and organisational theories in implementation research. However, the extent of use of theory in implementation research is uncertain. We conducted a systematic review of use of theory in 235 rigorous evaluations of guideline dissemination and implementation studies published between 1966 and 1998. Use of theory was classified according to type of use (explicitly theory based, some conceptual basis, and theoretical construct used) and stage of use (choice/design of intervention, process/mediators/moderators, and post hoc/explanation). Fifty-three of 235 studies (22.5%) were judged to have employed theories, including 14 studies that explicitly used theory. The majority of studies (n = 42) used only one theory; the maximum number of theories employed by any study was three. Twenty-five different theories were used. A small number of theories accounted for the majority of theory use including PRECEDE (Predisposing, Reinforcing, and Enabling Constructs in Educational Diagnosis and Evaluation), diffusion of innovations, information overload and social marketing (academic detailing). There was poor justification of choice of intervention and use of theory in implementation research in the identified studies until at least 1998. Future research should explicitly identify the justification for the interventions. Greater use of explicit theory to understand barriers, design interventions, and explore mediating pathways and moderators is needed to advance the science of implementation research.

  3. A rigorous exercise on clinical investigation [Um exercício rigoroso de investigação clínica]

    Directory of Open Access Journals (Sweden)

    Angélica Bastos

    2007-12-01

    The article aims to draw out the coordinates of investigation from the psychoanalytical production of Juan Carlos Cosentino. Based on the procedure by which Freud conceived the elaboration of knowledge in psychoanalysis, the clinical questions of his investigation are delimited. In phobias, anguish and dreams, the psychoanalyst distinguishes two orders of bond in the analytic experience: on one hand, fantasy and transference neurosis; on the other, structure. We seek to demonstrate how far his rereading of Freud is driven by original problems, outlining a rigorous line of research propelled by the direction of the treatment and dictated by the function of the analyst's desire.

  4. Stochastic Inversion of Geomagnetic Observatory Data Including Rigorous Treatment of the Ocean Induction Effect With Implications for Transition Zone Water Content and Thermal Structure

    Science.gov (United States)

    Munch, F. D.; Grayver, A. V.; Kuvshinov, A.; Khan, A.

    2018-01-01

    In this paper we estimate and invert local electromagnetic (EM) sounding data for 1-D conductivity profiles in the presence of nonuniform oceans and continents, in order to rigorously account for the ocean induction effect that is known to strongly influence coastal observatories. We consider a new set of high-quality time series of geomagnetic observatory data, including hitherto unused data from island observatories installed over the last decade. The EM sounding data are inverted in the period range 3-85 days using stochastic optimization and model-exploration techniques to provide estimates of model range and uncertainty. The inverted conductivity profiles are best constrained in the depth range 400-1,400 km and reveal significant lateral variations between 400 km and 1,000 km depth. To interpret the inverted conductivity anomalies in terms of water content and temperature, we combine laboratory-measured electrical conductivity of mantle minerals with phase-equilibrium computations. Based on this procedure, relatively low temperatures (1200-1350°C) are observed in the transition zone (TZ) underneath stations located in Southern Australia, Southern Europe, Northern Africa, and North America. In contrast, higher temperatures (1400-1500°C) are inferred beneath observatories on islands, Northeast Asia, and central Australia. TZ water content beneath European and African stations is ~0.05-0.1 wt %, whereas higher water contents (~0.5-1 wt %) are inferred underneath North America, Asia, and Southern Australia. Comparison of the inverted water contents with laboratory-constrained water storage capacities suggests the presence of melt in or around the TZ underneath four geomagnetic observatories in North America and Northeast Asia.

  5. Caracterização do processo de rigor mortis em músculos de cordeiros da raça Santa Inês e F1 Santa Inês x Dorper Characterization of rigor mortis process of muscles lamb of Santa Inês breed and F1 Santa Inês x Dorper

    Directory of Open Access Journals (Sweden)

    Rafael dos Santos Costa

    2011-01-01

    The development of the rigor mortis process in the carcasses of slaughter animals directly influences meat quality. The characteristics of rigor mortis in sheep carcasses during industrial processing for chilled carcasses have been studied in other countries and, in Brazil, in Santa Inês sheep, but had not yet been established for F1 Santa Inês x Dorper sheep. The aim of this work was therefore to characterize the rigor mortis process of the Semitendinosus and Triceps brachii muscles during industrial chilling, and the tenderness of the meat, in 10 sheep carcasses. Ten intact male sheep were chosen at random, six of the Santa Inês breed and four F1 Santa Inês x Dorper, slaughtered at the Matadouro Frigorífico de Campos - Campos dos Goytacazes, Rio de Janeiro. After bleeding, temperature, pH and sarcomere length of the Semitendinosus muscle were analyzed at different time intervals (4 h, 6 h, 8 h, 10 h, 12 h and 24 h), and shear force (tenderness) at 48 h. In parallel, the correlation between the sensory analysis and the instrumental analysis of this muscle was assessed. The cold-room temperature ranged from 12.2°C (4 h) to -0.5°C (24 h), and the mean carcass temperature was 26.80°C and -0.20°C, respectively. The mean initial pH of the Semitendinosus muscle was 6.62 and the final pH 5.64, while in the Triceps brachii it was 6.50 (4 h) and 5.68 (24 h). Maximum sarcomere contraction of the Semitendinosus occurred at 12 h (1.50 µm) after bleeding and, in the Triceps brachii, in the interval between 10 h and 24 h (1.53 to 1.57 µm). In the Semitendinosus, shear force (tenderness) was similar between Santa Inês and F1 Santa Inês x Dorper lambs, showing that the genetic group does not influence meat tenderness. The sensory panel confirmed the results obtained in the instrumental analysis. In the correlation between the instrumental analysis (shear force) and the…

  6. Lessons learned from a rigorous peer-review process for building the Climate Literacy and Energy Awareness (CLEAN) collection of high-quality digital teaching materials

    Science.gov (United States)

    Gold, A. U.; Ledley, T. S.; McCaffrey, M. S.; Buhr, S. M.; Manduca, C. A.; Niepold, F.; Fox, S.; Howell, C. D.; Lynds, S. E.

    2010-12-01

    The topic of climate change permeates all aspects of our society: the news, household debates, scientific conferences, etc. To provide students with accurate information about climate science and energy awareness, educators require scientifically and pedagogically robust teaching materials. To address this need, the NSF-funded Climate Literacy & Energy Awareness Network (CLEAN) Pathway has assembled a new peer-reviewed digital collection as part of the National Science Digital Library (NSDL) featuring teaching materials centered on climate and energy science for grades 6 through 16. The scope and framework of the collection are defined by the Essential Principles of Climate Science (CCSP 2009) and a set of energy awareness principles developed in the project. The collection provides trustworthy teaching materials on these socially relevant topics and prepares students to become responsible decision-makers. While a peer-review process is desirable for curriculum developers as well as collection builders to ensure quality, its implementation is non-trivial. We have designed a rigorous and transparent peer-review process for the CLEAN collection, and our experiences provide general guidelines that can be used to judge the quality of digital teaching materials across disciplines. Our multi-stage review process ensures that only resources with teaching goals relevant to developing climate literacy and energy awareness are considered. Each relevant resource is reviewed by two individuals to assess its (i) scientific accuracy, (ii) pedagogic effectiveness, and (iii) usability/technical quality. A science review by an expert ensures scientific quality and accuracy. Resources that pass all review steps are forwarded to a review panel of educators and scientists who make a final decision regarding inclusion of the materials in the CLEAN collection. Results from the first panel review show that about 20% (~100) of the resources that were initially considered for inclusion…

  7. Computer vision-based evaluation of pre- and postrigor changes in size and shape of Atlantic cod (Gadus morhua) and Atlantic salmon (Salmo salar) fillets during rigor mortis and ice storage: effects of perimortem handling stress.

    Science.gov (United States)

    Misimi, E; Erikson, U; Digre, H; Skavhaug, A; Mathiassen, J R

    2008-03-01

    The present study describes the possibilities for using computer vision-based methods for the detection and monitoring of transient 2D and 3D changes in the geometry of a given product. The rigor contractions of unstressed and stressed fillets of Atlantic salmon (Salmo salar) and Atlantic cod (Gadus morhua) were used as a model system. Gradual changes in fillet shape and size (area, length, width, and roundness) were recorded for 7 and 3 d, respectively. Also, changes in fillet area and height (cross-section profiles) were tracked using a laser beam and a 3D digital camera. Another goal was to compare rigor developments of the 2 species of farmed fish, and whether perimortem stress affected the appearance of the fillets. Some significant changes in fillet size and shape were found (length, width, area, roundness, height) between unstressed and stressed fish during the course of rigor mortis as well as after ice storage (postrigor). However, the observed irreversible stress-related changes were small and would hardly mean anything for postrigor fish processors or consumers. The cod were less stressed (as defined by muscle biochemistry) than the salmon after the 2 species had been subjected to similar stress bouts. Consequently, the difference between the rigor courses of unstressed and stressed fish was more extreme in the case of salmon. However, the maximal whole fish rigor strength was judged to be about the same for both species. Moreover, the reductions in fillet area and length, as well as the increases in width, were basically of similar magnitude for both species. In fact, the increases in fillet roundness and cross-section height were larger for the cod. We conclude that the computer vision method can be used effectively for automated monitoring of changes in 2D and 3D shape and size of fish fillets during rigor mortis and ice storage. In addition, it can be used for grading of fillets according to uniformity in size and shape, as well as measurement of…

  8. iLoc-Animal: a multi-label learning classifier for predicting subcellular localization of animal proteins.

    Science.gov (United States)

    Lin, Wei-Zhong; Fang, Jian-An; Xiao, Xuan; Chou, Kuo-Chen

    2013-04-05

    Predicting protein subcellular localization is a challenging problem, particularly when query proteins have multi-label features meaning that they may simultaneously exist at, or move between, two or more different subcellular location sites. Most of the existing methods can only be used to deal with the single-label proteins. Actually, multi-label proteins should not be ignored because they usually bear some special function worthy of in-depth studies. By introducing the "multi-label learning" approach, a new predictor, called iLoc-Animal, has been developed that can be used to deal with the systems containing both single- and multi-label animal (metazoan except human) proteins. Meanwhile, to measure the prediction quality of a multi-label system in a rigorous way, five indices were introduced: "Absolute-True", "Absolute-False" (or "Hamming-Loss"), "Accuracy", "Precision", and "Recall". As a demonstration, the jackknife cross-validation was performed with iLoc-Animal on a benchmark dataset of animal proteins classified into the following 20 location sites: (1) acrosome, (2) cell membrane, (3) centriole, (4) centrosome, (5) cell cortex, (6) cytoplasm, (7) cytoskeleton, (8) endoplasmic reticulum, (9) endosome, (10) extracellular, (11) Golgi apparatus, (12) lysosome, (13) mitochondrion, (14) melanosome, (15) microsome, (16) nucleus, (17) peroxisome, (18) plasma membrane, (19) spindle, and (20) synapse, where many proteins belong to two or more locations. For such a complicated system, the outcomes achieved by iLoc-Animal for all the aforementioned five indices were quite encouraging, indicating that the predictor may become a useful tool in this area. It has not escaped our notice that the multi-label approach and the rigorous measurement metrics can also be used to investigate many other multi-label problems in molecular biology. As a user-friendly web-server, iLoc-Animal is freely accessible to the public at the web-site.
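
    As a hedged illustration of the five indices named above: in the multi-label literature they are usually defined set-theoretically over each protein's predicted and true label sets. The sketch below (Python) uses those common set-based definitions and hypothetical label sets; it is not the exact implementation used by iLoc-Animal.

        # Set-based multi-label evaluation indices (sketch; assumes the common
        # definitions from the multi-label learning literature, not necessarily
        # the exact formulas used by iLoc-Animal).

        def multilabel_indices(true_sets, pred_sets, n_labels):
            n = len(true_sets)
            precision = sum(len(t & p) / len(p) for t, p in zip(true_sets, pred_sets) if p) / n
            recall = sum(len(t & p) / len(t) for t, p in zip(true_sets, pred_sets) if t) / n
            accuracy = sum(len(t & p) / len(t | p) for t, p in zip(true_sets, pred_sets)) / n
            absolute_true = sum(t == p for t, p in zip(true_sets, pred_sets)) / n
            # "Absolute-False" is often reported as the Hamming loss: the fraction
            # of label slots on which prediction and truth disagree.
            hamming_loss = sum(len(t ^ p) for t, p in zip(true_sets, pred_sets)) / (n * n_labels)
            return dict(precision=precision, recall=recall, accuracy=accuracy,
                        absolute_true=absolute_true, hamming_loss=hamming_loss)

        # Hypothetical example: 3 proteins, 20 candidate locations (as in iLoc-Animal).
        true_sets = [{6, 16}, {2, 18}, {13}]
        pred_sets = [{6, 16}, {2}, {13, 10}]
        print(multilabel_indices(true_sets, pred_sets, n_labels=20))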

  9. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
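
    The combination described above — cross-validation for model selection, jackknife replicates for prediction errors, and bootstrap resampling of those replicates for confidence bounds — can be sketched as follows. This is a minimal illustration assuming synthetic site data and a simple k-nearest-neighbour predictor standing in for the local spatial nonparametric model; the paper's actual estimator and resampling details may differ.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical data standing in for drilled (training) and undrilled (target) sites.
        train_xy = rng.uniform(0, 10, size=(60, 2))
        train_vol = 5 + train_xy[:, 0] + rng.normal(0, 1, 60)   # observed gas volumes
        target_xy = rng.uniform(0, 10, size=(25, 2))

        def knn_predict(xy_train, vol_train, xy_target, k=5):
            """Simple local nonparametric predictor: mean volume of the k nearest sites."""
            preds = []
            for p in xy_target:
                d = np.linalg.norm(xy_train - p, axis=1)
                preds.append(vol_train[np.argsort(d)[:k]].mean())
            return np.array(preds)

        # Jackknife: delete one training site at a time and recompute the regional total.
        n = len(train_vol)
        jack_totals = np.empty(n)
        for i in range(n):
            keep = np.arange(n) != i
            jack_totals[i] = knn_predict(train_xy[keep], train_vol[keep], target_xy).sum()

        # Bootstrap the jackknife replicates to get confidence bounds on the total volume.
        boot = np.array([rng.choice(jack_totals, size=n, replace=True).mean()
                         for _ in range(2000)])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"total volume estimate: {jack_totals.mean():.1f}, 95% bounds: [{lo:.1f}, {hi:.1f}]")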

  10. Caracterização do processo de rigor mortis do músculo Ilio-ischiocaudalis de jacaré-do-pantanal (Caiman crocodilus yacare e maciez da carne Characterization of rigor mortis process of muscle Ilio-ischiocaudalis of pantanal alligator (Caiman crocodilus yacare and meat tenderness

    Directory of Open Access Journals (Sweden)

    Juliana Paulino Vieira

    2012-03-01

    This work used six Pantanal alligator (Caiman crocodilus yacare) carcasses to characterize the rigor mortis process of the Ilio-ischiocaudalis muscle during industrial chilling and to evaluate the tenderness of this meat. The alligators were chosen at random and slaughtered at the Cooperativa de Criadores do Jacaré do Pantanal (COOCRIJAPAN), Cáceres, Mato Grosso. After bleeding, the chilling-room and carcass temperatures and the pH were measured, and samples were collected for determination of sarcomere length, shear force and cooking loss at different time intervals (0.5, 3, 5, 7, 10, 12, 15, 24 and 36 h). The chilling-room temperature ranged from 2.6°C (0.5 h) to 0.9°C (36 h), and the mean carcass temperature from 21.0°C to 4.2°C, respectively. The mean initial pH of the muscle was 6.7 and the final pH 5.6; maximum sarcomere contraction of the Ilio-ischiocaudalis occurred at 15 hours after bleeding (1.5 µm). This meat presented a shear force of less than 6.0 kg.

  11. Injection-salting and cold-smoking of farmed atlantic cod (Gadus morhua L.) and Atlantic salmon (Salmo salar L.) at different stages of Rigor Mortis: effect on physical properties.

    Science.gov (United States)

    Akse, L; Birkeland, S; Tobiassen, T; Joensen, S; Larsen, R

    2008-10-01

    Processing of fish is generally conducted postrigor, but prerigor processing is associated with some potential advantages. The aim of this study was to examine how five processing regimes of cold-smoked cod and salmon, conducted at different stages of rigor, influenced yield, fillet shrinkage, and gaping. Farmed cod and salmon were filleted, salted by brine injection of 25% NaCl, and smoked for 2 h at different stages of rigor. Filleting and salting prerigor resulted in increased fillet shrinkage and a smaller increase in weight during brine injection, which in turn was correlated to the salt content of the fillet. These effects were more pronounced in cod fillets than in salmon. Early processing reduced fillet gaping, and fillets were evaluated as having a firmer texture. In a follow-up trial with cod, shrinkage and weight gain during injection were studied as a function of processing time postmortem. No changes in weight gain were observed for fillets salted within the first 24 h postmortem; however, by delaying processing 12 h postmortem, the pronounced and rapid shrinkage of cod fillets during brine injection was halved.

  12. NBA-Palm: prediction of palmitoylation site implemented in Naïve Bayes algorithm

    Directory of Open Access Journals (Sweden)

    Jin Changjiang

    2006-10-01

    Background: Protein palmitoylation, an essential and reversible post-translational modification (PTM), has been implicated in cellular dynamics and plasticity. Although numerous experimental studies have been performed to explore the molecular mechanisms underlying palmitoylation processes, the intrinsic feature of substrate specificity has remained elusive. Thus, computational approaches for palmitoylation prediction are much desired for further experimental design. Results: In this work, we present NBA-Palm, a novel computational method based on the Naïve Bayes algorithm for prediction of palmitoylation sites. The training data are curated from the scientific literature (PubMed) and include 245 palmitoylated sites from 105 distinct proteins after redundancy elimination. The proper window length for a potential palmitoylated peptide is optimized as six. To evaluate the prediction performance of NBA-Palm, 3-fold cross-validation, 8-fold cross-validation and jackknife validation were carried out. Prediction accuracies reach 85.79% for 3-fold cross-validation, 86.72% for 8-fold cross-validation and 86.74% for jackknife validation. Two further algorithms, RBF network and support vector machine (SVM), have also been employed and compared with NBA-Palm. Conclusion: Taken together, our analyses demonstrate that NBA-Palm is a useful computational program that provides insights for further experimentation. The accuracy of NBA-Palm is comparable with that of our previously described tool CSS-Palm. NBA-Palm is freely accessible at http://www.bioinfo.tsinghua.edu.cn/NBA-Palm.

  13. NBA-Palm: prediction of palmitoylation site implemented in Naïve Bayes algorithm.

    Science.gov (United States)

    Xue, Yu; Chen, Hu; Jin, Changjiang; Sun, Zhirong; Yao, Xuebiao

    2006-10-17

    Protein palmitoylation, an essential and reversible post-translational modification (PTM), has been implicated in cellular dynamics and plasticity. Although numerous experimental studies have been performed to explore the molecular mechanisms underlying palmitoylation processes, the intrinsic feature of substrate specificity has remained elusive. Thus, computational approaches for palmitoylation prediction are much desired for further experimental design. In this work, we present NBA-Palm, a novel computational method based on the Naïve Bayes algorithm for prediction of palmitoylation sites. The training data are curated from the scientific literature (PubMed) and include 245 palmitoylated sites from 105 distinct proteins after redundancy elimination. The proper window length for a potential palmitoylated peptide is optimized as six. To evaluate the prediction performance of NBA-Palm, 3-fold cross-validation, 8-fold cross-validation and jackknife validation were carried out. Prediction accuracies reach 85.79% for 3-fold cross-validation, 86.72% for 8-fold cross-validation and 86.74% for jackknife validation. Two further algorithms, RBF network and support vector machine (SVM), have also been employed and compared with NBA-Palm. Taken together, our analyses demonstrate that NBA-Palm is a useful computational program that provides insights for further experimentation. The accuracy of NBA-Palm is comparable with that of our previously described tool CSS-Palm. NBA-Palm is freely accessible at http://www.bioinfo.tsinghua.edu.cn/NBA-Palm.
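
    The three validation protocols named in this and the preceding record (3-fold, 8-fold, and jackknife, i.e. leave-one-out cross-validation) differ only in how the data are partitioned. A minimal sketch, assuming random stand-in features for encoded six-residue peptide windows and scikit-learn's GaussianNB rather than NBA-Palm's own Naïve Bayes implementation:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score, KFold, LeaveOneOut

        rng = np.random.default_rng(1)
        # Hypothetical data: 200 peptide windows, 6 residues x 21 one-hot symbols.
        # Features and labels are random here, so accuracies will sit near chance;
        # the point is the partitioning protocol, not the performance.
        X = rng.integers(0, 2, size=(200, 6 * 21)).astype(float)
        y = rng.integers(0, 2, size=200)   # palmitoylated vs. not (hypothetical)

        clf = GaussianNB()
        for name, cv in [("3-fold", KFold(3, shuffle=True, random_state=0)),
                         ("8-fold", KFold(8, shuffle=True, random_state=0)),
                         ("jackknife (LOO)", LeaveOneOut())]:
            acc = cross_val_score(clf, X, y, cv=cv).mean()
            print(f"{name}: accuracy = {acc:.3f}")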

  14. Cerebral Blood Flow Measurement Using fMRI and PET: A Cross-Validation Study

    Directory of Open Access Journals (Sweden)

    Jean J. Chen

    2008-01-01

    An important aspect of functional magnetic resonance imaging (fMRI) is the study of brain hemodynamics, and MR arterial spin labeling (ASL) perfusion imaging has gained wide acceptance as a robust and noninvasive technique. However, the cerebral blood flow (CBF) measurements obtained with ASL fMRI have not been fully validated, particularly during global CBF modulations. We present a comparison of cerebral blood flow changes (ΔCBF) measured using a flow-sensitive alternating inversion recovery (FAIR) ASL perfusion method to those obtained using H2(15)O PET, which is the current gold standard for in vivo imaging of CBF. To study regional and global CBF changes, a group of 10 healthy volunteers were imaged under identical experimental conditions during presentation of 5 levels of visual stimulation and one level of hypercapnia. The CBF changes were compared using 3 types of region-of-interest (ROI) masks. FAIR measurements of CBF changes were found to be slightly lower than those measured with PET (average ΔCBF of 21.5±8.2% for FAIR versus 28.2±12.8% for PET at maximum stimulation intensity). Nonetheless, there was a strong correlation between the measurements of the two modalities. Finally, a t-test comparison of the slopes of the linear fits of PET versus ASL ΔCBF for all 3 ROI types indicated no significant difference from unity (P > .05).

  15. The Bland-Altman Method Should Not Be Used in Regression Cross-Validation Studies

    Science.gov (United States)

    O'Connor, Daniel P.; Mahar, Matthew T.; Laughlin, Mitzi S.; Jackson, Andrew S.

    2011-01-01

    The purpose of this study was to demonstrate the bias in the Bland-Altman (BA) limits of agreement method when it is used to validate regression models. Data from 1,158 men were used to develop three regression equations to estimate maximum oxygen uptake (R² = 0.40, 0.61, and 0.82, respectively). The equations were evaluated in a…
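
    For reference, the BA limits of agreement are the mean of the paired differences ± 1.96 standard deviations of those differences; the bias the study demonstrates arises because, for regression predictions, the differences are correlated with the magnitude of the measurements. A sketch with synthetic (hypothetical) data:

        import numpy as np

        rng = np.random.default_rng(2)
        observed = rng.normal(40, 8, 300)                        # hypothetical VO2max values
        predicted = 0.6 * observed + 16 + rng.normal(0, 4, 300)  # regression-style predictions

        diff = predicted - observed
        mean_diff = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        print(f"bias = {mean_diff:.2f}, limits of agreement = "
              f"[{mean_diff - loa:.2f}, {mean_diff + loa:.2f}]")

        # The problem the study warns about: for regression output, the differences
        # are correlated with the magnitude of the measurement, violating a BA assumption.
        r = np.corrcoef((predicted + observed) / 2, diff)[0, 1]
        print(f"correlation of difference with mean: r = {r:.2f}")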

  16. Evidence cross-validation and Bayesian inference of MAST plasma equilibria

    Energy Technology Data Exchange (ETDEWEB)

    Nessi, G. T. von; Hole, M. J. [Research School of Physical Sciences and Engineering, Australian National University, Canberra ACT 0200 (Australia); Svensson, J. [Max-Planck-Institut fuer Plasmaphysik, D-17491 Greifswald (Germany); Appel, L. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom)

    2012-01-15

    In this paper, current profiles for plasma discharges on the mega-ampere spherical tokamak are directly calculated from pickup coil, flux loop, and motional Stark effect observations via methods based in the statistical theory of Bayesian analysis. By representing toroidal plasma current as a series of axisymmetric current beams with rectangular cross-section and inferring the current for each one of these beams, flux-surface geometry and q-profiles are subsequently calculated by elementary application of Biot-Savart's law. The use of this plasma model in the context of Bayesian analysis was pioneered by Svensson and Werner on the Joint European Torus [Svensson and Werner, Plasma Phys. Controlled Fusion 50(8), 085002 (2008)]. In this framework, linear forward models are used to generate diagnostic predictions, and the probability distribution for the currents in the collection of plasma beams was subsequently calculated directly via application of Bayes' formula. In this work, we introduce a new diagnostic technique to identify and remove outlier observations associated with diagnostics falling out of calibration or suffering from an unidentified malfunction. These modifications enable a good agreement between Bayesian inference of the last-closed flux-surface and other corroborating data, such as that from force-balance considerations using EFIT++ [Appel et al., "A unified approach to equilibrium reconstruction", Proceedings of the 33rd EPS Conference on Plasma Physics (Rome, Italy, 2006)]. In addition, this analysis also yields errors on the plasma current profile and flux-surface geometry, as well as directly predicting the Shafranov shift of the plasma core.
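
    The core inference step described above — linear forward models plus Bayes' formula applied to a set of current beams — reduces, for Gaussian priors and Gaussian noise, to a closed-form linear-Gaussian posterior. A minimal sketch with a random matrix standing in for the Biot-Savart response of the real coil geometry (all dimensions and values hypothetical):

        import numpy as np

        rng = np.random.default_rng(5)

        # Hypothetical linear forward model: d = G m + noise, where m are the
        # currents in a small set of axisymmetric beams and d are magnetic
        # diagnostics (pickup coils / flux loops). G is random here, standing in
        # for the Biot-Savart response matrix of the real geometry.
        n_beams, n_obs = 10, 40
        G = rng.normal(size=(n_obs, n_beams))
        m_true = rng.normal(size=n_beams)
        sigma = 0.05
        d = G @ m_true + rng.normal(0, sigma, n_obs)

        # A Gaussian prior on beam currents and Gaussian noise give a closed-form posterior.
        prior_var = 10.0
        A = G.T @ G / sigma**2 + np.eye(n_beams) / prior_var
        post_cov = np.linalg.inv(A)
        post_mean = post_cov @ (G.T @ d) / sigma**2
        print("posterior std of first beam current:", np.sqrt(post_cov[0, 0]))
        print("max abs error of posterior mean:", np.abs(post_mean - m_true).max())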

  17. Estimation of Posterior Probabilities Using Multivariate Smoothing Splines and Generalized Cross-Validation.

    Science.gov (United States)

    1983-09-01

    [Abstract not recoverable from the scanned report; only front matter survives. It acknowledges support from the Consejo Nacional de Ciencia y Tecnología (Mexico), ONR Contract No. N00014-77-C-0675, and ARO Contract No. DAAG29-80-K-0042, and thanks the Department of Statistics.]

  18. Cross Validation on the Equality of Uav-Based and Contour-Based Dems

    Science.gov (United States)

    Ma, R.; Xu, Z.; Wu, L.; Liu, S.

    2018-04-01

    Unmanned Aerial Vehicles (UAVs) have been widely used for Digital Elevation Model (DEM) generation in geographic applications. This paper proposes a novel framework for generating DEMs from UAV images. It starts with the generation of point clouds by image matching, where the flight-control data are used as a reference for searching for the corresponding images, leading to significant time savings. In addition, a set of ground control points (GCPs) obtained from field surveying is used to transform the point clouds to the user's coordinate system. Following that, we use a multi-feature supervised classification method to discriminate non-ground points from ground ones. Finally, we generate the DEM by constructing triangular irregular networks and rasterizing. The experiments are conducted in the east of Jilin province in China, which has suffered from soil erosion for several years. The quality of the UAV-based DEM (UAV-DEM) is compared with that generated from contour interpolation (Contour-DEM). The comparison shows the higher resolution, as well as higher accuracy, of UAV-DEMs, which contain more geographic information. In addition, the RMSEs of the UAV-DEMs generated from point clouds with and without GCPs are ±0.5 m and ±20 m, respectively.

  19. Translation, cultural adaptation, cross-validation of the Turkish diabetes quality-of-life (DQOL) measure.

    Science.gov (United States)

    Yildirim, Aysegul; Akinci, Fevzi; Gozu, Hulya; Sargin, Haluk; Orbay, Ekrem; Sargin, Mehmet

    2007-06-01

    The aim of this study was to test the validity and reliability of the Turkish version of the diabetes quality-of-life (DQOL) questionnaire for use with patients with diabetes. The Turkish versions of the generic quality-of-life (QoL) scale 15D and the DQOL, together with socio-demographic and clinical parameters, were administered to 150 patients with type 2 diabetes. Study participants were randomly sampled from the Endocrinology and Diabetes Outpatient Department of Dr. Lutfi Kirdar Kartal Education and Research Hospital in Istanbul, Turkey. The Cronbach alpha coefficient of the overall DQOL scale was 0.89; the Cronbach alpha coefficients of the subscales ranged from 0.80 to 0.94. Distress, discomfort and its symptoms, depression, mobility, usual activities, and vitality on the 15D scale had statistically significant correlations with social/vocational worry and diabetes-related worry on the DQOL scale, indicating good convergent validity. Factor analysis identified four subscales: "satisfaction", "impact", "diabetes-related worry", and "social/vocational worry". Statistical analyses showed that the Turkish version of the DQOL is a valid and reliable instrument for measuring disease-related QoL in patients with diabetes. It is a simple and quick screening tool, with an administration time of about 15 ± 5.8 min, for measuring QoL in this population.
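
    Cronbach's alpha, the reliability coefficient reported above, is computed directly from the item-score matrix as α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal sketch with a hypothetical score matrix:

        import numpy as np

        def cronbach_alpha(items):
            """items: (n_respondents, k_items) array of questionnaire scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Hypothetical 5-item subscale answered by 8 patients.
        scores = np.array([[3, 4, 3, 4, 3],
                           [2, 2, 3, 2, 2],
                           [5, 4, 5, 5, 4],
                           [1, 2, 1, 1, 2],
                           [4, 4, 4, 5, 4],
                           [2, 3, 2, 2, 3],
                           [5, 5, 4, 5, 5],
                           [3, 3, 3, 4, 3]])
        print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")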

  20. Cross validation of bi-modal health-related stress assessment

    NARCIS (Netherlands)

    van den Broek, Egon; van der Sluis, Frans; Dijkstra, Ton

    This study explores the feasibility of objective and ubiquitous stress assessment. Twenty-five post-traumatic stress disorder patients participated in a controlled storytelling (ST) study and an ecologically valid reliving (RL) study. The two studies were meant to represent an early and a late therapy

  1. Cross-Validation of AFOQT Form S for Cyberspace Operations - Cyberspace Control

    Science.gov (United States)

    2016-04-20

    …TECHNICIAN, ACADEMIC, VERBAL, and QUANTITATIVE. Thompson, Skinner, Gould, Alley, & Shore (2010) provided a full description of the subtest and the… Strategic Research and Assessment Branch. Thompson, N., Skinner, J., Gould, R. B., Alley, W., & Shore, W. (2010). Development of the Air Force…

  2. Evidence cross-validation and Bayesian inference of MAST plasma equilibria

    International Nuclear Information System (INIS)

    Nessi, G. T. von; Hole, M. J.; Svensson, J.; Appel, L.

    2012-01-01

    In this paper, current profiles for plasma discharges on the mega-ampere spherical tokamak are directly calculated from pickup coil, flux loop, and motional Stark effect observations via methods based in the statistical theory of Bayesian analysis. By representing toroidal plasma current as a series of axisymmetric current beams with rectangular cross-section and inferring the current for each one of these beams, flux-surface geometry and q-profiles are subsequently calculated by elementary application of Biot-Savart's law. The use of this plasma model in the context of Bayesian analysis was pioneered by Svensson and Werner on the Joint European Torus [Svensson and Werner, Plasma Phys. Controlled Fusion 50(8), 085002 (2008)]. In this framework, linear forward models are used to generate diagnostic predictions, and the probability distribution for the currents in the collection of plasma beams was subsequently calculated directly via application of Bayes' formula. In this work, we introduce a new diagnostic technique to identify and remove outlier observations associated with diagnostics falling out of calibration or suffering from an unidentified malfunction. These modifications enable a good agreement between Bayesian inference of the last-closed flux-surface and other corroborating data, such as that from force-balance considerations using EFIT++ [Appel et al., "A unified approach to equilibrium reconstruction", Proceedings of the 33rd EPS Conference on Plasma Physics (Rome, Italy, 2006)]. In addition, this analysis also yields errors on the plasma current profile and flux-surface geometry, as well as directly predicting the Shafranov shift of the plasma core.

  3. Continuously revised assurance cases with stakeholders’ cross-validation: a DEOS experience

    Directory of Open Access Journals (Sweden)

    Kimio Kuramitsu

    2016-12-01

    Recently, assurance cases have received much attention in the field of software-based computer systems and IT services. However, software changes very often and is not subject to strong regulation; these two facts are the main challenges to be addressed in the development of software assurance cases. We propose a method of developing assurance cases by means of continuous revision at every stage of the system life cycle, including operation and service recovery in failure cases. Instead of being validated by a regulator, dependability arguments are validated by multiple stakeholders competing with each other. This paper reports our experience with the proposed method in the case of the Aspen education service. The case study demonstrates that continuous revision enables stakeholders to share dependability problems across software life-cycle stages, which will lead to the long-term improvement of service dependability.

  4. Cross-validation of picture completion effort indices in personal injury litigants and disability claimants.

    Science.gov (United States)

    Davis, Jeremy J; McHugh, Tara S; Bagley, Amy D; Axelrod, Bradley N; Hanks, Robin A

    2011-12-01

    Picture Completion (PC) indices from the Wechsler Adult Intelligence Scale, Third Edition, were investigated as performance validity indicators (PVIs) in a sample referred for independent neuropsychological examination. Participants from an archival database were included in the study if they were between the ages of 18 and 65 and were administered at least two PVIs. Effort measure performance yielded groups that passed all or failed one measure (Pass; n = 95) and failed two or more PVIs (Fail-2; n = 61). The Pass group performed better on PC than the Fail-2 group. PC cut scores were compared in differentiating Pass and Fail-2 groups. A PC raw score of ≤12 showed the best classification accuracy in this sample, correctly classifying 91% of Pass and 41% of Fail-2 cases. Overall, PC indices show good specificity and low sensitivity for exclusive use as PVIs, demonstrating promise for use as adjunctive embedded measures.

  5. Cross-Validation of Levenson's Psychopathy Scale in a Sample of Federal Female Inmates

    Science.gov (United States)

    Brinkley, Chad A.; Diamond, Pamela M.; Magaletta, Philip R.; Heigel, Caron P.

    2008-01-01

    Levenson, Kiehl, and Fitzpatrick's Self-Report Psychopathy Scale (LSRPS) is evaluated to determine the factor structure and concurrent validity of the instrument among 430 federal female inmates. Confirmatory factor analysis fails to validate the expected 2-factor structure. Subsequent exploratory factor analysis reveals a 3-factor structure…

  6. A Cross-Validation Study of the School Attitude Assessment Survey (SAAS).

    Science.gov (United States)

    McCoach, D. Betsy

    Factors commonly associated with underachievement in the research literature include low self-concept, low self-motivation/self-regulation, negative attitude toward school, and negative peer influence. This study attempts to isolate these four factors within a secondary school population. The purpose of the study was to design a valid and reliable…

  7. Cross-Validation of the Emotion Awareness Questionnaire for Children in Three Populations

    Science.gov (United States)

    Lahaye, Magali; Mikolajczak, Moira; Rieffe, Carolien; Villanueva, Lidon; Van Broeck, Nady; Bodart, Eddy; Luminet, Olivier

    2011-01-01

    The main aim of the present study was to examine the cross-cultural equivalence of a newly developed questionnaire, the Emotion Awareness Questionnaire (EAQ30) that assesses emotional awareness of children through self-report. Participants were recruited in three countries: the Netherlands (N = 665), Spain (N = 464), and Belgium (N = 707),…

  8. PARAMETER SELECTION IN LEAST SQUARES-SUPPORT VECTOR MACHINES REGRESSION ORIENTED, USING GENERALIZED CROSS-VALIDATION

    Directory of Open Access Journals (Sweden)

    ANDRÉS M. ÁLVAREZ MEZA

    2012-01-01

    In this work, we propose a methodology for the automatic selection of the free parameters of the least squares support vector machine (LS-SVM) regression technique, based on a multidimensional generalized cross-validation analysis of the LS-SVM system of linear equations. The developed technique does not require any prior knowledge from the user about the influence of the free parameters on the results. Experiments were carried out on two artificial and two real-world datasets. According to the results obtained, we conclude that the developed algorithm computes appropriate regressions with competitive relative errors.
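
    Generalized cross-validation scores a candidate regularization parameter from a single fit via the hat matrix, GCV(λ) = n‖(I − H)y‖² / [tr(I − H)]². The sketch below applies this to kernel ridge regression, a close relative of LS-SVM regression; the RBF kernel, its width, and the λ grid are illustrative assumptions, not the authors' settings.

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.uniform(-3, 3, size=(120, 1))
        y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 120)

        def rbf_kernel(A, B, gamma):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def gcv_score(K, y, lam):
            n = len(y)
            # Hat matrix of kernel ridge regression: H = K (K + lam I)^-1
            H = K @ np.linalg.solve(K + lam * np.eye(n), np.eye(n))
            resid = y - H @ y
            return n * (resid @ resid) / np.trace(np.eye(n) - H) ** 2

        K = rbf_kernel(X, X, gamma=0.5)
        lams = 10.0 ** np.arange(-6, 2)
        best = min(lams, key=lambda l: gcv_score(K, y, l))
        print("GCV-selected lambda:", best)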

  9. Inter-hospital Cross-validation of Irregular Discharge Patterns for Young vs. Old Psychiatric Patients

    Science.gov (United States)

    Mozdzierz, Gerald J.; Davis, William E.

    1975-01-01

    Type of discharge (irregular vs. regular) and length of hospitalization were used as unobtrusive measures of psychiatric patients' acceptance of the hospital treatment regimen in two age groups of patients (18-27 years and 45 years and above). (Author)

  10. The Adolescent Religious Coping Scale: Development, Validation, and Cross-Validation

    Science.gov (United States)

    Bjorck, Jeffrey P.; Braese, Robert W.; Tadie, Joseph T.; Gililland, David D.

    2010-01-01

    Research literature on adolescent coping is growing, but typically such studies have ignored religious coping strategies and their potential impact on functioning. To address this lack, we developed the Adolescent Religious Coping Scale and used its seven subscales to examine the relationship between religious coping and emotional functioning. A…

  11. A cross-validation study of the TGMD-2: The case of an adolescent population.

    Science.gov (United States)

    Issartel, Johann; McGrane, Bronagh; Fletcher, Richard; O'Brien, Wesley; Powell, Danielle; Belton, Sarahjane

    2017-05-01

    This study proposes an extension of a widely used test evaluating fundamental movement skill proficiency to an adolescent population, with a specific emphasis on validity and reliability for this older age group. Cross-sectional observational study. A total of 844 participants (n = 456 male; mean age 12.03 ± 0.49 years) took part in this study. The 12 fundamental movement skills of the TGMD-2 were assessed. Inter-rater reliability was examined to ensure a minimum of 95% consistency between coders. Confirmatory factor analysis was undertaken with a one-factor model (all 12 skills) and a two-factor model (6 locomotor skills and 6 object-control skills), as proposed by Ulrich et al. (2000). Model fit was examined using χ², TLI, CFI and RMSEA. Test-retest reliability was assessed with a subsample of 35 participants and reached intraclass correlation coefficients of 0.78 (locomotor), 0.76 (object-related) and 0.91 (gross motor skill proficiency). The confirmatory factor analysis did not display a good fit for either the one-factor or the two-factor model, owing to the very low contribution of several skills. A reduction in the number of skills to just seven (run, gallop, hop, horizontal jump, bounce, kick and roll) revealed an overall good fit by the TLI, CFI and RMSEA measures. The proposed new model offers the possibility of longitudinal studies tracking the maturation of fundamental movement skills across the child and adolescent spectrum, while also giving researchers a valid assessment tool for evaluating adolescents' fundamental movement skill proficiency.

  12. Cross-Validation of the PAI Negative Distortion Scale for Feigned Mental Disorders: A Research Report

    Science.gov (United States)

    Rogers, Richard; Gillard, Nathan D.; Wooley, Chelsea N.; Kelsey, Katherine R.

    2013-01-01

    A major strength of the Personality Assessment Inventory (PAI) is its systematic assessment of response styles, including feigned mental disorders. Recently, Mogge, Lepage, Bell, and Ragatz developed and provided the initial validation for the Negative Distortion Scale (NDS). Using rare symptoms as its detection strategy for feigning, the…

  13. Cross-validation of independent ultra-low-frequency magnetic recording systems for active fault studies

    Science.gov (United States)

    Wang, Can; Bin, Chen; Christman, Lilianna E.; Glen, Jonathan M. G.; Klemperer, Simon L.; McPhee, Darcy K.; Kappler, Karl N.; Bleier, Tom E.; Dunson, J. Clark

    2018-04-01

    When working with ultra-low-frequency (ULF) magnetic datasets, as with most geophysical time-series data, it is important to be able to distinguish between cultural signals, internal instrument noise, and natural external signals with their induced telluric fields. This distinction is commonly attempted using simultaneously recorded data from a spatially remote reference site. Here, instead, we compared data recorded by two systems with different instrumental characteristics at the same location over the same time period. We collocated two independent ULF magnetic systems, one from the QuakeFinder network and the other from the United States Geological Survey (USGS)-Stanford network, in order to cross-compare their data, characterize data reproducibility, and characterize signal origin. In addition, we used simultaneous measurements at a remote geomagnetic observatory to distinguish global atmospheric signals from local cultural signals. We demonstrated that the QuakeFinder and USGS-Stanford systems have excellent coherence, despite their different sensors and digitizers. Rare instances of isolated signals recorded by only one system or only one sensor indicate that caution is needed when attributing specific recorded signal features to specific origins.

  14. Repeated holdout Cross-Validation of Model to Estimate Risk of Lyme Disease by Landscape Attributes

    Science.gov (United States)

    We previously modeled Lyme disease (LD) risk at the landscape scale; here we evaluate the model's overall goodness-of-fit using holdout validation. Landscapes were characterized within road-bounded analysis units (AU). Observed LD cases (obsLD) were ascertained per AU. Data were ...
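
    Repeated holdout validation refits the model on many random train/test splits and aggregates a test-set fit statistic across the repetitions. A generic sketch, with a logistic model and synthetic covariates standing in for the landscape-attribute Lyme-risk model:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        X = rng.normal(size=(500, 4))                  # hypothetical landscape attributes per AU
        logit = X @ np.array([1.0, -0.5, 0.8, 0.0])
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # observed LD cases (hypothetical)

        aucs = []
        for seed in range(100):  # 100 repeated holdouts
            Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=seed)
            model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
            aucs.append(roc_auc_score(yte, model.predict_proba(Xte)[:, 1]))
        print(f"holdout AUC: {np.mean(aucs):.3f} ± {np.std(aucs):.3f}")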

  15. Bi-national cross-validation of an evidence-based conduct problem prevention model.

    Science.gov (United States)

    Porta, Carolyn M; Bloomquist, Michael L; Garcia-Huidobro, Diego; Gutiérrez, Rafael; Vega, Leticia; Balch, Rosita; Yu, Xiaohui; Cooper, Daniel K

    2018-04-01

    To (a) explore the preferences of Mexican parents and Spanish-speaking professionals working with migrant Latino families in Minnesota regarding the Mexican-adapted brief model versus the original conduct problems intervention, and (b) identify the potential challenges, and preferred solutions, to implementation of a conduct problems preventive intervention. The core practice elements of a conduct problems prevention program originating in the United States were adapted for prevention efforts in Mexico. Three focus groups were conducted in the United States, with Latino parents (n = 24; 2 focus groups) and professionals serving Latino families (n = 9; 1 focus group), to compare and discuss the Mexican-adapted model and the original conduct problems prevention program. Thematic analysis was conducted on the verbatim focus group transcripts in the original language spoken. Participants preferred the Mexican-adapted model. The following key areas were identified for cultural adaptation when delivering a conduct problems prevention program with Latino families: recruitment/enrollment strategies, program delivery format, and program content (i.e., child skills training, parent skills training, child-parent activities, and child-parent support). For both models, strengths, concerns, barriers, and strategies for overcoming concerns and barriers were identified. We summarize recommendations offered by participants to strengthen the effective implementation of a conduct problems prevention model with Latino families in the United States. This project demonstrates the strength of binational collaboration in critically examining cultural adaptations of evidence-based prevention programs that could be useful to diverse communities, families, and youth in other settings.

  16. Cross-Validation of the Implementation Leadership Scale (ILS) in Child Welfare Service Organizations.

    Science.gov (United States)

    Finn, Natalie K; Torres, Elisa M; Ehrhart, Mark G; Roesch, Scott C; Aarons, Gregory A

    2016-08-01

    The Implementation Leadership Scale (ILS) is a brief, pragmatic, and efficient measure that can be used for research or organizational development to assess leader behaviors and actions that actively support effective implementation of evidence-based practices (EBPs). The ILS was originally validated with mental health clinicians. This study validates the ILS factor structure with providers in community-based organizations (CBOs) providing child welfare services. Participants were 214 service providers working in 12 CBOs that provide child welfare services. All participants completed the ILS, reporting on their immediate supervisor. Confirmatory factor analyses were conducted to examine the factor structure of the ILS. Internal consistency reliability and measurement invariance were also examined. Confirmatory factor analyses showed acceptable fit to the hypothesized first- and second-order factor structure. Internal consistency reliability was strong and there was partial measurement invariance for the first-order factor structure when comparing child welfare and mental health samples. The results support the use of the ILS to assess leadership for implementation of EBPs in child welfare organizations. © The Author(s) 2016.

  17. Psychophysiological Associations between Chronic Tinnitus and Sleep: A Cross Validation of Tinnitus and Insomnia Questionnaires

    Directory of Open Access Journals (Sweden)

    Martin Schecklmann

    2015-01-01

    Background. The aim of the present study was to assess the prevalence of insomnia in chronic tinnitus and the association of tinnitus distress with sleep disturbance. Methods. We retrospectively analysed data of 182 patients with chronic tinnitus who completed the Tinnitus Questionnaire (TQ) and the Regensburg Insomnia Scale (RIS). Descriptive comparisons with the validation sample of the RIS, which consisted exclusively of patients with primary/psychophysiological insomnia, correlation analyses of the RIS with the TQ scales, and principal component analyses (PCA) in the tinnitus sample were performed. The TQ total score was corrected for the TQ sleep items. Results. The prevalence of insomnia was high in tinnitus patients (76%), and tinnitus distress correlated with sleep disturbance (r = 0.558). The TQ sleep subscore correlated with the RIS sum score (r = 0.690). PCA with all TQ and RIS items showed one sleep factor consisting of all RIS items and the TQ sleep items. PCA with only the TQ sleep and RIS items showed sleep- and tinnitus-specific factors. The sleep factors (RIS items only) were sleep depth and fearful focusing; the TQ sleep items represented tinnitus-related sleep problems. Discussion. Chronic tinnitus and primary insomnia are highly related and might share similar psychological and neurophysiological mechanisms leading to impaired sleep quality.

  18. Cross-Validation of Numerical and Experimental Studies of Transitional Airfoil Performance

    DEFF Research Database (Denmark)

    Frere, Ariane; Hillewaert, Koen; Sarlak, Hamid

    2015-01-01

    The aerodynamic performance characteristics of airfoils are the main input for estimating wind turbine blade loading as well as the annual energy production of wind farms. For transitional flow regimes these data are difficult to obtain, both experimentally and numerically, due to the very high sensitivity of the flow to perturbations, large-scale separation and performance hysteresis. The objective of this work is to improve the understanding of transitional airfoil flow performance by studying the S826 NREL airfoil at low Reynolds numbers (Re = 4·10⁴ and 1·10⁵) with two inherently different…

  19. Tomorrow's Research Library: Vigor or Rigor Mortis?

    Science.gov (United States)

    Hacken, Richard D.

    1988-01-01

    Compares, contrasts, and critiques predictions that have been made about the future of research libraries, focusing on the impact of technology on the library's role and users' needs. The discussion includes models for the adaptation of new technologies that may assist in library planning and change. (38 references) (CLB)

  20. Thermodynamic limit and decoherence: rigorous results

    Energy Technology Data Exchange (ETDEWEB)

    Frasca, Marco [Via Erasmo Gattamelata 3, 00176 Rome (Italy)

    2007-05-15

    The time evolution operator in quantum mechanics can be changed into a statistical operator by a Wick rotation. This strict relation between statistical mechanics and quantum evolution can reveal deep results when the thermodynamic limit is considered. These results translate into a set of theorems proving that these effects can be effectively at work, producing an emerging classical world without resorting to any external entity that in some cases cannot be properly defined. For a many-body system it has recently been shown that Gaussian decay of the coherence is the rule, with the duration of recurrences becoming ever smaller as the number of particles increases. This effect has been observed experimentally. More generally, a theorem about the coherence of bulk matter can be proved. All this leads us to the conclusion that a well-defined boundary between the quantum and the classical world does exist and that it can be drawn at the thermodynamic limit, extending in this way the deep link between statistical mechanics and quantum evolution to a high degree.