WorldWideScience

Sample records for scatterplots

  1. Extending a scatterplot for displaying group structure in multivariate ...

    African Journals Online (AJOL)

    ... when regarded as extensions of ordinary scatterplots to describe variation and group structure in multivariate observations, is demonstrated by presenting a case study from the South African wood pulp industry. It is shown how multidimensional standards specified by users of a product may be added to the biplot in the ...

  2. Comparative eye-tracking evaluation of scatterplots and parallel coordinates

    Directory of Open Access Journals (Sweden)

    Rudolf Netzel

    2017-06-01

    Full Text Available We investigate task performance and reading characteristics for scatterplots (Cartesian coordinates) and parallel coordinates. In a controlled eye-tracking study, we asked 24 participants to assess the relative distance of points in multidimensional space, depending on the diagram type (parallel coordinates or a horizontal collection of scatterplots), the number of data dimensions (2, 4, 6, or 8), and the relative distance between points (15%, 20%, or 25%). For a given reference point and two target points, we instructed participants to choose the target point that was closer to the reference point in multidimensional space. We present a visual scanning model that describes different strategies to solve this retrieval task for both diagram types, and propose corresponding hypotheses that we test using task completion time, accuracy, and gaze positions as dependent variables. Our results show that scatterplots outperform parallel coordinates significantly in 2 dimensions; however, the task was solved more quickly and more accurately with parallel coordinates in 8 dimensions. The eye-tracking data further show significant differences between Cartesian and parallel coordinates, as well as between different numbers of dimensions. For parallel coordinates, there is a clear trend toward shorter fixations and longer saccades with increasing number of dimensions. Using an area-of-interest (AOI) based approach, we identify different reading strategies for each diagram type: for parallel coordinates, the participants' gaze frequently jumped back and forth between pairs of axes, while axes were rarely focused on when viewing Cartesian coordinates. We further found that participants' attention is biased: toward the center of the whole plot for parallel coordinates, and skewed to the center/left side for Cartesian coordinates. We anticipate that these results may support the design of more effective visualizations for multidimensional data.

  3. TOP-DRAWER, Histograms, Scatterplots, Curve-Smoothing

    International Nuclear Information System (INIS)

    Chaffee, R.B.

    1988-01-01

    Description of program or function: TOP DRAWER produces histograms, scatterplots, data points with error bars and plot symbols, and curves passing through data points, with elaborate titles. It also does smoothing and calculates frequency distributions. There is little facility, however, for arithmetic manipulation. Because of its restricted applicability, TOP DRAWER can be controlled by a relatively simple set of commands, and this control is further simplified by the choice of reasonable default values for all parameters. Despite this emphasis on simplicity, TOP DRAWER plots are of exceptional quality and are suitable for publication. Input is normally from card-image records, although a set of subroutines is provided to accommodate FORTRAN calls. The program contains switches which can be set to generate code suitable for execution on IBM, DEC VAX, and PRIME computers.

  4. Shape Perception in 3-D Scatterplots Using Constant Visual Angle Glyphs

    DEFF Research Database (Denmark)

    Stenholt, Rasmus; Madsen, Claus B.

    2012-01-01

    When viewing 3-D scatterplots in immersive virtual environments, one commonly encountered problem is the presence of clutter, which obscures the view of any structures of interest in the visualization. In order to solve this problem, we propose to render the 3-D glyphs such that they always cover...... to regular perspective glyphs, especially when a large amount of clutter is present. Furthermore, our evaluation revealed that perception of structures in 3-D scatterplots is significantly affected by the volumetric density of the glyphs in the plot....

  5. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
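The first two procedures in this family — linear association measured with a correlation coefficient and monotonic association measured with a rank correlation coefficient — can be sketched in a few lines. This is a minimal pure-Python illustration on invented data, not the paper's two-phase fluid-flow model:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation: detects linear relationships in a scatterplot."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def ranks(v):
    """Replace each value by its rank (1..n); no ties with continuous data."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def spearman(x, y):
    """Rank correlation: detects monotonic (not necessarily linear) trends."""
    return pearson(ranks(x), ranks(y))

random.seed(1)
x = [random.uniform(0, 1) for _ in range(500)]
# Nonlinear but monotonic response: rank correlation should score higher.
y_monotonic = [math.exp(3 * xi) + random.gauss(0, 0.5) for xi in x]
print(pearson(x, y_monotonic), spearman(x, y_monotonic))
```

On this convex, monotonic relationship the rank correlation comes out closer to 1 than the ordinary correlation, which is exactly why the procedures are applied in a sequence of increasing generality.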

  6. ScatterJn: An ImageJ Plugin for Scatterplot-Matrix Analysis and Classification of Spatially Resolved Analytical Microscopy Data

    Directory of Open Access Journals (Sweden)

    Fabian Zeitvogel

    2016-02-01

    Full Text Available We present ScatterJn, an ImageJ (and Fiji) plugin for scatterplot-based exploration and analysis of analytical microscopy data. In contrast to commonly used scatterplot tools, it handles more than two input images (or image stacks, respectively) by creating a matrix of pairwise scatterplots. The tool offers the possibility to manually classify pixels by selecting regions of datapoints in the scatterplots as well as in the spatial domain. We demonstrate its functioning using a set of elemental maps acquired by SEM-EDX mapping of a soil sample. The plugin is available at https://savannah.nongnu.org/projects/scatterjn.

  7. Statistical analyses of scatterplots to identify important factors in large-scale simulations, 1: Review and comparison of techniques

    International Nuclear Information System (INIS)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-01-01

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked

  8. Statistical analyses of scatterplots to identify important factors in large-scale simulations, 2: robustness of techniques

    International Nuclear Information System (INIS)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-01-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (i) Type I errors are unavoidable, (ii) Type II errors can occur when inappropriate analysis procedures are used, (iii) physical explanations should always be sought for why statistical procedures identify variables as being important, and (iv) the identification of important variables tends to be stable for independent Latin hypercube samples

  9. Extending a scatterplot for displaying group structure in multivariate ...

    African Journals Online (AJOL)

  10. On the Benefits of Using Constant Visual Angle Glyphs in Interactive Exploration of 3D Scatterplots

    DEFF Research Database (Denmark)

    Stenholt, Rasmus

    2014-01-01

    structures. Furthermore, we introduce a new approach to glyph visualization—constant visual angle (CVA) glyphs—which has the potential to mitigate the effect of clutter at the cost of dispensing with the common real-world depth cue of relative size. In a controlled experiment where test subjects had...... to locate and select visualized structures in an immersive virtual environment, we identified several significant results. One result is that CVA glyphs ease perception of structures in cluttered environments while not deteriorating it when clutter is absent. Another is the existence of threshold densities...

  11. Radiometric Normalization of Temporal Images Combining Automatic Detection of Pseudo-Invariant Features from the Distance and Similarity Spectral Measures, Density Scatterplot Analysis, and Robust Regression

    Directory of Open Access Journals (Sweden)

    Ana Paula Ferreira de Carvalho

    2013-05-01

    Full Text Available Radiometric precision is difficult to maintain in orbital images due to several factors (atmospheric conditions, Earth-sun distance, detector calibration, illumination, and viewing angles). These unwanted effects must be removed for radiometric consistency among temporal images, leaving only land-leaving radiances, for optimum change detection. A variety of relative radiometric correction techniques were developed for the correction or rectification of images of the same area through use of reference targets whose reflectance does not change significantly with time, i.e., pseudo-invariant features (PIFs). This paper proposes a new technique for radiometric normalization, which uses three sequential methods for an accurate PIF selection: spectral measures of temporal data (spectral distance and similarity), density scatterplot analysis (ridge method), and robust regression. The spectral measures used are the spectral angle (Spectral Angle Mapper, SAM), spectral correlation (Spectral Correlation Mapper, SCM), and Euclidean distance. The spectral measures between the spectra at times t1 and t2 are calculated for each pixel. After classification using threshold values, it is possible to define points with the same spectral behavior, including PIFs. The distance and similarity measures are complementary and can be calculated together. The ridge method uses a density plot generated from images acquired on different dates for the selection of PIFs. In a density plot, the invariant pixels together form a high-density ridge, while variant pixels (clouds and land cover changes) are spread out at low density, facilitating their exclusion. Finally, the selected PIFs are subjected to a robust regression (M-estimate) between pairs of temporal bands for the detection and elimination of outliers, and to obtain the optimal linear equation for a given set of target points. The robust regression is insensitive to outliers, i.e., observations that appear to deviate strongly from the rest of the data in which they occur (in our case, change areas). The new sequential methods enable one to select, by different attributes, a number of invariant targets over the brightness range of the images.
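The three spectral measures named in the abstract can be sketched directly. The toy spectra below are invented for illustration; the point is that a pure gain change (e.g. illumination) leaves the angle and correlation measures unchanged while the Euclidean distance grows:

```python
import math

def sam(a, b):
    """Spectral Angle Mapper: angle (radians) between two spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def scm(a, b):
    """Spectral Correlation Mapper: Pearson correlation of the two spectra."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def euclidean(a, b):
    """Plain Euclidean distance between the two spectra."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Same spectral shape at t1 and t2, but doubled brightness at t2:
t1 = [0.10, 0.20, 0.30, 0.40]
t2 = [0.20, 0.40, 0.60, 0.80]
print(sam(t1, t2), scm(t1, t2), euclidean(t1, t2))
```

This complementarity is why the paper applies distance and similarity measures together before thresholding candidate PIFs.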

  13. Using Maslow’s Hierarchy of Needs to Identify Indicators of Potential Mass Migration Events

    Science.gov (United States)

    2016-03-23

  14. Cadmium versus phosphate in the world ocean

    NARCIS (Netherlands)

    Baar, Hein J.W. de; Saager, Paul M.; Nolting, Rob F.; Meer, Jaap van der

    1994-01-01

    Cadmium (Cd) is one of the best studied trace metals in seawater and at individual stations exhibits a more or less linear relation with phosphate. The compilation of all data from all oceans taken from over 30 different published sources into one global dataset yields only a broad scatterplot of Cd

  15. The half-half plot

    NARCIS (Netherlands)

    Einmahl, J.H.J.; Gantner, M.

    2012-01-01

    The Half-Half (HH) plot is a new graphical method to investigate qualitatively the shape of a regression curve. The empirical HH-plot counts observations in the lower and upper quarter of a strip that moves horizontally over the scatterplot. The plot displays jumps clearly and reveals further

  16. Microprocessor aided data acquisition at VEDAS

    International Nuclear Information System (INIS)

    Ziem, P.; Drescher, B.; Kapper, K.; Kowallik, R.

    1985-01-01

    Three microprocessor systems have been developed to support data acquisition in nuclear physics multiparameter experiments. A bit-slice processor accumulates up to 256 1-dim spectra and 16 2-dim spectra. A microprocessor, based on the AM 29116 ALU, performs a fast consistency check on the coincidence data. A VME-Bus double-processor displays a colored scatterplot

  17. GeoXp: An R Package for Exploratory Spatial Data Analysis

    Directory of Open Access Journals (Sweden)

    Thibault Laurent

    2012-04-01

    Full Text Available We present GeoXp, an R package implementing interactive graphics for exploratory spatial data analysis. We use a data set concerning public schools of the French Midi-Pyrénées region to illustrate the use of these exploratory techniques based on the coupling between a statistical graph and a map. Besides elementary plots like boxplots, histograms or simple scatterplots, GeoXp also couples maps with Moran scatterplots, variogram clouds, Lorenz curves and other graphical tools. In order to make the most of the multidimensionality of the data, GeoXp includes dimension reduction techniques such as principal components analysis and cluster analysis whose results are also linked to the map.
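The Moran scatterplot that GeoXp couples with a map plots the standardized variable against its spatial lag; the slope of the point cloud is Moran's I. A minimal sketch (the 4-region chain layout and attribute values below are invented for illustration, and GeoXp itself is R, not Python):

```python
values = [10.0, 12.0, 30.0, 33.0]                    # attribute at 4 regions
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # chain adjacency

n = len(values)
mean = sum(values) / n
sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
z = [(v - mean) / sd for v in values]                # standardized variable (x-axis)

# Row-standardized spatial lag: average of each region's neighbors (y-axis).
lag = [sum(z[j] for j in neighbors[i]) / len(neighbors[i]) for i in range(n)]

# With row-standardized weights, Moran's I is the slope of lag ~ z.
moran_i = sum(zi * li for zi, li in zip(z, lag)) / sum(zi * zi for zi in z)
print(list(zip(z, lag)), moran_i)
```

Similar low values sit next to each other and likewise the high ones, so the cloud slopes upward and I comes out positive, which is what the coupled map view would highlight.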

  18. Unravelling the performance of individual scholars: use of Canonical Biplot analysis to explore the performance of scientists by academic rank and scientific field

    OpenAIRE

    Díaz-Faes, Adrián A.; Costas Comesaña, Rodrigo; Galindo, M. Purificación; Bordons, María

    2015-01-01

    Individual research performance needs to be addressed by means of a diverse set of indicators capturing the multidimensional framework of science. In this context, Biplot methods emerge as powerful and reliable visualization tools similar to a scatterplot but capturing the multivariate covariance structures among bibliometric indicators. In this paper, we introduce the Canonical Biplot technique to explore differences in the scientific performance of Spanish CSIC researchers, o...

  19. Ventricular Cycle Length Characteristics Estimative of Prolonged RR Interval during Atrial Fibrillation

    Science.gov (United States)

    CIACCIO, EDWARD J.; BIVIANO, ANGELO B.; GAMBHIR, ALOK; EINSTEIN, ANDREW J.; GARAN, HASAN

    2014-01-01

    Background When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. Methods From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13–T2, and (2) Tm–T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. Results The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. Conclusions A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. PMID:23998759
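The two scatterplot parameters can be sketched as follows. This is our reading of the abstract; the exact estimator behind "average second derivative from 10 previous pairs of RR differences" is an assumption, as are the example interval values:

```python
def rr_parameters(rr):
    """rr: the 13 RR intervals preceding the candidate interval T0,
    oldest (T13) first, most recent (T1) last. Units: seconds."""
    assert len(rr) == 13
    t13_to_t2, t1 = rr[:12], rr[12]
    # Parameter 1 (RRv): average magnitude of the second differences over the
    # 10 consecutive pairs of first differences within T13..T2 (our assumption).
    first_diff = [b - a for a, b in zip(t13_to_t2, t13_to_t2[1:])]     # 11 values
    second_diff = [b - a for a, b in zip(first_diff, first_diff[1:])]  # 10 values
    rrv = sum(abs(d) for d in second_diff) / len(second_diff)
    # Parameter 2: Tm - T1, the mean of T13..T2 minus the latest interval.
    tm = sum(t13_to_t2) / len(t13_to_t2)
    return rrv, tm - t1

# A steady rhythm followed by one short T1: low variability, large Tm - T1,
# the region of the scatterplot where a prolonged T0 would be predicted.
rrv, gap = rr_parameters([0.80] * 12 + [0.55])
print(rrv, gap)
```

Per patient, the paper then tunes acceptance ranges for these two parameters on the first hour of data and applies them to the second hour.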

  20. Random Number Simulations Reveal How Random Noise Affects the Measurements and Graphical Portrayals of Self-Assessed Competency

    Directory of Open Access Journals (Sweden)

    Edward Nuhfer

    2016-01-01

    Full Text Available Self-assessment measures of competency are blends of an authentic self-assessment signal that researchers seek to measure and random disorder or "noise" that accompanies that signal. In this study, we use random number simulations to explore how random noise affects critical aspects of self-assessment investigations: reliability, correlation, critical sample size, and the graphical representations of self-assessment data. We show that graphical conventions common in the self-assessment literature introduce artifacts that invite misinterpretation. Troublesome conventions include: (y minus x) vs. (x) scatterplots; (y minus x) vs. (x) column graphs aggregated as quantiles; line charts that display data aggregated as quantiles; and some histograms. Graphical conventions that generate minimal artifacts include scatterplots with a best-fit line that depict (y) vs. (x) measures (self-assessed competence vs. measured competence) plotted by individual participant scores, and (y) vs. (x) scatterplots of collective average measures of all participants plotted item-by-item. This last graphical convention attenuates noise and improves the definition of the signal. To provide relevant comparisons across varied graphical conventions, we use a single dataset derived from paired measures of 1154 participants' self-assessed competence and demonstrated competence in science literacy. Our results show that different numerical approaches employed in investigating and describing self-assessment accuracy are not equally valid. By modeling this dataset with random numbers, we show how recognizing the varied expressions of randomness in self-assessment data can improve the validity of numeracy-based descriptions of self-assessment.
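The (y minus x) vs. (x) artifact the authors warn about is easy to reproduce with stdlib Python: two independent streams of pure noise are uncorrelated, yet plotting their difference against x manufactures a strong negative correlation by construction (variable names below are illustrative, not the study's):

```python
import random

def corr(u, v):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = (sum((a - mu) ** 2 for a in u)) ** 0.5
    sv = (sum((b - mv) ** 2 for b in v)) ** 0.5
    return cov / (su * sv)

random.seed(42)
n = 5000
x = [random.random() for _ in range(n)]  # pure-noise "measured competence"
y = [random.random() for _ in range(n)]  # pure-noise "self-assessed competence"

r_plain = corr(x, y)                                  # near 0: no real signal
r_artifact = corr([b - a for a, b in zip(x, y)], x)   # near -1/sqrt(2) ~ -0.71
print(r_plain, r_artifact)
```

Since (y − x) contains −x, its correlation with x is approximately −1/√2 for independent equal-variance noise, which is exactly the kind of artifact a (y − x) vs. x scatterplot invites readers to misinterpret as a real effect.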

  1. Easy QC 7 tools

    International Nuclear Information System (INIS)

    1981-04-01

    This book explains the methods of the seven QC tools, the mindset for using them, and the effects of applying them. It covers graphs; Pareto diagrams, including how to draw and use them; characteristic (cause-and-effect) diagrams; check sheets, including their purpose and subjects, the goals and types of check sheets, and points to note in using them; histograms, including their application and use; stratification; scatterplots; and control charts; together with methods for promoting improvement and practical cases of using the QC tools.

  2. Measuring magnetic correlations in nanoparticle assemblies

    DEFF Research Database (Denmark)

    Beleggia, Marco; Frandsen, Cathrine

    2014-01-01

    We illustrate how to extract correlations between magnetic moments in assemblies of nanoparticles from, e.g., electron holography data providing the combined knowledge of particle size distribution, inter-particle distances, and magnitude and orientation of each magnetic moment within...... a nanoparticle superstructure. We show, based on simulated data, how to build a radial/angular pair distribution function f(r,θ) encoding the spatial and angular difference between every pair of magnetic moments. A scatter-plot of f(r,θ) reveals the degree of structural and magnetic order present, and hence......

  3. Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.

    Science.gov (United States)

    Azzari, Lucio; Foi, Alessandro

    2014-08-01

    We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixture of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.
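The "conventional" approach that this abstract contrasts itself with — building a scatterplot of local mean-variance pairs from homogeneous samples and fitting the noise model through it — can be sketched on simulated data. The linear signal-dependent noise model var = a × signal below is an illustrative assumption, not the paper's model:

```python
import random

random.seed(0)
a_true = 0.05  # assumed noise law: variance grows linearly with signal level

# Homogeneous "patches": constant signal plus signal-dependent Gaussian noise.
pairs = []
for _ in range(200):
    s = random.uniform(0.2, 1.0)
    patch = [s + random.gauss(0, (a_true * s) ** 0.5) for _ in range(400)]
    m = sum(patch) / len(patch)
    var = sum((p - m) ** 2 for p in patch) / (len(patch) - 1)
    pairs.append((m, var))  # one point of the mean-variance scatterplot

# The scatterplot clusters around the line var = a * mean; recover the slope
# by least squares through the origin.
a_hat = sum(m * v for m, v in pairs) / sum(m * m for m, _ in pairs)
print(a_hat)
```

The proposed method in the paper replaces these homogeneous patches with large heterogeneous patches drawn at random, analyzed via mixtures of Gaussians; the sketch above only shows the baseline it improves on.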

  4. Easy QC 7 tools

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1981-04-15

    This book explains the methods of the seven QC tools, the mindset for using them, and the effects of applying them. It covers graphs; Pareto diagrams, including how to draw and use them; characteristic (cause-and-effect) diagrams; check sheets, including their purpose and subjects, the goals and types of check sheets, and points to note in using them; histograms, including their application and use; stratification; scatterplots; and control charts; together with methods for promoting improvement and practical cases of using the QC tools.

  5. Statistical analysis of medical data using SAS

    CERN Document Server

    Der, Geoff

    2005-01-01

    Contents: An Introduction to SAS; Describing and Summarizing Data; Basic Inference; Scatterplots, Correlation, Simple Regression and Smoothing; Analysis of Variance and Covariance; Multiple Regression; Logistic Regression; The Generalized Linear Model; Generalized Additive Models; Nonlinear Regression Models; The Analysis of Longitudinal Data I; The Analysis of Longitudinal Data II: Models for Normal Response Variables; The Analysis of Longitudinal Data III: Non-Normal Response; Survival Analysis; Analysis of Multivariate Data: Principal Components and Cluster Analysis; References.

  6. Calculating the knowledge-based similarity of functional groups using crystallographic data

    Science.gov (United States)

    Watson, Paul; Willett, Peter; Gillet, Valerie J.; Verdonk, Marcel L.

    2001-09-01

    A knowledge-based method for calculating the similarity of functional groups is described and validated. The method is based on experimental information derived from small molecule crystal structures. These data are used in the form of scatterplots that show the likelihood of a non-bonded interaction being formed between functional group A (the `central group') and functional group B (the `contact group' or `probe'). The scatterplots are converted into three-dimensional maps that show the propensity of the probe at different positions around the central group. Here we describe how to calculate the similarity of a pair of central groups based on these maps. The similarity method is validated using bioisosteric functional group pairs identified in the Bioster database and Relibase. The Bioster database is a critical compilation of thousands of bioisosteric molecule pairs, including drugs, enzyme inhibitors and agrochemicals. Relibase is an object-oriented database containing structural data about protein-ligand interactions. The distributions of the similarities of the bioisosteric functional group pairs are compared with similarities for all the possible pairs in IsoStar, and are found to be significantly different. Enrichment factors are also calculated showing the similarity method is statistically significantly better than random in predicting bioisosteric functional group pairs.

  7. Different rates of DNA replication at early versus late S-phase sections: multiscale modeling of stochastic events related to DNA content/EdU (5-ethynyl-2'deoxyuridine) incorporation distributions.

    Science.gov (United States)

    Li, Biao; Zhao, Hong; Rybak, Paulina; Dobrucki, Jurek W; Darzynkiewicz, Zbigniew; Kimmel, Marek

    2014-09-01

    Mathematical modeling allows relating molecular events to single-cell characteristics assessed by multiparameter cytometry. In the present study we labeled newly synthesized DNA in A549 human lung carcinoma cells with 15-120 min pulses of EdU. All DNA was stained with DAPI and cellular fluorescence was measured by laser scanning cytometry. The frequency of cells in the ascending (left) side of the "horseshoe"-shaped EdU/DAPI bivariate distributions reports the rate of DNA replication at the time of entrance to S phase, while their frequency in the descending (right) side is a marker of DNA replication rate at the time of transition from S to G2 phase. To understand the connection between molecular-scale events and scatterplot asymmetry, we developed a multiscale stochastic model, which simulates DNA replication and cell cycle progression of individual cells and produces in silico EdU/DAPI scatterplots. For each S-phase cell the time points at which replication origins are fired are modeled by a non-homogeneous Poisson process (NHPP). Shifted gamma distributions are assumed for the durations of the cell cycle phases (G1, S, and G2/M). Depending on whether the rate of DNA synthesis is an increasing or decreasing function, simulated EdU/DAPI bivariate graphs show predominance of cells in the left (early-S) or right (late-S) side of the horseshoe distribution. Assuming the NHPP rate estimated from independent experiments, simulated EdU/DAPI graphs are nearly indistinguishable from those experimentally observed. This finding proves consistency between the S-phase DNA-replication rate based on molecular-scale analyses and cell population kinetics ascertained from EdU/DAPI scatterplots, and demonstrates that DNA replication rate at entrance to S is relatively slow compared with its rather abrupt termination during the S to G2 transition. Our approach opens a possibility of similar modeling to study the effect of anticancer drugs on DNA replication/cell cycle progression and also to quantify other
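A non-homogeneous Poisson process of origin-firing times, as used in the model above, can be simulated by Lewis-Shedler thinning. The intensity function below is invented for illustration and is not the rate estimated in the paper:

```python
import random

def sample_nhpp(rate, rate_max, t_end, rng):
    """Simulate event times on [0, t_end] for a non-homogeneous Poisson
    process with intensity rate(t) <= rate_max, by thinning (Lewis-Shedler)."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)         # candidate from homogeneous envelope
        if t > t_end:
            return times
        if rng.random() < rate(t) / rate_max:  # keep with probability rate(t)/rate_max
            times.append(t)

rng = random.Random(7)
# Illustrative only: origin firing that decays linearly across an 8 h S phase,
# i.e. a DNA-synthesis rate that is fast at entrance to S and slow near G2.
rate = lambda t: 50.0 * (1.0 - t / 8.0)
fires = sample_nhpp(rate, 50.0, 8.0, rng)
print(len(fires))
```

The integral of this intensity over [0, 8] is 200, so roughly 200 firings are produced, with most of them concentrated in early S, mirroring how the NHPP rate shapes the simulated EdU/DAPI scatterplots.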

  8. Preliminary Geologic/spectral Analysis of LANDSAT-4 Thematic Mapper Data, Wind River/bighorn Basin Area, Wyoming

    Science.gov (United States)

    Lang, H. R.; Conel, J. E.; Paylor, E. D.

    1984-01-01

    A LIDQA evaluation for geologic applications of a LANDSAT TM scene covering the Wind River/Bighorn Basin area, Wyoming, is examined. This involves a quantitative assessment of data quality including spatial and spectral characteristics. Analysis is concentrated on the 6 visible, near infrared, and short wavelength infrared bands. Preliminary analysis demonstrates that: (1) principal component images derived from the correlation matrix provide the most useful geologic information. To extract surface spectral reflectance, the TM radiance data must be calibrated. Scatterplots demonstrate that TM data can be calibrated and sensor response is essentially linear. Low instrumental offset and gain settings result in spectral data that do not utilize the full dynamic range of the TM system.

  9. The Effect of Adherence to Dietary Tracking on Weight Loss: Using HLM to Model Weight Loss over Time.

    Science.gov (United States)

    Ingels, John Spencer; Misra, Ranjita; Stewart, Jonathan; Lucke-Wold, Brandon; Shawley-Brzoska, Samantha

    2017-01-01

    The role of dietary tracking on weight loss remains unexplored despite being part of multiple diabetes and weight management programs. Hence, participants of the Diabetes Prevention and Management (DPM) program (12 months, 22 sessions) tracked their food intake for the duration of the study. A scatterplot of days tracked versus total weight loss revealed a nonlinear relationship. Hence, the number of possible tracking days was divided to create three groups of participants: rare, inconsistent, and consistent trackers (the last group tracking more than 66% of total days). After controlling for initial body mass index, hemoglobin A1c, and gender, only consistent trackers had significant weight loss (-9.99 pounds), following a linear relationship with consistent loss throughout the year. In addition, the weight loss trend for the rare and inconsistent trackers followed a nonlinear path, with the holidays slowing weight loss and the onset of summer increasing weight loss. These results show the importance of frequent dietary tracking for consistent long-term weight loss success.

  10. Local regression type methods applied to the study of geophysics and high frequency financial data

    Science.gov (United States)

    Mariani, M. C.; Basu, K.

    2014-09-01

    In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is very accurate, up to a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied time series analysis; in this paper our local regression models perform a spatial analysis of the geophysics data, providing different information. For the high-frequency data, our models estimate the curve of best fit, where the data are dependent on time.
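The core of locally weighted scatterplot smoothing is one weighted linear fit per evaluation point, with tricube weights over a nearest-neighbor bandwidth. A minimal single-pass sketch (no robustness iterations, parameters invented; real Lowess adds iterative reweighting):

```python
import math

def lowess_point(x0, xs, ys, frac=0.3):
    """Smoothed value at x0 from one locally weighted linear fit."""
    n = len(xs)
    k = max(2, int(frac * n))                               # neighborhood size
    d = sorted(abs(x - x0) for x in xs)[k - 1] or 1e-12     # local bandwidth
    w = [max(0.0, 1 - (abs(x - x0) / d) ** 3) ** 3 for x in xs]  # tricube
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    b_num = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    b_den = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    b = b_num / b_den if b_den else 0.0
    return my + b * (x0 - mx)                               # local line at x0

xs = [i / 50 for i in range(51)]                 # grid on [0, 1]
ys = [math.sin(2 * math.pi * x) for x in xs]     # smooth "scatterplot" to follow
peak = lowess_point(0.25, xs, ys)                # near the sine's maximum (+1)
trough = lowess_point(0.75, xs, ys)              # near the sine's minimum (-1)
print(peak, trough)
```

Evaluating this at every x on a grid traces the smoothed curve through the scatterplot; the fit slightly flattens extrema, which is the usual bias of local linear smoothing at peaks.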

  11. Fragmentation, dissipative expansion, and freeze-out in medium energy heavy-ion collisions

    International Nuclear Information System (INIS)

    Gross, D.H.E.; Li Baoan; DeAngelis, A.R.

    1992-01-01

    The collision dynamics of 96Mo + 96Mo at 55 A MeV is simulated by solving numerically the Boltzmann-Uehling-Uhlenbeck (BUU) transport equation for the one-body phase-space distribution function of nucleons, with and without Coulomb interaction. A scatterplot of the one-body density distribution shows an initial compression, a subsequent homogeneous expansion, a breaking into "fragments", a very slow creeping expansion up to freeze-out and, when the Coulomb interaction is included, a Coulomb explosion. In the calculation that included the Coulomb interaction, the overall shape of the ensemble of dense fragments is spherical. The fragments are created over the entire volume of the dense part of the source, not only at the surface. In the simulation without Coulomb interaction a doughnut-like shape may develop. (orig.)

  12. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition.
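
Two of the sensitivity procedures listed above, correlation analysis and rank transformations, are simple to sketch. In the invented toy model below, the rank (Spearman) correlation detects the monotone but nonlinear effect of the influential input more strongly than the raw Pearson correlation does:

```python
import numpy as np

def rank_transform(v):
    """Replace each value by its rank (1..n), the transformation
    applied before computing rank (Spearman) correlations."""
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(1, len(v) + 1)
    return r

rng = np.random.default_rng(1)
n = 1000
x1 = rng.uniform(0, 1, n)                       # influential input
x2 = rng.uniform(0, 1, n)                       # inert input
y = np.exp(3 * x1) + 0.1 * rng.normal(size=n)   # monotone, nonlinear response

pearson = np.corrcoef(x1, y)[0, 1]
spearman = np.corrcoef(rank_transform(x1), rank_transform(y))[0, 1]
```

Scatterplots of y against each sampled input would show the same contrast visually: a clear monotone band for x1 and structureless noise for x2.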

  13. Morphological variation between isolates of the nematode Haemonchus contortus from sheep and goat populations in Malaysia and Yemen.

    Science.gov (United States)

    Gharamah, A A; Rahman, W A; Siti Azizah, M N

    2014-03-01

    Haemonchus contortus is a highly pathogenic nematode parasite of sheep and goats. This work was conducted to investigate the population and host variations of the parasitic nematode H. contortus of sheep and goats from Malaysia and Yemen. Eight morphological characters were investigated, namely the total body length, cervical papillae, right spicule, left spicule, right barb, left barb, gubernaculum and cuticular ridge (synlophe) pattern. Statistical analysis showed the presence of morphological variation between populations of H. contortus from Malaysia and Yemen, with minor variation in the synlophe pattern of these isolates. Isolates from each country were grouped together in the scatterplots with no host isolation. Body, cervical papillae and spicule lengths were the most important characters that distinguished between populations of the two countries. This variation between Malaysia and Yemen may be attributed to geographical isolation and the possible presence of a different isolate of this worm in each country.

  14. CROSSPLOT-3/CON-3D, 3-D and Stereoscopic Computer-Aided Design Graphics

    International Nuclear Information System (INIS)

    Grotch, S.L.

    1986-01-01

    Description of program or function: CROSSPLOT3 is a general three-dimensional point plotting program which generates scatterplots of a data matrix from any user-specified viewpoint. Images can be rotated for a movie-like effect enhancing stereo perception. A number of features can be invoked by the user, including color, class distinction, flickering, sectioning, projections to grid surfaces, and drawing a plane. Plots may be viewed in real time as they are generated. CON3D generates three-dimensional surfaces plus contours on a lower plane from either data on a rectangular grid or an analytical function z=f(x,y). The user may choose any viewing perspective. Plots may be generated in color with many refinements under user control.

  15. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    Science.gov (United States)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India namely Kolkata, Mumbai, Chennai, and New Delhi in the multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration are suitable for principal component analysis. Subsequently, by introducing rotated component matrix for the principal components, the predictors suitable for generating artificial neural network (ANN) for daily total ozone prediction are identified. The multicollinearity is removed in this way. Models of ANN in the form of multilayer perceptron trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics like Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.

  16. Validation of the human activity profile questionnaire as a measure of physical activity levels in older community-dwelling women.

    Science.gov (United States)

    Bastone, Alessandra de Carvalho; Moreira, Bruno de Souza; Vieira, Renata Alvarenga; Kirkwood, Renata Noce; Dias, João Marcos Domingues; Dias, Rosângela Corrêa

    2014-07-01

    The purpose of this study was to assess the validity of the Human Activity Profile (HAP) by comparing scores with accelerometer data and by objectively testing its cutoff points. This study included 120 older women (age 60-90 years). Average daily time spent in sedentary, moderate, and hard activity; counts; number of steps; and energy expenditure were measured using an accelerometer. Spearman rank order correlations were used to evaluate the correlation between the HAP scores and accelerometer variables. Significant relationships were detected (rho = .47-.75, p < .001), indicating that the HAP estimates physical activity at a group level well; however, scatterplots showed individual errors. Receiver operating characteristic curves were constructed to determine HAP cutoff points on the basis of physical activity level recommendations, and the cutoff points found were similar to the original HAP cutoff points. The HAP is a useful indicator of physical activity levels in older women.

  17. Preterm infant thermal care: differing thermal environments produced by air versus skin servo-control incubators.

    Science.gov (United States)

    Thomas, K A; Burr, R

    1999-06-01

    Incubator thermal environments produced by skin versus air servo-control were compared. Infant abdominal skin and incubator air temperatures were recorded from 18 infants in skin servo-control and 14 infants in air servo-control (26- to 29-week gestational age, 14 +/- 2 days postnatal age) for 24 hours. Differences in incubator and infant temperature, neutral thermal environment (NTE) maintenance, and infant and incubator circadian rhythm were examined using analysis of variance and scatterplots. Skin servo-control resulted in more variable air temperature, yet more stable infant temperature, and more time within the NTE. Circadian rhythm of both infant and incubator temperature differed by control mode and the relationship between incubator and infant temperature rhythms was a function of control mode. The differences between incubator control modes extend beyond temperature stability and maintenance of NTE. Circadian rhythm of incubator and infant temperatures is influenced by incubator control.

  18. Weapons of Maths Instruction: A Thousand Years of Technological Stasis in Arrowheads from the South Scandinavian Middle Mesolithic

    Directory of Open Access Journals (Sweden)

    Kevan Edinborough

    2005-11-01

    This paper presents some results from my doctoral research into the evolution of bow-arrow technology using archaeological data from the south Scandinavian Mesolithic (Edinborough 2004). A quantitative approach is used to describe the morphological variation found in samples taken from over 3600 armatures from nine Danish and Swedish lithic assemblages. A linked series of statistical techniques determines the two most significant metric variables across the nine arrowhead assemblages in terms of the cultural transmission of arrowhead technology. The resultant scatterplot uses confidence ellipses to reveal highly distinctive patterns of morphological variation that are related to population-specific technological traditions. A population-level hypothesis of a socially constrained transmission mechanism is presented that may explain the unusually long period of technological stasis demonstrated by six of the nine arrowhead phase-assemblages.

  19. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.

  20. Sensitivity analysis using contribution to sample variance plot: Application to a water hammer model

    International Nuclear Information System (INIS)

    Tarantola, S.; Kopustinskas, V.; Bolado-Lavin, R.; Kaliatka, A.; Ušpuras, E.; Vaišnoras, M.

    2012-01-01

    This paper presents the “contribution to sample variance plot”, a natural extension of the “contribution to the sample mean plot”, which is a graphical tool for global sensitivity analysis originally proposed by Sinclair. These graphical tools have great potential to display sensitivity information graphically, given a generic input sample and its related model realizations. The contribution to the sample variance can be obtained at no extra computational cost, i.e. from the same points used for deriving the contribution to the sample mean and/or scatterplots. The proposed approach effectively instructs the analyst on how to achieve a targeted reduction of the variance by operating on the extremes of the input parameters' ranges. The approach is tested against a known benchmark for sensitivity studies, the Ishigami test function, and a numerical model simulating the behaviour of a water hammer effect in a piping system.
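
Both plots described above reduce to cumulative sums over the sample once the output values are ordered by the input of interest. A minimal sketch (toy model invented for illustration):

```python
import numpy as np

def contribution_curves(x, y):
    """Contribution to the sample mean (CSM) and to the sample
    variance (CSV) as functions of the quantile of input x."""
    y = np.asarray(y)[np.argsort(x)]        # order outputs by x
    q = np.arange(1, len(y) + 1) / len(y)   # quantile of x
    csm = np.cumsum(y) / y.sum()
    dev2 = (y - y.mean()) ** 2
    csv = np.cumsum(dev2) / dev2.sum()
    return q, csm, csv

x = np.linspace(0.001, 1.0, 1000)
q, csm, csv = contribution_curves(x, x ** 2)  # influential input
```

A CSV curve hugging the diagonal flags an uninfluential input; the strong departure from the diagonal here shows that the variance contributions concentrate at the extremes of x, which is exactly the information the analyst uses to target a variance reduction.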

  1. The design, build and test of a digital analyzer for mixed radiation fields

    International Nuclear Information System (INIS)

    Joyce, M. J.; Aspinall, M. D.; Georgopoulos, K.; Cave, F. D.; Jarrah, Z.

    2009-01-01

    The design, build and test of a digital analyzer for mixed radiation fields is described. This instrument has been developed to provide portable, real-time discrimination of hard mixed fields comprising both neutrons and γ rays with energies typically above 0.5 MeV. The instrument in its standard form comprises a sensor head and a system unit, and affords the flexibility to provide processed data in the form of the traditional scatterplot representation separating neutron and γ-ray components, or the full, sampled pulse data itself. The instrument has been tested with an americium-beryllium source in three different shielding arrangements to replicate the cases in which there are only neutrons, only γ rays, and both neutrons and γ rays present. The instrument is observed to return consistent results. (authors)

  2. Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Disturbed conditions

    International Nuclear Information System (INIS)

    HELTON, JON CRAIG; BEAN, J.E.; ECONOMY, K.; GARNER, J.W.; MACKINNON, ROBERT J.; MILLER, JOEL D.; SCHREIBER, J.D.; VAUGHN, PALMER

    2000-01-01

    Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) are presented for two-phase flow in the vicinity of the repository under disturbed conditions resulting from drilling intrusions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure and brine flow from the repository to the Culebra Dolomite are potentially the most important in PA for the WIPP. Subsequent to a drilling intrusion, repository pressure was dominated by borehole permeability and was generally below the level (i.e., 8 MPa) that could potentially produce spallings and direct brine releases. Brine flow from the repository to the Culebra Dolomite tended to be small or nonexistent, with its occurrence and size also dominated by borehole permeability.

  3. Assessing the role of pavement macrotexture in preventing crashes on highways.

    Science.gov (United States)

    Pulugurtha, Srinivas S; Kusam, Prasanna R; Patel, Kuvleshay J

    2010-02-01

    The objective of this article is to assess the role of pavement macrotexture in preventing crashes on highways in the State of North Carolina. Laser profilometer data obtained from the North Carolina Department of Transportation (NCDOT) for highways comprising four corridors are processed to calculate pavement macrotexture at 100-m (approximately 330-ft) sections according to the American Society for Testing and Materials (ASTM) standards. Crash data collected over the same lengths of the corridors were integrated with the calculated pavement macrotexture for each section. Scatterplots were generated to assess the role of pavement macrotexture on crashes and on the logarithm of crashes. Regression analyses were conducted by considering predictor variables such as million vehicle miles of travel (as a function of traffic volume and length), the number of interchanges, the number of at-grade intersections, the number of grade-separated interchanges, and the number of bridges, culverts, and overhead signs, along with pavement macrotexture, to study the statistical significance of the relationship between pavement macrotexture and crashes (both linear and log-linear) when compared to other predictor variables. The scatterplots and regression analyses indicate a more statistically significant relationship between pavement macrotexture and the logarithm of crashes than between pavement macrotexture and crashes. The coefficient for pavement macrotexture is, in general, negative, indicating that the number of crashes or the logarithm of crashes decreases as macrotexture increases. The relation between pavement macrotexture and the logarithm of crashes is generally stronger than between most other predictor variables and crashes or the logarithm of crashes. Based on the results obtained, it can be concluded that maintaining pavement macrotexture at or above a threshold of 1.524 mm (0.06 in.) would likely reduce crashes and provide safe transportation to road users on highways.

  4. Early Mucosal Reactions During and After Head-and-Neck Radiotherapy: Dependence of Treatment Tolerance on Radiation Dose and Schedule Duration

    International Nuclear Information System (INIS)

    Fenwick, John D.; Lawrence, Geoff P.; Malik, Zafar; Nahum, Alan E.; Mayles, W. Philip M.

    2008-01-01

    Purpose: To more precisely localize the dose-time boundary between head-and-neck radiotherapy schedules inducing tolerable and intolerable early mucosal reactions. Methods and Materials: Total cell-kill biologically effective doses (BED_CK) have been calculated for 84 schedules, including incomplete repair effects but making no other corrections for the effect of schedule duration T. [BED_CK, T] scatterplots are graphed, overlying BED_CK,boundary(T) curves on the plots and using discriminant analysis to optimize BED_CK,boundary(T) to best represent the boundary between the tolerable and intolerable schedules. Results: More overlap than expected is seen between the tolerable and intolerable treatments in the 84-schedule [BED_CK, T] scatterplot, but this was largely eliminated by removing gap and tolerated accelerating schedules from the plot. For the remaining 57 predominantly regular schedules, the BED_CK,boundary(T) boundary increases with increasing T (p = 0.0001), curving upwards significantly nonlinearly (p = 0.00007) and continuing to curve beyond 15 days (p = 0.035). The regular-schedule BED_CK,boundary(T) boundary does not describe tolerability well for accelerating schedules (p = 0.002), with several tolerated accelerating schedules lying above the boundary where regular schedules would be intolerable. Gap-schedule tolerability also is not adequately described by the regular-schedule boundary (p = 0.04), although no systematic offset exists between the regular boundary and the overall gap-schedule tolerability pattern. Conclusions: All schedules analyzed (regular, gap, and accelerating) with BED_CK values below BED_CK,boundary(T) = 69.5 (T/32.2)/sin(T/32.2) - 3.5 Gy10 (T/32.2 in radians, for T ≤ 50 days) are tolerable, and many lying above the boundary are intolerable. The accelerating schedules analyzed were tolerated better overall than are the regular schedules with similar [BED_CK, T] values.
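
Reading the boundary in the conclusions as BED_CK,boundary(T) = 69.5 (T/32.2)/sin(T/32.2) - 3.5 Gy10, with T/32.2 in radians and T ≤ 50 days, it is straightforward to evaluate; a small sketch under that assumption (the function name is invented):

```python
import math

def bed_ck_boundary(t_days):
    """Tolerability boundary in Gy10 (valid for T <= 50 days);
    the sine argument T/32.2 is in radians."""
    u = t_days / 32.2
    return 69.5 * u / math.sin(u) - 3.5
```

For very short schedules the boundary approaches 69.5 - 3.5 = 66 Gy10 (since u/sin(u) tends to 1), and it rises steadily with schedule duration, consistent with the reported upward-curving boundary.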

  5. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    Science.gov (United States)

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases.
Thus if computable, scatterplots of the conditionally independent empirical Bayes

  6. Foreshock Langmuir Waves for Unusually Constant Solar Wind Conditions: Data and Implications for Foreshock Structure

    Science.gov (United States)

    Cairns, Iver H.; Robinson, P. A.; Anderson, Roger R.; Strangeway, R. J.

    1997-01-01

    Plasma wave data are compared with ISEE 1's position in the electron foreshock for an interval with unusually constant (but otherwise typical) solar wind magnetic field and plasma characteristics. For this period, temporal variations in the wave characteristics can be confidently separated from sweeping of the spatially varying foreshock back and forth across the spacecraft. The spacecraft's location, particularly the coordinate D_f downstream from the foreshock boundary (often termed DIFF), is calculated by using three shock models and the observed solar wind magnetometer and plasma data. Scatterplots of the wave field versus D_f are used to constrain viable shock models, to investigate the observed scatter in the wave fields at constant D_f, and to test the theoretical predictions of linear instability theory. The scatterplots confirm the abrupt onset of the foreshock waves near the upstream boundary, the narrow width in D_f of the region with high fields, and the relatively slow falloff of the fields at large D_f, as seen in earlier studies, but with much smaller statistical scatter. The plots also show an offset of the high-field region from the foreshock boundary. It is shown that an adaptive, time-varying shock model with no free parameters, determined by the observed solar wind data and published shock crossings, is viable but that two alternative models are not. Foreshock wave studies can therefore remotely constrain the bow shock's location. The observed scatter in wave field at constant D_f is shown to be real and to correspond to real temporal variations, not to unresolved changes in D_f.
    By comparing the wave data with a linear instability theory based on a published model for the electron beam, it is found that the theory can account qualitatively and semiquantitatively for the abrupt onset of the waves near D_f = 0, for the narrow width and offset of the high-field region, and for the decrease in wave intensity

  7. An analysis of wildfire frequency and burned area relationships with human pressure and climate gradients in the context of fire regime

    Science.gov (United States)

    Jiménez-Ruano, Adrián; Rodrigues Mimbrero, Marcos; de la Riva Fernández, Juan

    2017-04-01

    Understanding fire regime is a crucial step towards achieving a better knowledge of the wildfire phenomenon. This study proposes a method for the analysis of fire regime based on multidimensional scatterplots (MDS). MDS are a visual approach that allows direct comparison among several variables and fire regime features, so that we are able to unravel spatial patterns and relationships within the region of analysis. Our analysis is conducted in Spain, one of the most fire-affected areas within the Mediterranean region. Specifically, the Spanish territory has been split into three regions - Northwest, Hinterland and Mediterranean - considered as representative fire regime zones according to MAGRAMA (Spanish Ministry of Agriculture, Environment and Food). The main goal is to identify key relationships of fire frequency and burnt area, two of the most common fire regime features, with socioeconomic activity and climate. In this way we will be able to better characterize fire activity within each fire region. Fire data for the period 1974-2010 were retrieved from the General Statistics Forest Fires database (EGIF). Specifically, fire frequency and burnt area size were examined for each region and fire season (summer and winter). Socioeconomic activity was defined in terms of human pressure on wildlands, i.e. the presence and intensity of anthropogenic activity near wildland or forest areas. Human pressure was built from GIS spatial information about land use (wildland-agriculture and wildland-urban interface) and demographic potential. Climate variables (average maximum temperature and annual precipitation) were extracted from the MOTEDAS (Monthly Temperature Dataset of Spain) and MOPREDAS (Monthly Precipitation Dataset of Spain) datasets and later reclassified into ten categories. All these data were resampled to fit the 10x10 km grid used as spatial reference for fire data. Climate and socioeconomic variables were then explored by means of MDS to find the extent to

  8. Morphological analysis of Trichomycterus areolatus Valenciennes, 1846 from southern Chilean rivers using a truss-based system (Siluriformes, Trichomycteridae

    Directory of Open Access Journals (Sweden)

    Nelson Colihueque

    2017-09-01

    Trichomycterus areolatus Valenciennes, 1846 is a small endemic catfish inhabiting the Andean river basins of Chile. In this study, the morphological variability of three T. areolatus populations, collected in two river basins from southern Chile, was assessed with multivariate analyses, including principal component analysis (PCA) and discriminant function analysis (DFA). It is hypothesized that populations must segregate morphologically from each other based on the river basin from which they were sampled, since each basin presents relatively particular hydrological characteristics. Significant morphological differences among the three populations were found with PCA (ANOSIM test, r = 0.552, p < 0.0001) and DFA (Wilks's λ = 0.036, p < 0.01). PCA accounted for a total variation of 56.16% with the first two principal components. The first principal component (PC1) and the second (PC2) explained 34.72% and 21.44% of the total variation, respectively. The scatterplot of the first two discriminant functions (DF1 on DF2) also validated the existence of three different populations. In group classification using DFA, 93.3% of the specimens were correctly classified into their original populations. Of the total of 22 transformed truss measurements, 17 exhibited highly significant (p < 0.01) differences among populations. The data support the existence of T. areolatus morphological variation across different rivers in southern Chile, likely reflecting the geographic isolation underlying the population structure of the species.
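
The PCA step used in such ordinations amounts to a singular value decomposition of the centred measurement matrix; the first components supply the scatterplot coordinates and the explained-variance fractions quoted above. A minimal sketch on synthetic data (not the truss measurements; the rank-2 toy structure is invented):

```python
import numpy as np

def pca(X, k=2):
    """PCA via SVD of the centred data matrix: returns the first k
    component scores and the variance fraction of every component."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_explained = s ** 2 / np.sum(s ** 2)
    scores = Xc @ Vt[:k].T                  # coordinates for a scatterplot
    return scores, var_explained

rng = np.random.default_rng(3)
latent = rng.normal(size=(100, 2))          # two underlying shape factors
X = latent @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(100, 8))
scores, ve = pca(X, k=2)
```

Plotting the two score columns against each other gives exactly the kind of PC1-on-PC2 scatterplot used to look for group separation among specimens.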

  9. Preterm Versus Term Children: Analysis of Sedation/Anesthesia Adverse Events and Longitudinal Risk.

    Science.gov (United States)

    Havidich, Jeana E; Beach, Michael; Dierdorf, Stephen F; Onega, Tracy; Suresh, Gautham; Cravero, Joseph P

    2016-03-01

    Preterm and former preterm children frequently require sedation/anesthesia for diagnostic and therapeutic procedures. Our objective was to determine the age at which children who are born preterm are no longer at increased risk for sedation/anesthesia adverse events. Our secondary objective was to describe the nature and incidence of adverse events. This is a prospective observational study of children receiving sedation/anesthesia for diagnostic and/or therapeutic procedures outside of the operating room by the Pediatric Sedation Research Consortium. A total of 57,227 patients 0 to 22 years of age were eligible for this study. All adverse events and descriptive terms were predefined. Logistic regression and locally weighted scatterplot regression were used for analysis. Preterm and former preterm children had higher adverse event rates (14.7% vs 8.5%) compared with children born at term. Our analysis revealed a biphasic pattern for the development of adverse sedation/anesthesia events. Airway and respiratory adverse events were most commonly reported. MRI scans were the most commonly performed procedures in both categories of patients. Patients born preterm are nearly twice as likely to develop sedation/anesthesia adverse events, and this risk continues up to 23 years of age. We recommend obtaining a birth history during the formulation of an anesthetic/sedation plan, with heightened awareness that preterm and former preterm children may be at increased risk. Further prospective studies focusing on the etiology and prevention of adverse events in former preterm patients are warranted. Copyright © 2016 by the American Academy of Pediatrics.

  10. Characteristic of Noise-induced Hearing Loss among Workers in Construction Industries

    Science.gov (United States)

    Naadia Mazlan, Ain; Yahya, Khairulzan; Haron, Zaiton; Amsharija Mohamed, Nik; Rasib, Edrin Nazri Abdul; Jamaludin, Nizam; Darus, Nadirah

    2018-03-01

    Noise-induced hearing loss (NIHL) is among the most common occupational diseases in industry. This paper investigates NIHL in construction-related industries in Malaysia, with particular emphasis on its relation to risk factors. The objectives of this research were to (1) quantify the prevalence of NIHL in construction-related industries, and (2) assess the relationship between hearing loss and risk factors and its characteristics. The study was conducted using 110 NIHL compensation records collected from the Social Security Organisation (SOCSO), Malaysia. Risk factors, namely area noise, age, temperature, smoking habit, hobby, diabetes and cardiovascular disease, were identified and analysed. Results showed that there was no direct relationship between area noise and hearing impairment, while there was only a weak relationship between age and hearing impairment. The ranges for area noise and age were 70 to 140 dB(A) and 20 to 70 years, respectively. The other risk factors were classified as categorical data and analysed using the frequency method. Grade of impairment does not depend solely on area noise but also on its combination with age and other risk factors. Characteristics of NIHL prevalent in construction-related industries were presented using scatterplots and can serve as a reference for future hazard control on site.

  11. Mann-Whitney Type Tests for Microarray Experiments: The R Package gMWT

    Directory of Open Access Journals (Sweden)

    Daniel Fischer

    2015-06-01

    We present the R package gMWT, which is designed for the comparison of several treatments (or groups) for a large number of variables. The comparisons are made using certain probabilistic indices (PIs). The PIs computed here tell how often pairs or triples of observations coming from different groups appear in a specific order of magnitude. Classical two-sample and several-sample rank test statistics, such as the Mann-Whitney-Wilcoxon, Kruskal-Wallis, or Jonckheere-Terpstra test statistics, are simple functions of these PIs. New test statistics for directional alternatives are also provided. The package gMWT can be used to calculate the variable-wise PI estimates, to illustrate their multivariate distribution and mutual dependence with joint scatterplot matrices, and to construct several classical and new rank tests based on the PIs. The aim of the paper is first to briefly explain the theory that is necessary to understand the behavior of the estimated PIs and the rank tests based on them. Second, the use of the package is described and illustrated with simulated and real data examples. It is stressed that the package provides a new flexible toolbox to analyze large gene or microRNA expression data sets, collected on microarrays or by other high-throughput technologies. The testing procedures can be used in an eQTL analysis, for example, as implemented in the package GeneticTools.
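
The pairwise probabilistic index behind these tests is just the proportion of ordered pairs between two groups (ties counted half), and multiplying it by the two sample sizes gives the Mann-Whitney U statistic. A minimal sketch with toy numbers (not gMWT's implementation):

```python
import numpy as np

def probabilistic_index(x, y):
    """Estimate PI = P(X < Y) + 0.5 * P(X = Y) over all pairs;
    len(x) * len(y) * PI equals the Mann-Whitney U statistic."""
    x = np.asarray(x, float)[:, None]       # shape (n, 1)
    y = np.asarray(y, float)[None, :]       # shape (1, m)
    return np.mean((x < y) + 0.5 * (x == y))

a = [1.2, 2.0, 3.1, 0.7]
b = [2.5, 3.0, 4.2]
pi = probabilistic_index(a, b)              # 10 of 12 pairs ordered: 10/12
```

Values of PI near 0.5 indicate no stochastic ordering between the groups, while values near 0 or 1 indicate that one group tends to dominate the other.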

  12. MOD-AGE - an algorithm for age-depth model construction; U-series dated speleothems case study

    Science.gov (United States)

    Hercman, H.; Pawlak, J.

    2012-04-01

    We present MOD-AGE - a new system for chronology construction. MOD-AGE can be used for profiles that have been dated by different methods. As input data, the system uses the following basic measurements: activities, atomic ratios, or ages, as well as depth measurements. Based on probability distributions describing the measurement results, MOD-AGE estimates the age-depth relation and its confidence bands. To avoid the use of difficult-to-meet assumptions, MOD-AGE uses nonparametric methods. We applied a Monte Carlo simulation to model age and depth values based on the real distribution of counted data (activities, atomic ratios, depths, etc.). Several fitting methods could be applied for estimating the relationships; based on several tests, we decided to use the LOESS method (locally weighted scatterplot smoothing). The stratigraphic correction procedure applied in the MOD-AGE program uses a probability calculus, which assumes that the ages of all the samples are correctly estimated. Information about the probability distribution of the samples' ages is used to estimate the most probable sequence that is concordant according to the superposition rule. MOD-AGE is presented as a tool for the chronology construction of speleothems that have been analyzed by the U-series method, and it is compared to the StalAge algorithm presented by D. Scholz and D. L. Hoffmann (2011). Scholz, D., Hoffmann, D. L., 2011. StalAge - An algorithm designed for construction of speleothem age models. Quaternary Geochronology 6, 369-382.
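The core LOESS idea (a tricube-weighted local linear fit around each query point) can be sketched as follows. The depth/age values are hypothetical, and this is a single-point illustration of the technique, not MOD-AGE's implementation:

```python
def loess_point(xs, ys, x0, frac=0.6):
    """One locally weighted (tricube) linear fit evaluated at x0 --
    the core step of LOESS, here for a single query depth."""
    n = len(xs)
    k = max(2, round(frac * n))                       # local window size
    near = sorted(range(n), key=lambda i: abs(xs[i] - x0))[:k]
    dmax = max(abs(xs[i] - x0) for i in near) or 1.0
    w = [(1 - (abs(xs[i] - x0) / dmax) ** 3) ** 3     # tricube weights
         if abs(xs[i] - x0) < dmax else 0.0 for i in near]
    sw = sum(w)
    mx = sum(wi * xs[i] for wi, i in zip(w, near)) / sw
    my = sum(wi * ys[i] for wi, i in zip(w, near)) / sw
    cov = sum(wi * (xs[i] - mx) * (ys[i] - my) for wi, i in zip(w, near))
    var = sum(wi * (xs[i] - mx) ** 2 for wi, i in zip(w, near))
    return my + (cov / var if var else 0.0) * (x0 - mx)

# hypothetical depth (mm) vs. U-series age (ka) measurements
depths = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
ages   = [1.0, 2.1, 2.9, 4.2, 5.0, 6.1, 6.9, 8.2, 9.0, 10.1]
age_at_45 = loess_point(depths, ages, 45)
```

Repeating the fit over a grid of depths yields the smoothed age-depth curve; confidence bands would come from the Monte Carlo resampling described in the abstract.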

  13. StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.

    Science.gov (United States)

    Li, Chenhui; Baciu, George; Han, Yu

    2018-03-01

    Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
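The density-aggregation step can be illustrated with a minimal fixed-bandwidth Gaussian kernel density estimate in one dimension. The paper's "super kernel density estimation" uses an adaptive kernel in 2-D; the points below are hypothetical:

```python
import math

def kde_grid(points, grid, bandwidth=0.5):
    """Fixed-bandwidth Gaussian kernel density estimate on a 1-D grid."""
    norm = 1.0 / (len(points) * bandwidth * math.sqrt(2 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((g - p) / bandwidth) ** 2)
                       for p in points)
            for g in grid]

stream = [1.0, 1.1, 0.9, 1.05, 4.0]     # four overlapping points + one outlier
grid = [i * 0.5 for i in range(11)]     # 0.0, 0.5, ..., 5.0
density = kde_grid(stream, grid)
peak = grid[max(range(len(grid)), key=density.__getitem__)]
```

Rendering `density` instead of the raw points resolves the overplotting problem: the four overlapping points merge into a single strong mode rather than an unreadable clump.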

  14. Scanning fluorescent microscopy is an alternative for quantitative fluorescent cell analysis.

    Science.gov (United States)

    Varga, Viktor Sebestyén; Bocsi, József; Sipos, Ferenc; Csendes, Gábor; Tulassay, Zsolt; Molnár, Béla

    2004-07-01

    Fluorescent measurements on cells are performed today with flow cytometry (FCM) and laser scanning cytometry. The scientific community dealing with quantitative cell analysis would benefit from the development of a new digital multichannel and virtual microscopy based scanning fluorescent microscopy technology and from its evaluation on routine standardized fluorescent beads and clinical specimens. We applied a commercial motorized fluorescent microscope system. The scanning was done at 20× (0.5 NA) magnification, on three channels (Rhodamine, FITC, Hoechst). The SFM (scanning fluorescent microscopy) software included the following features: scanning area, exposure time, and channel definition, autofocused scanning, densitometric and morphometric cellular feature determination, gating on scatterplots and frequency histograms, and preparation of galleries of the gated cells. For calibration and standardization, Immuno-Brite beads were used. With the application of shading compensation, the CV of fluorescence of the beads decreased from 24.3% to 3.9%. Standard JPEG image compression up to 1:150 resulted in no significant change. The change of focus influenced the CV significantly only beyond a ±5 µm error. SFM is a valuable method for the evaluation of fluorescently labeled cells. Copyright 2004 Wiley-Liss, Inc.
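Why shading compensation shrinks the coefficient of variation can be sketched with hypothetical bead intensities and an assumed-known multiplicative shading field (in practice the field is estimated from a blank reference image):

```python
import statistics

def cv_percent(values):
    """Coefficient of variation, in percent."""
    return 100 * statistics.pstdev(values) / statistics.fmean(values)

# hypothetical bead intensities distorted by a multiplicative shading field
true_signal = [100.0] * 6                       # identical beads
shading = [0.7, 0.85, 1.0, 1.1, 1.25, 1.4]      # position-dependent gain
measured = [s * g for s, g in zip(true_signal, shading)]
corrected = [m / g for m, g in zip(measured, shading)]  # flat-field division
cv_before = cv_percent(measured)
cv_after = cv_percent(corrected)
```

With the shading field divided out, identical beads report identical intensities, so the apparent CV collapses toward the instrument's true noise floor.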

  15. What can we learn from the Dutch cannabis coffeeshop system?

    Science.gov (United States)

    MacCoun, Robert J

    2011-11-01

    To examine the empirical consequences of officially tolerated retail sales of cannabis in the Netherlands, and possible implications for the legalization debate. Available Dutch data on the prevalence and patterns of use, treatment, sanctioning, prices and purity for cannabis dating back to the 1970s are compared to similar indicators in Europe and the United States. The available evidence suggests that the prevalence of cannabis use among Dutch citizens rose and fell as the number of coffeeshops increased and later declined, but only modestly. The coffeeshops do not appear to encourage escalation into heavier use or lengthier using careers, although treatment rates for cannabis are higher than elsewhere in Europe. Scatterplot analyses suggest that Dutch patterns of use are very typical for Europe, and that the 'separation of markets' may indeed have somewhat weakened the link between cannabis use and the use of cocaine or amphetamines. Cannabis consumption in the Netherlands is lower than would be expected in an unrestricted market, perhaps because cannabis prices have remained high due to production-level prohibitions. The Dutch system serves as a nuanced alternative to both full prohibition and full legalization. © 2011 The Author, Addiction © 2011 Society for the Study of Addiction.

  16. Self- and surrogate-reported communication functioning in aphasia.

    Science.gov (United States)

    Doyle, Patrick J; Hula, William D; Austermann Hula, Shannon N; Stone, Clement A; Wambaugh, Julie L; Ross, Katherine B; Schumacher, James G

    2013-06-01

    To evaluate the dimensionality and measurement invariance of the Aphasia Communication Outcome Measure (ACOM), a self- and surrogate-reported measure of communicative functioning in aphasia. Responses to a large pool of items describing communication activities were collected from 133 community-dwelling persons with aphasia ≥1 month post-onset and their associated surrogate respondents. These responses were evaluated using confirmatory and exploratory factor analysis. Chi-square difference tests of nested factor models were used to evaluate patient-surrogate measurement invariance and the equality of factor score means and variances. Association and agreement between self- and surrogate reports were examined using correlation and scatterplots of pairwise patient-surrogate differences. Three single-factor scales (Talking, Comprehension, and Writing) approximating patient-surrogate measurement invariance were identified. The variance of patient-reported scores on the Talking and Writing scales was higher than surrogate-reported variances on these scales. Correlations between self- and surrogate reports were moderate-to-strong, but there were significant disagreements in a substantial number of individual cases. Despite minimal bias and relatively strong association, surrogate reports of communicative functioning in aphasia are not reliable substitutes for self-reports by persons with aphasia. Furthermore, although measurement invariance is necessary for direct comparison of self- and surrogate reports, the costs of obtaining invariance in terms of scale reliability and content validity may be substantial. Development of non-invariant self- and surrogate report scales may be preferable for some applications.

  17. A preliminary analysis of quantifying computer security vulnerability data in "the wild"

    Science.gov (United States)

    Farris, Katheryn A.; McNamara, Sean R.; Goldstein, Adam; Cybenko, George

    2016-05-01

    A system of computers, networks and software has some level of vulnerability exposure that puts it at risk to criminal hackers. Presently, most vulnerability research uses data from software vendors, and the National Vulnerability Database (NVD). We propose an alternative path forward through grounding our analysis in data from the operational information security community, i.e. vulnerability data from "the wild". In this paper, we propose a vulnerability data parsing algorithm and an in-depth univariate and multivariate analysis of the vulnerability arrival and deletion process (also referred to as the vulnerability birth-death process). We find that vulnerability arrivals are best characterized by the log-normal distribution and vulnerability deletions are best characterized by the exponential distribution. These distributions can serve as prior probabilities for future Bayesian analysis. We also find that over 22% of the deleted vulnerability data have a rate of zero, and that the arrival vulnerability data is always greater than zero. Finally, we quantify and visualize the dependencies between vulnerability arrivals and deletions through a bivariate scatterplot and statistical observations.
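The maximum-likelihood fits for the two named distributions are simple enough to sketch with the standard library. The samples below are hypothetical, not the paper's operational data:

```python
import math
import statistics

def fit_lognormal(data):
    """MLE for a log-normal: mean and (population) std of the log-data."""
    logs = [math.log(x) for x in data]
    return statistics.fmean(logs), statistics.pstdev(logs)

def fit_exponential(data):
    """MLE rate for an exponential: the reciprocal of the sample mean."""
    return 1.0 / statistics.fmean(data)

arrivals  = [3.0, 5.5, 8.1, 12.0, 20.3, 33.1]  # hypothetical arrival counts
deletions = [0.5, 1.2, 2.0, 3.3, 0.8, 1.7]     # hypothetical deletion rates
mu, sigma = fit_lognormal(arrivals)
lam = fit_exponential(deletions)
```

The fitted (mu, sigma) and lambda parameters are exactly the quantities one would carry forward as prior distributions in the Bayesian analysis the abstract anticipates.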

  18. Experimental and theoretical high energy physics research. Annual grant progress report (FDP), January 15, 1993--January 14, 1993

    Energy Technology Data Exchange (ETDEWEB)

    Cline, D.B.

    1993-10-01

    Progress on seven tasks is reported. (I) UCLA hadronization model, antiproton decay, PEP4/9 e+e- analysis: In addition to these topics, work on CP and CPT phenomenology at a φ factory and letters of support on the hadronization project are included. (II) ICARUS detector and rare B decays with hadron beams and colliders: Developments are summarized and some typical events are shown; in addition, the RD5 collaboration at CERN and the asymmetric φ factory project are sketched. (III) Theoretical physics: Feynman diagram calculations in gauge theory; supersymmetric standard model; effects of quantum gravity in breaking of global symmetries; models of quark and lepton substructure; renormalized field theory; large-scale structure in the universe and particle-astrophysics/early universe cosmology. (IV) H dibaryon search at BNL, kaon experiments (E799/KTeV) at Fermilab: Project design and some scatterplots are given. (V) UCLA participation in the experiment CDF at Fermilab. (VI) Detectors for hadron physics at ultrahigh energy colliders: Scintillating fiber and visible light photon counter research. (VII) Administrative support and conference organization.

  19. New molecular evidence for fragmentation between two distant populations of the threatened stingless bee Melipona subnitida Ducke (Hymenoptera, Apidae, Meliponini)

    Directory of Open Access Journals (Sweden)

    Geice R. Silva

    2014-06-01

    Full Text Available For a snapshot assessment of the genetic diversity present within Melipona subnitida, an endemic stingless bee distributed in the semi-arid region of northeastern Brazil, populations separated by over 1,000 km distance were analyzed by ISSR genotyping. This is a prerequisite for the establishment of efficient management and conservation practices. From 21 ISSR primers tested, only nine revealed consistent and polymorphic bands (loci). PCR reactions resulted in 165 loci, of which 92 were polymorphic (57.5%). Both ΦST (ARLEQUIN) and θB (HICKORY) presented high values of similar magnitude (0.34, p<0.0001 and 0.33, p<0.0001, respectively), showing that these two groups were highly structured. The dendrogram obtained by the cluster analysis and the scatter-plot of the PCoA corroborate the results of the AMOVA and θB tests. Clear evidence of subdivision among sampling sites was also observed by the Bayesian grouping model analysis (STRUCTURE) of the ISSR data. It is clear from this study that conservation strategies should take into account the heterogeneity of these two separate populations, and address actions towards their sustainability by integrating our findings with ecological tools.

  20. Laws of attraction: from perceptual forces to conceptual similarity.

    Science.gov (United States)

    Ziemkiewicz, Caroline; Kosara, Robert

    2010-01-01

    Many of the pressing questions in information visualization deal with how exactly a user reads a collection of visual marks as information about relationships between entities. Previous research has suggested that people see parts of a visualization as objects, and may metaphorically interpret apparent physical relationships between these objects as suggestive of data relationships. We explored this hypothesis in detail in a series of user experiments. Inspired by the concept of implied dynamics in psychology, we first studied whether perceived gravity acting on a mark in a scatterplot can lead to errors in a participant's recall of the mark's position. The results of this study suggested that such position errors exist, but may be more strongly influenced by attraction between marks. We hypothesized that such apparent attraction may be influenced by elements used to suggest relationship between objects, such as connecting lines, grouping elements, and visual similarity. We further studied what visual elements are most likely to cause this attraction effect, and whether the elements that best predicted attraction errors were also those which suggested conceptual relationships most strongly. Our findings show a correlation between attraction errors and intuitions about relatedness, pointing towards a possible mechanism by which the perception of visual marks becomes an interpretation of data relationships.

  1. Assessment of bone age in prepubertal healthy Korean children: Comparison among the Korean standard bone age chart, Greulich-Pyle method, and Tanner-Whitehouse method

    International Nuclear Information System (INIS)

    Kim, Jeong Rye; Lee, Young Seok; Yu, Jee Suk

    2015-01-01

    To compare the reliability of the Greulich-Pyle (GP) method, Tanner-Whitehouse 3 (TW3) method and Korean standard bone age chart (KS) in the evaluation of bone age of prepubertal healthy Korean children. Left hand-wrist radiographs of 212 prepubertal healthy Korean children aged 7 to 12 years, obtained for the evaluation of traumatic injury in the emergency department, were analyzed by two observers. Bone age was estimated using the GP method, TW3 method and KS, and was calculated in months. The correlation between bone age measured by each method and the chronological age of each child was analyzed using the Pearson correlation coefficient and scatterplots. The three methods were compared using one-way analysis of variance. Significant correlations were found between chronological age and bone age estimated by all three methods in the whole group and in each gender (R2 ranged from 0.87 to 0.9, p < 0.01). Although bone age estimated by KS was slightly closer to chronological age than those estimated by the GP and TW3 methods, the difference between the three methods was not statistically significant (p > 0.01). The KS, GP, and TW3 methods show good reliability in the evaluation of bone age of prepubertal healthy Korean children without significant difference between them. All are useful for the evaluation of bone age in prepubertal healthy Korean children.
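The correlation step can be sketched with a hand-rolled Pearson coefficient. The ages below (in months) are hypothetical, not the study's data:

```python
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# hypothetical chronological ages (months) vs. bone ages from one method
chrono = [84, 96, 108, 120, 132, 144]
bone   = [80, 98, 105, 124, 128, 150]
r = pearson_r(chrono, bone)
r_squared = r * r        # comparable in spirit to the reported R2 values
```

Plotting `bone` against `chrono` as a scatterplot with the identity line overlaid is the visual counterpart of this coefficient: a tight cloud along the diagonal corresponds to R2 near 1.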

  2. Longitudinal data on cortical thickness before and after working memory training

    Directory of Open Access Journals (Sweden)

    Claudia Metzler-Baddeley

    2016-06-01

    Full Text Available The data and supplementary information provided in this article relate to our research article “Task complexity and location specific changes of cortical thickness in executive and salience networks after working memory training” (Metzler-Baddeley et al., 2016) [1]. We provide cortical thickness and subcortical volume data derived from parieto-frontal cortical regions and the basal ganglia with the FreeSurfer longitudinal analyses stream (http://surfer.nmr.mgh.harvard.edu [2]) before and after Cogmed working memory training (Cogmed and Cogmed Working Memory Training, 2012) [3]. This article also provides supplementary information to the research article, i.e., within-group comparisons between baseline and outcome cortical thickness and subcortical volume measures, between-group tests of performance changes in cognitive benchmark tests (www.cambridgebrainsciences.com [4]), correlation analyses between performance changes in benchmark tests and training-related structural changes, correlation analyses between the time spent training and structural changes, a scatterplot of the relationship between cortical thickness measures derived from the occipital lobe as control region and the chronological order of the MRI sessions to assess potential scanner drift effects and a post-hoc vertex-wise whole brain analysis with FreeSurfer Qdec (https://surfer.nmr.mgh.harvard.edu/fswiki/Qdec [5]).

  3. Longitudinal data on cortical thickness before and after working memory training.

    Science.gov (United States)

    Metzler-Baddeley, Claudia; Caeyenberghs, Karen; Foley, Sonya; Jones, Derek K

    2016-06-01

    The data and supplementary information provided in this article relate to our research article "Task complexity and location specific changes of cortical thickness in executive and salience networks after working memory training" (Metzler-Baddeley et al., 2016) [1]. We provide cortical thickness and subcortical volume data derived from parieto-frontal cortical regions and the basal ganglia with the FreeSurfer longitudinal analyses stream (http://surfer.nmr.mgh.harvard.edu [2]) before and after Cogmed working memory training (Cogmed and Cogmed Working Memory Training, 2012) [3]. This article also provides supplementary information to the research article, i.e., within-group comparisons between baseline and outcome cortical thickness and subcortical volume measures, between-group tests of performance changes in cognitive benchmark tests (www.cambridgebrainsciences.com [4]), correlation analyses between performance changes in benchmark tests and training-related structural changes, correlation analyses between the time spent training and structural changes, a scatterplot of the relationship between cortical thickness measures derived from the occipital lobe as control region and the chronological order of the MRI sessions to assess potential scanner drift effects and a post-hoc vertex-wise whole brain analysis with FreeSurfer Qdec (https://surfer.nmr.mgh.harvard.edu/fswiki/Qdec [5]).

  4. Inferring spatial and temporal behavioral patterns of free-ranging manatees using saltwater sensors of telemetry tags

    Science.gov (United States)

    Castelblanco-Martínez, Delma Nataly; Morales-Vela, Benjamin; Slone, Daniel H.; Padilla-Saldívar, Janneth Adriana; Reid, James P.; Hernández-Arana, Héctor Abuid

    2015-01-01

    Diving or respiratory behavior in aquatic mammals can be used as an indicator of physiological activity and consequently, to infer behavioral patterns. Five Antillean manatees, Trichechus manatus manatus, were captured in Chetumal Bay and tagged with GPS tracking devices. The radios were equipped with a micropower saltwater sensor (SWS), which records the times when the tag assembly was submerged. The information was analyzed to establish individual fine-scale behaviors. For each fix, we established the following variables: distance (D), sampling interval (T), movement rate (D/T), number of dives (N), and total diving duration (TDD). We used logic criteria and simple scatterplots to distinguish between behavioral categories: ‘Travelling’ (D/T ≥ 3 km/h), ‘Surface’ (↓TDD, ↓N), ‘Bottom feeding’ (↑TDD, ↑N) and ‘Bottom resting’ (↑TDD, ↓N). Habitat categories were qualitatively assigned: Lagoon, Channels, Caye shore, City shore, Channel edge, and Open areas. The instrumented individuals displayed a daily rhythm of bottom activities, with surfacing activities more frequent during the night and early in the morning. More investigation into those cycles and other individual fine-scale behaviors related to their proximity to concentrations of human activity would be informative.
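The logic criteria above translate directly into a small rule-based classifier. The 3 km/h travelling threshold comes from the abstract; the TDD and dive-count cut-offs (`tdd_hi`, `n_hi`) are illustrative placeholders, not the study's values:

```python
def classify_fix(rate_kmh, tdd, n, tdd_hi=30.0, n_hi=5):
    """Rule-based behavior labels following the abstract's logic criteria.
    tdd = total diving duration (min), n = number of dives per fix."""
    if rate_kmh >= 3.0:                     # D/T >= 3 km/h
        return "Travelling"
    if tdd >= tdd_hi:                       # long total diving duration
        return "Bottom feeding" if n >= n_hi else "Bottom resting"
    return "Surface"                        # short submergence overall

labels = [classify_fix(3.4, 5.0, 1),    # fast movement between fixes
          classify_fix(0.2, 45.0, 9),   # long TDD, many dives
          classify_fix(0.1, 50.0, 2),   # long TDD, few dives
          classify_fix(0.3, 4.0, 1)]    # short submergence
```

Applying such a function to every fix, then plotting D/T against TDD with the labels as colors, reproduces the kind of scatterplot-based separation the abstract describes.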

  5. Psychosocial wellbeing and physical health among Tamil schoolchildren in northern Sri Lanka.

    Science.gov (United States)

    Hamilton, Alexander; Foster, Charlie; Richards, Justin; Surenthirakumaran, Rajendra

    2016-01-01

    Mental disorders contribute to the global disease burden and have an increased prevalence among children in emergency settings. Good physical health is crucial for mental well-being, although physical health is multifactorial and the nature of this relationship is not fully understood. Using Sri Lanka as a case study, we assessed the baseline levels of, and the association between, mental health and physical health in Tamil school children. We conducted a cross-sectional study of mental and physical health in 10 schools in Kilinochchi town in northern Sri Lanka. All Grade 8 children attending selected schools were eligible to participate in the study. Mental health was assessed using the Sri Lankan Index for Psychosocial Stress - Child Version. Physical health was assessed using Body Mass Index for age, height for age Z scores and the Multi-stage Fitness Test. Association between physical and mental health variables was assessed using scatterplots, and correlation was assessed using Pearson's r. There were 461 participants included in the study. Girls significantly outperformed boys in the mental health testing, t(459) = 2.201, among Tamil school children.

  6. Can Confirmation Measures Reflect Statistically Sound Dependencies in Data? The Concordance-based Assessment

    Directory of Open Access Journals (Sweden)

    Susmaga Robert

    2018-03-01

    Full Text Available The paper considers particular interestingness measures, called confirmation measures (also known as Bayesian confirmation measures), used for the evaluation of “if evidence, then hypothesis” rules. The agreement of such measures with a statistically sound (significant) dependency between the evidence and the hypothesis in data is thoroughly investigated. The popular confirmation measures were not defined to possess such form of agreement. However, in error-prone environments, potential lack of agreement may lead to undesired effects, e.g., when a measure indicates either strong confirmation or strong disconfirmation, while in fact there is only weak dependency between the evidence and the hypothesis. In order to detect and prevent such situations, the paper employs a coefficient that allows assessing the level of dependency between the evidence and the hypothesis in data, and introduces a method of quantifying the level of agreement (referred to as concordance) between this coefficient and the measure being analysed. The concordance is characterized and visualised using specialized histograms, scatter-plots, etc. Moreover, risk-related interpretations of the concordance are introduced. Using a set of 12 confirmation measures, the paper presents experiments designed to establish the actual concordance as well as other useful characteristics of the measures.
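One of the popular confirmation measures studied in this literature, the difference measure d(H, E) = P(H|E) - P(H), can be sketched from the four cells of an evidence/hypothesis contingency table. The counts below are hypothetical:

```python
def confirmation_d(n_eh, n_e_nh, n_ne_h, n_ne_nh):
    """Difference confirmation measure d(H, E) = P(H|E) - P(H), computed
    from 2x2 counts: (E,H), (E,not-H), (not-E,H), (not-E,not-H)."""
    n = n_eh + n_e_nh + n_ne_h + n_ne_nh
    p_h = (n_eh + n_ne_h) / n              # marginal P(H)
    p_h_given_e = n_eh / (n_eh + n_e_nh)   # conditional P(H|E)
    return p_h_given_e - p_h

d = confirmation_d(40, 10, 20, 30)   # evidence raises P(H) from 0.6 to 0.8
```

Positive d indicates confirmation, negative d disconfirmation, and d = 0 independence; the paper's concordance analysis asks how reliably such signs track statistically significant dependency in noisy data.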

  7. Hardiness as a predictor of mental health and well-being of Australian army reservists on and after stability operations.

    Science.gov (United States)

    Orme, Geoffrey J; Kehoe, E James

    2014-04-01

    This study tested whether cognitive hardiness moderates the adverse effects of deployment-related stressors on the health and well-being of soldiers on short-tour (4-7 month) peacekeeping operations. Australian Army reservists (N = 448) were surveyed at the start, end, and up to 24 months after serving as peacekeepers in Timor-Leste or the Solomon Islands. They retained sound mental health throughout (Kessler 10, PTSD Checklist-Civilian, Depression Anxiety Stress Scale 42). Ratings of either traumatic or nontraumatic stress were low. Despite range restrictions, scores on the Cognitive Hardiness Scale moderated the relationship between deployment stressors and a composite measure of psychological distress. Scatterplots revealed an asymmetric pattern for hardiness scores and measures of psychological distress. When hardiness scores were low, psychological distress scores were widely dispersed. However, when hardiness scores were higher, psychological distress scores became concentrated at a uniformly low level. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.

  8. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    Full Text Available SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.
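The "efficient and equitable sampling of parameter space" such toolboxes provide typically means Latin hypercube sampling, which can be sketched in a few lines of Python (a generic illustration, not SaSAT's Matlab® implementation):

```python
import random

def latin_hypercube(n, dims, seed=42):
    """Latin hypercube sample of n points in [0, 1)^dims: each axis is
    split into n equal strata and every stratum is used exactly once."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)                 # random stratum order per axis
        cols.append([(s + rng.random()) / n for s in strata])
    return list(zip(*cols))                 # n points, each a dims-tuple

sample = latin_hypercube(10, 2)
```

Unlike plain random sampling, every marginal stratum is hit exactly once, so even small samples cover each parameter's range evenly; the unit-cube coordinates are then rescaled to the model's actual parameter ranges.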

  9. Functional analysis and classification of phytoplankton based on data from an automated flow cytometer.

    Science.gov (United States)

    Malkassian, Anthony; Nerini, David; van Dijk, Mark A; Thyssen, Melilotus; Mante, Claude; Gregori, Gerald

    2011-04-01

    Analytical flow cytometry (FCM) is well suited for the analysis of phytoplankton communities in fresh and sea waters. The measurement of light scatter and autofluorescence properties of particles by FCM provides optical fingerprints, which enables different phytoplankton groups to be separated. A submersible version of the CytoSense flow cytometer (the CytoSub) has been designed for in situ autonomous sampling and analysis, making it possible to monitor phytoplankton at a short temporal scale and obtain accurate information about its dynamics. For data analysis, a manual clustering is usually performed a posteriori: data are displayed on histograms and scatterplots, and group discrimination is made by drawing and combining regions (gating). The purpose of this study is to provide greater objectivity in the data analysis by applying a nonmanual and consistent method to automatically discriminate clusters of particles. In other words, we seek partitioning methods based on the optical fingerprints of each particle. As the CytoSense is able to record the full pulse shape for each variable, it quickly generates a large and complex dataset to analyze. The shape, length, and area of each curve were chosen as descriptors for the analysis. To test the developed method, numerical experiments were performed on simulated curves. Then, the method was applied and validated on phytoplankton cultures data. Promising results have been obtained with a mixture of various species whose optical fingerprints overlapped considerably and could not be accurately separated using manual gating. Copyright © 2011 International Society for Advancement of Cytometry.

  10. Data visualization, bar naked: A free tool for creating interactive graphics.

    Science.gov (United States)

    Weissgerber, Tracey L; Savic, Marko; Winham, Stacey J; Stanisavljevic, Dejana; Garovic, Vesna D; Milic, Natasa M

    2017-12-15

    Although bar graphs are designed for categorical data, they are routinely used to present continuous data in studies that have small sample sizes. This presentation is problematic, as many data distributions can lead to the same bar graph, and the actual data may suggest different conclusions from the summary statistics. To address this problem, many journals have implemented new policies that require authors to show the data distribution. This paper introduces a free, web-based tool for creating an interactive alternative to the bar graph (http://statistika.mfub.bg.ac.rs/interactive-dotplot/). This tool allows authors with no programming expertise to create customized interactive graphics, including univariate scatterplots, box plots, and violin plots, for comparing values of a continuous variable across different study groups. Individual data points may be overlaid on the graphs. Additional features facilitate visualization of subgroups or clusters of non-independent data. A second tool enables authors to create interactive graphics from data obtained with repeated independent experiments (http://statistika.mfub.bg.ac.rs/interactive-repeated-experiments-dotplot/). These tools are designed to encourage exploration and critical evaluation of the data behind the summary statistics and may be valuable for promoting transparency, reproducibility, and open science in basic biomedical research. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
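The core problem the tool addresses, namely that very different distributions can share one bar graph, is easy to demonstrate, and the univariate scatterplot alternative reduces to computing jittered point coordinates. This sketch uses hypothetical data and is not the web tool's implementation:

```python
import statistics

# two hypothetical groups whose bar graphs (mean ~ 5) would look identical
group_a = [4.8, 4.9, 5.0, 5.1, 5.2]    # tight, unimodal
group_b = [1.0, 1.2, 5.0, 8.8, 9.0]    # bimodal, same mean
means = (statistics.fmean(group_a), statistics.fmean(group_b))

def dot_coords(groups, jitter=0.08):
    """x/y coordinates for a univariate scatterplot: one column per group,
    with small horizontal jitter so overlapping points stay visible."""
    return [(gi + jitter * (i - (len(g) - 1) / 2), v)
            for gi, g in enumerate(groups)
            for i, v in enumerate(g)]

coords = dot_coords([group_a, group_b])
```

Both bars would show the same height, yet plotting `coords` reveals one tight cluster and one bimodal spread, which is exactly the information the bar graph discards.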

  11. Experimental and theoretical high energy physics research

    International Nuclear Information System (INIS)

    Cline, D.B.

    1993-01-01

    Progress on seven tasks is reported. (I) UCLA hadronization model, antiproton decay, PEP4/9 e+e- analysis: In addition to these topics, work on CP and CPT phenomenology at a φ factory and letters of support on the hadronization project are included. (II) ICARUS detector and rare B decays with hadron beams and colliders: Developments are summarized and some typical events are shown; in addition, the RD5 collaboration at CERN and the asymmetric φ factory project are sketched. (III) Theoretical physics: Feynman diagram calculations in gauge theory; supersymmetric standard model; effects of quantum gravity in breaking of global symmetries; models of quark and lepton substructure; renormalized field theory; large-scale structure in the universe and particle-astrophysics/early universe cosmology. (IV) H dibaryon search at BNL, kaon experiments (E799/KTeV) at Fermilab: Project design and some scatterplots are given. (V) UCLA participation in the experiment CDF at Fermilab. (VI) Detectors for hadron physics at ultrahigh energy colliders: Scintillating fiber and visible light photon counter research. (VII) Administrative support and conference organization.

  12. Grassroots Numeracy

    Directory of Open Access Journals (Sweden)

    H.L. Vacher

    2016-07-01

    Full Text Available The readers and authors of papers in Numeracy compose a multidisciplinary grassroots interest group that is defining and illustrating the meaning, content, and scope of quantitative literacy (QL) and how it intersects with educational goals and practice. The 161 Numeracy papers that have been produced by this QL community were downloaded 42,085 times in a total of 178 countries, including all 34 OECD countries, during 2015 and the first quarter of 2016. A scatterplot of normalized downloads per month vs. normalized total downloads for the eight years of Numeracy’s life allows identification of the 24 “most popular” of the 161 papers. These papers, which range over a wide landscape of subjects, were produced by a total of 41 authors, only nine of whom are mathematicians. The data clearly show that the QL community is not just a bunch of mathematicians talking amongst themselves. Rather the community is a vibrant mix of mathematicians and users and friends of mathematics. The heterogeneity of this grassroots community, and Numeracy’s commitment to serve it, dictates our mode of publication and the nature of our peer review. The journal is assertively open access for readers and free of page charges and processing fees to authors. The peer-review process is designed to provide constructive feedback to promote effective communication of the diverse activities and interests of a community that brings with it a multitude of publication cultures and experiences.

  13. Uncertainty and sensitivity analyses for gas and brine migration at the Waste Isolation Pilot Plant, May 1992

    International Nuclear Information System (INIS)

    Helton, J.C.; Bean, J.E.; Butcher, B.M.; Garner, J.W.; Vaughn, P.; Schreiber, J.D.; Swift, P.N.

    1993-08-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis, stepwise regression analysis and examination of scatterplots are used in conjunction with the BRAGFLO model to examine two-phase flow (i.e., gas and brine) at the Waste Isolation Pilot Plant (WIPP), which is being developed by the US Department of Energy as a disposal facility for transuranic waste. The analyses consider either a single waste panel or the entire repository in conjunction with the following cases: (1) fully consolidated shaft, (2) system of shaft seals with panel seals, and (3) single shaft seal without panel seals. The purpose of this analysis is to develop insights into factors that are potentially important in showing compliance with applicable regulations of the US Environmental Protection Agency (i.e., 40 CFR 191, Subpart B; 40 CFR 268). The primary topics investigated are (1) gas production due to corrosion of steel, (2) gas production due to microbial degradation of cellulosics, (3) gas migration into anhydrite marker beds in the Salado Formation, (4) gas migration through a system of shaft seals to overlying strata, and (5) gas migration through a single shaft seal to overlying strata. Important variables identified in the analyses include initial brine saturation of the waste, stoichiometric terms for corrosion of steel and microbial degradation of cellulosics, gas barrier pressure in the anhydrite marker beds, shaft seal permeability, and panel seal permeability.
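
    The sampling step named in this abstract can be sketched in a few lines. Below is a minimal Latin hypercube sampler on the unit hypercube; it is illustrative only (the function name and interface are assumptions, not the BRAGFLO analysis code), and in practice the unit-interval draws would be mapped through each parameter's inverse CDF.

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Minimal Latin hypercube sampler on the unit hypercube:
    each variable receives exactly one uniform draw from each of
    n_samples equal-probability strata, shuffled so the strata are
    decoupled across variables."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        # one uniform draw inside each stratum [i/n, (i+1)/n)
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    # transpose so each row is one sample point
    return [list(row) for row in zip(*columns)]
```

    The stratification guarantees that every marginal is covered evenly even with few model runs, which is why the design is favored for expensive simulators such as two-phase flow codes.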

  14. A new methodology of spatial cross-correlation analysis.

    Science.gov (United States)

    Chen, Yanguang

    2015-01-01

    Spatial correlation modeling comprises both spatial autocorrelation and spatial cross-correlation processes. The spatial autocorrelation theory has been well-developed. It is necessary to advance the method of spatial cross-correlation analysis to supplement the autocorrelation analysis. This paper presents a set of models and analytical procedures for spatial cross-correlation analysis. By analogy with Moran's index newly expressed in a spatial quadratic form, a theoretical framework is derived for geographical cross-correlation modeling. First, two sets of spatial cross-correlation coefficients are defined, including a global spatial cross-correlation coefficient and local spatial cross-correlation coefficients. Second, a pair of scatterplots of spatial cross-correlation is proposed, and the plots can be used to visually reveal the causality behind spatial systems. Based on the global cross-correlation coefficient, Pearson's correlation coefficient can be decomposed into two parts: direct correlation (partial correlation) and indirect correlation (spatial cross-correlation). As an example, the methodology is applied to the relationships between China's urbanization and economic development to illustrate how to model spatial cross-correlation phenomena. This study is an introduction to developing the theory of spatial cross-correlation, and future geographical spatial analysis might benefit from these models and indexes.

  15. Matisse: A Visual Analytics System for Exploring Emotion Trends in Social Media Text Streams

    Energy Technology Data Exchange (ETDEWEB)

    Steed, Chad A [ORNL; Drouhard, Margaret MEG G [ORNL; Beaver, Justin M [ORNL; Pyle, Joshua M [ORNL; BogenII, Paul L. [Google Inc.

    2015-01-01

    Dynamically mining textual information streams to gain real-time situational awareness is especially challenging with social media systems where throughput and velocity properties push the limits of a static analytical approach. In this paper, we describe an interactive visual analytics system, called Matisse, that aids with the discovery and investigation of trends in streaming text. Matisse addresses the challenges inherent to text stream mining through the following technical contributions: (1) robust stream data management, (2) automated sentiment/emotion analytics, (3) interactive coordinated visualizations, and (4) a flexible drill-down interaction scheme that accesses multiple levels of detail. In addition to positive/negative sentiment prediction, Matisse provides fine-grained emotion classification based on Valence, Arousal, and Dominance dimensions and a novel machine learning process. Information from the sentiment/emotion analytics is fused with raw data and summary information to feed temporal, geospatial, term frequency, and scatterplot visualizations using a multi-scale, coordinated interaction model. After describing these techniques, we conclude with a practical case study focused on analyzing the Twitter sample stream during the week of the 2013 Boston Marathon bombings. The case study demonstrates the effectiveness of Matisse at providing guided situational awareness of significant trends in social media streams by orchestrating computational power and human cognition.

  16. P-Splines Using Derivative Information

    KAUST Repository

    Calderon, Christopher P.

    2010-01-01

    Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information about complex biomolecular systems. We demonstrate how a collection of Penalized-splines (P-splines) can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale. P-splines assist in "semiparametrically" estimating nonlinear SDEs in situations where a time-dependent external force is applied to a single-molecule system. The P-splines introduced simultaneously use function and derivative scatterplot information to refine curve estimates. We refer to the approach as the PuDI (P-splines using Derivative Information) method. It is shown how generalized least squares ideas fit seamlessly into the PuDI method. Applications demonstrating how utilizing uncertainty information/approximations along with generalized least squares techniques improve PuDI fits are presented. Although the primary application here is in estimating nonlinear SDEs, the PuDI method is applicable to situations where both unbiased function and derivative estimates are available.

  17. Can telemetry data obviate the need for sleep studies in Pierre Robin Sequence?

    Science.gov (United States)

    Aaronson, Nicole Leigh; Jabbour, Noel

    2017-09-01

    This study seeks to correlate telemetry data gathered on patients with Pierre Robin Sequence (PRS) with sleep study data. Strong correlation might allow obstructive sleep apnea (OSA) to be reasonably predicted without the need for sleep study. Charts from forty-six infants with PRS who presented to our children's hospital between 2005 and 2015 and received a polysomnogram (PSG) prior to surgical intervention were retrospectively reviewed. Correlations and scatterplots were used to compare average daily oxygen nadir, overall oxygen nadir, and average number of daily desaturations from telemetry data with apnea-hypopnea index (AHI) and oxygen nadir on sleep study. Results were also categorized into groups of AHI ≥ or sleep study data. Patients with O2 nadir below 80% on telemetry were not more likely to have an O2 nadir below 80% on sleep study. Patients with an average O2 nadir below 80% did show some correlation with having an AHI greater than 10 on sleep study but this relationship did not reach significance. Of 22 patients who did not have any desaturations on telemetry below 80%, 16 (73%) had an AHI >10 on sleep study. In the workup of infants with PRS, the index of suspicion is high for OSA. In our series, telemetry data were not useful in ruling out severe OSA. Thus, our data do not support forgoing sleep study in patients with PRS and concern for OSA despite normal telemetry patterns. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    Science.gov (United States)

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

    We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another more familiar one, where the in-betweens are designed to bridge the gap of these two visualizations and explain the difference in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea by way of four visualization pairs: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative, i.e., either member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all of the four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings can be identified, and they will hopefully inspire other visualization morphings and associated transition strategies.

  19. Monitoring gender remuneration inequalities in academia using biplots

    Directory of Open Access Journals (Sweden)

    IS Walters

    2008-06-01

    Full Text Available Gender remuneration inequalities at universities have been studied in various parts of the world. In South Africa, the responsibility largely rests with individual higher education institutions to establish levels of pay for male and female academic staff members. The multidimensional character of the gender wage gap includes gender differentials in research output, age, academic rank and qualifications. The aim in this paper is to demonstrate the use of modern biplot methodology for describing and monitoring changes in the gender remuneration gap over time. A biplot is considered as a multivariate extension of an ordinary scatterplot. Our case study includes the permanent full-time academic staff at Stellenbosch University for the period 2002 to 2005. We constructed canonical variate analysis (CVA) biplots with 90% alpha bags for the five-dimensional data collected for males and females in 2002 and 2005, aggregated over faculties as well as for each faculty separately. The biplots illustrate, for our case study, that rank, age, research output and qualifications are related to remuneration. The CVA biplots show narrowing, widening and constant gender remuneration gaps in different faculties.

  20. A New Methodology of Spatial Cross-Correlation Analysis

    Science.gov (United States)

    Chen, Yanguang

    2015-01-01

    Spatial correlation modeling comprises both spatial autocorrelation and spatial cross-correlation processes. The spatial autocorrelation theory has been well-developed. It is necessary to advance the method of spatial cross-correlation analysis to supplement the autocorrelation analysis. This paper presents a set of models and analytical procedures for spatial cross-correlation analysis. By analogy with Moran’s index newly expressed in a spatial quadratic form, a theoretical framework is derived for geographical cross-correlation modeling. First, two sets of spatial cross-correlation coefficients are defined, including a global spatial cross-correlation coefficient and local spatial cross-correlation coefficients. Second, a pair of scatterplots of spatial cross-correlation is proposed, and the plots can be used to visually reveal the causality behind spatial systems. Based on the global cross-correlation coefficient, Pearson’s correlation coefficient can be decomposed into two parts: direct correlation (partial correlation) and indirect correlation (spatial cross-correlation). As an example, the methodology is applied to the relationships between China’s urbanization and economic development to illustrate how to model spatial cross-correlation phenomena. This study is an introduction to developing the theory of spatial cross-correlation, and future geographical spatial analysis might benefit from these models and indexes. PMID:25993120

  1. New approaches for calculating Moran's index of spatial autocorrelation.

    Science.gov (United States)

    Chen, Yanguang

    2013-01-01

    Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran's index and Geary's coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient will be clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovative models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation.
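
    As a concrete reference point for the reformulation discussed in this abstract, here is the textbook global Moran's index in a few lines of Python. This is the plain classic estimator, not the paper's simplified approaches; the function name is illustrative.

```python
def morans_i(x, w):
    """Classic global Moran's I:
    I = (n / S0) * sum_ij w[i][j] * z_i * z_j / sum_i z_i**2,
    where z is the mean-centred variable and S0 the total weight."""
    n = len(x)
    mean = sum(x) / n
    z = [v - mean for v in x]
    s0 = sum(sum(row) for row in w)
    num = sum(w[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
    return (n / s0) * num / sum(v * v for v in z)
```

    On a four-node chain, a monotone sequence gives a positive index while a checkerboard pattern gives I = -1, matching the usual interpretation of positive and negative spatial autocorrelation.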

  2. New approaches for calculating Moran's index of spatial autocorrelation.

    Directory of Open Access Journals (Sweden)

    Yanguang Chen

    Full Text Available Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran's index and Geary's coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient will be clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovative models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation.

  3. OpinionSeer: interactive visualization of hotel customer feedback.

    Science.gov (United States)

    Wu, Yingcai; Wei, Furu; Liu, Shixia; Au, Norman; Cui, Weiwei; Zhou, Hong; Qu, Huamin

    2010-01-01

    The rapid development of Web technology has resulted in an increasing number of hotel customers sharing their opinions on the hotel services. Effective visual analysis of online customer opinions is needed, as it has a significant impact on building a successful business. In this paper, we present OpinionSeer, an interactive visualization system that can visually analyze a large collection of online hotel customer reviews. The system is built on a new visualization-centric opinion mining technique that considers uncertainty for faithfully modeling and analyzing customer opinions. A new visual representation is developed to convey customer opinions by augmenting well-established scatterplots and radial visualization. To provide multiple-level exploration, we introduce subjective logic to handle and organize subjective opinions with degrees of uncertainty. Several case studies illustrate the effectiveness and usefulness of OpinionSeer in analyzing relationships among multiple data dimensions and comparing opinions of different groups. Aside from data on hotel customer feedback, OpinionSeer could also be applied to visually analyze customer opinions on other products or services.

  4. Respiratory alkalosis and primary hypocapnia in Labrador Retrievers participating in field trials in high-ambient-temperature conditions.

    Science.gov (United States)

    Steiss, Janet E; Wright, James C

    2008-10-01

    To determine whether Labrador Retrievers participating in field trials develop respiratory alkalosis and hypocapnia primarily in conditions of high ambient temperatures. 16 Labrador Retrievers. At each of 5 field trials, 5 to 10 dogs were monitored during a test (retrieval of birds over a variable distance on land [1,076 to 2,200 m]; 36 assessments); ambient temperatures ranged from 2.2 degrees to 29.4 degrees C. For each dog, rectal temperature was measured and a venous blood sample was collected in a heparinized syringe within 5 minutes of test completion. Blood samples were analyzed on site for Hct; pH; sodium, potassium, ionized calcium, glucose, lactate, bicarbonate, and total CO2 concentrations; and values of PvO2 and PvCO2. Scatterplots of each variable versus ambient temperature were reviewed. Regression analysis was used to evaluate the effect of ambient temperature ( 21 degrees C) on each variable. Compared with findings at ambient temperatures 21 degrees C; rectal temperature did not differ. Two dogs developed signs of heat stress in 1 test at an ambient temperature of 29 degrees C; their rectal temperatures were higher and PvCO2 values were lower than findings in other dogs. When running distances frequently encountered at field trials, healthy Labrador Retrievers developed hyperthermia regardless of ambient temperature. Dogs developed respiratory alkalosis and hypocapnia at ambient temperatures > 21 degrees C.

  5. Polar cap mesosphere wind observations: comparisons of simultaneous measurements with a Fabry-Perot interferometer and a field-widened Michelson interferometer.

    Science.gov (United States)

    Fisher, G M; Killeen, T L; Wu, Q; Reeves, J M; Hays, P B; Gault, W A; Brown, S; Shepherd, G G

    2000-08-20

    Polar cap mesospheric winds observed with a Fabry-Perot interferometer with a circle-to-line interferometer optical (FPI/CLIO) system have been compared with measurements from a field-widened Michelson interferometer optimized for E-region winds (ERWIN). Both instruments observed the Meinel OH emission emanating from the mesopause region (approximately 86 km) at Resolute Bay, Canada (74.9 degrees N, 94.9 degrees W). This is the first time, to our knowledge, that winds measured simultaneously from a ground-based Fabry-Perot interferometer and a ground-based Michelson interferometer have been compared at the same location. The FPI/CLIO and ERWIN instruments both have a capability for high temporal resolution (less than 10 min for a full scan in the four cardinal directions and the zenith). Statistical comparisons of hourly mean winds for both instruments by scatterplots show excellent agreement, indicating that the two optical techniques provide equivalent observations of mesopause winds. Small deviations in the measured wind can be ascribed to the different zenith angles used by the two instruments. The combined measurements illustrate the dominance of the 12-h wave in the mesopause winds at Resolute Bay, with additional evidence for strong gravity wave activity with much shorter periods (tens of minutes). Future operations of the two instruments will focus on observation of complementary emissions, providing a unique passive optical capability for the determination of neutral winds in the geomagnetic polar cap at various altitudes near the mesopause.

  6. Sequential simulation approach to modeling of multi-seam coal deposits with an application to the assessment of a Louisiana lignite

    Science.gov (United States)

    Olea, Ricardo A.; Luppens, James A.

    2012-01-01

    There are multiple ways to characterize uncertainty in the assessment of coal resources, but not all of them are equally satisfactory. Increasingly, the tendency is toward borrowing from the statistical tools developed in the last 50 years for the quantitative assessment of other mineral commodities. Here, we briefly review the most recent of such methods and formulate a procedure for the systematic assessment of multi-seam coal deposits taking into account several geological factors, such as fluctuations in thickness, erosion, oxidation, and bed boundaries. A lignite deposit explored in three stages is used for validating models based on comparing a first set of drill holes against data from infill and development drilling. Results were fully consistent with reality, providing a variety of maps, histograms, and scatterplots characterizing the deposit and associated uncertainty in the assessments. The geostatistical approach was particularly informative in providing a probability distribution modeling deposit-wide uncertainty about total resources and a cumulative distribution of coal tonnage as a function of local uncertainty.

  7. Asthma Is More Severe in Older Adults

    Science.gov (United States)

    Dweik, Raed A.; Comhair, Suzy A.; Bleecker, Eugene R.; Moore, Wendy C.; Peters, Stephen P.; Busse, William W.; Jarjour, Nizar N.; Calhoun, William J.; Castro, Mario; Chung, K. Fan; Fitzpatrick, Anne; Israel, Elliot; Teague, W. Gerald; Wenzel, Sally E.; Love, Thomas E.; Gaston, Benjamin M.

    2015-01-01

    Background Severe asthma occurs more often in older adult patients. We hypothesized that the greater risk for severe asthma in older individuals is due to aging, and is independent of asthma duration. Methods This is a cross-sectional study of prospectively collected data from adult participants (N=1130; 454 with severe asthma) enrolled from 2002 to 2011 in the Severe Asthma Research Program. Results The association between age and the probability of severe asthma, which was assessed by applying a Locally Weighted Scatterplot Smoother, revealed an inflection point at age 45 for risk of severe asthma. The probability of severe asthma increased with each year of life until 45 years and thereafter increased at a much slower rate. Asthma duration also increased the probability of severe asthma but had less effect than aging. After adjustment for most comorbidities of aging and for asthma duration using logistic regression, asthmatics older than 45 maintained the greater probability of severe asthma [OR: 2.73 (95% CI: 1.96-3.81)]. After 45, the age-related risk of severe asthma continued to increase in men, but not in women. Conclusions Overall, the impact of age and asthma duration on risk for asthma severity in men and women is greatest between 18 and 45 years of age; age has a greater effect than asthma duration on risk of severe asthma. PMID:26200463
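
    The Locally Weighted Scatterplot Smoother used in this analysis is, at its core, a local linear fit with tricube weights at each point. The sketch below is a minimal, non-robust version for illustration (in practice one would reach for statsmodels' lowess, which adds robustness iterations); the function name and default span are assumptions.

```python
def lowess(xs, ys, frac=0.5):
    """Minimal LOWESS: at each x0, fit a weighted linear regression
    over the nearest frac*n points, with tricube weights that fall
    to zero at the span boundary. No robustness iterations."""
    n = len(xs)
    k = max(2, int(frac * n))
    fitted = []
    for x0 in xs:
        dists = sorted(abs(x - x0) for x in xs)
        h = dists[k - 1] or 1e-12  # span radius (guard against zero)
        w = [(1 - min(abs(x - x0) / h, 1.0) ** 3) ** 3 for x in xs]
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        denom = sw * sxx - sx * sx
        if abs(denom) < 1e-12:
            fitted.append(sy / sw)  # degenerate: weighted mean
        else:
            b = (sw * sxy - sx * sy) / denom
            a = (sy - b * sx) / sw
            fitted.append(a + b * x0)
    return fitted
```

    Because each fit is local, the smoother can reveal features such as the inflection point at age 45 reported above, which a single global regression would average away.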

  8. Space-partition method for the variance-based sensitivity analysis: Optimal partition scheme and comparative study

    International Nuclear Information System (INIS)

    Zhai, Qingqing; Yang, Jun; Zhao, Yu

    2014-01-01

    Variance-based sensitivity analysis has been widely studied and asserted itself among practitioners. Monte Carlo simulation methods are well developed in the calculation of variance-based sensitivity indices but they do not make full use of each model run. Recently, several works mentioned a scatterplot-partitioning method to estimate the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method in the estimation of variance-based sensitivity indices, and its convergence and other performances are investigated. Since the method heavily depends on the partition scheme, the influence of the partition scheme is discussed and the optimal partition scheme is proposed based on minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one.
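
    The scatterplot-partitioning idea named in this abstract (estimate Var(E[Y|X_i]) by slicing one input's axis into bins and averaging Y within each slice) can be sketched as follows. This uses a simple equal-count partition for illustration, not the paper's optimal scheme, and the function name is an assumption.

```python
def first_order_index(x, y, n_bins=10):
    """Scatterplot-partition estimate of a first-order sensitivity
    index: sort the (x, y) pairs by x, slice them into equal-count
    bins, and return Var(E[Y | bin]) / Var(Y)."""
    n = len(x)
    pairs = sorted(zip(x, y))
    edges = [i * n // n_bins for i in range(n_bins + 1)]
    bins = [pairs[a:b] for a, b in zip(edges, edges[1:]) if b > a]
    mean_y = sum(y) / n
    var_y = sum((v - mean_y) ** 2 for v in y) / n
    # between-bin variance of the conditional means, weighted by bin size
    var_between = sum(
        len(b) * (sum(v for _, v in b) / len(b) - mean_y) ** 2 for b in bins
    ) / n
    return var_between / var_y
```

    A single sample set suffices: re-sorting the same runs along each input's axis in turn yields all first-order indices, which is exactly the economy over plain Monte Carlo that the abstract points out.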

  9. Oral mucosal color changes as a clinical biomarker for cancer detection.

    Science.gov (United States)

    Latini, Giuseppe; De Felice, Claudio; Barducci, Alessandro; Chitano, Giovanna; Pignatelli, Antonietta; Grimaldi, Luca; Tramacere, Francesco; Laurini, Ricardo; Andreassi, Maria Grazia; Portaluri, Maurizio

    2012-07-01

    Screening is a key tool for early cancer detection/prevention and potentially saves lives. Oral mucosal vascular aberrations and color changes have been reported in hereditary nonpolyposis colorectal cancer patients, possibly reflecting a subclinical extracellular matrix abnormality implicated in the general process of cancer development. Reasoning that physicochemical changes of a tissue should affect its optical properties, we investigated the diagnostic ability of oral mucosal color to identify patients with several types of cancer. A total of 67 patients with several histologically proven malignancies at different stages were enrolled along with a group of 60 healthy controls of comparable age and sex ratio. Oral mucosal color was measured in selected areas, and then univariate, cluster, and principal component analyses were carried out. Lower red and green and higher blue values were significantly associated with evidence of cancer (all Pgreen coordinates. Likewise, the second principal component coordinate of the red-green clusters discriminated patients from controls with 98.2% sensitivity and 95% specificity (cut-off criterion ≤ 0.4547; P=0.0001). The scatterplots of the chrominances revealed the formation of two well separated clusters, separating cancer patients from controls with a 99.4% probability of correct classification. These findings highlight the ability of oral color to encode clinically relevant biophysical information. In the near future, this low-cost and noninvasive method may become a useful tool for early cancer detection.

  10. Barnyard millet global core collection evaluation in the submontane Himalayan region of India using multivariate analysis

    Directory of Open Access Journals (Sweden)

    Salej Sood

    2015-12-01

    Full Text Available Barnyard millet (Echinochloa spp. is one of the most under-researched crops with respect to characterization of genetic resources and genetic enhancement. A total of 95 germplasm lines representing a global collection were evaluated in two rainy seasons at Almora, Uttarakhand, India for qualitative and quantitative traits and the data were subjected to multivariate analysis. High variation was observed for days to maturity, five-ear grain weight, and yield components. The first three principal component axes explained 73% of the total multivariate variation. Three major groups were detected by projection of the accessions on the first two principal components. The separation of accessions was based mainly on trait morphology. Almost all Indian and origin-unknown accessions grouped together to form an Echinochloa frumentacea group. Japanese accessions grouped together except for a few outliers to form an Echinochloa esculenta group. The third group contained accessions from Russia, Japan, Cameroon, and Egypt. They formed a separate group on the scatterplot and represented accessions with lower values for all traits except basal tiller number. The interrelationships between the traits indicated that accessions with tall plants, long and broad leaves, longer inflorescences, and greater numbers of racemes should be given priority as donors or parents in varietal development initiatives. Cluster analysis identified two main clusters based on agro-morphological characters.

  11. Quantitative skeletal evaluation based on cervical vertebral maturation: a longitudinal study of adolescents with normal occlusion.

    Science.gov (United States)

    Chen, L; Liu, J; Xu, T; Long, X; Lin, J

    2010-07-01

    The study aims were to investigate the correlation between vertebral shape and hand-wrist maturation and to select characteristic parameters of C2-C5 (the second to fifth cervical vertebrae) for cervical vertebral maturation determination by mixed longitudinal data. 87 adolescents (32 males, 55 females) aged 8-18 years with normal occlusion were studied. Sequential lateral cephalograms and hand-wrist radiographs were taken annually for 6 consecutive years. Lateral cephalograms were divided into 11 maturation groups according to Fishman Skeletal Maturity Indicators (SMI). 62 morphological measurements of C2-C5 at 11 different developmental stages (SMI1-11) were measured and analysed. Locally weighted scatterplot smoothing, correlation coefficient analysis and variable cluster analysis were used for statistical analysis. Of the 62 cervical vertebral parameters, 44 were positively correlated with SMI, 6 were negatively correlated and 12 were not correlated. The correlation coefficients between cervical vertebral parameters and SMI were relatively high. Characteristic parameters for quantitative analysis of cervical vertebral maturation were selected. In summary, cervical vertebral maturation could be used reliably to evaluate the skeletal stage instead of the hand-wrist radiographic method. Selected characteristic parameters offered a simple and objective reference for the assessment of skeletal maturity and timing of orthognathic surgery. Copyright 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  12. Mobile gait analysis via eSHOEs instrumented shoe insoles: a pilot study for validation against the gold standard GAITRite®.

    Science.gov (United States)

    Jagos, Harald; Pils, Katharina; Haller, Michael; Wassermann, Claudia; Chhatwal, Christa; Rafolt, Dietmar; Rattay, Frank

    2017-07-01

    Clinical gait analysis contributes massively to rehabilitation support and improvement of in-patient care. The research project eSHOE aspires to be a useful addition to the rich variety of gait analysis systems. It was designed to fill the gap of affordable, reasonably accurate and highly mobile measurement devices, with the overall goal of enabling individual home-based monitoring and training for people suffering from chronic diseases affecting the locomotor system. Motion and pressure sensors gather movement data directly on the users' feet, store them locally and/or transmit them wirelessly to a PC. A combination of pattern recognition and feature extraction algorithms translates the motion data into standard gait parameters. The accuracy of eSHOE was evaluated against the reference system GAITRite in a clinical pilot study. Eleven hip fracture patients (78.4 ± 7.7 years) and twelve healthy subjects (40.8 ± 9.1 years) were included in these trials. All subjects performed three measurements at a comfortable walking speed over 8 m, including the 6-m long GAITRite mat. Six standard gait parameters were extracted from a total of 347 gait cycles. Agreement was analysed via scatterplots, histograms and Bland-Altman plots. In the patient group, the average differences between eSHOE and GAITRite range from -0.046 to 0.045 s, and in the healthy group from -0.029 to 0.029 s. It can therefore be concluded that eSHOE delivers adequately accurate results, especially with the prospect of use as an at-home supplement or follow-up to clinical gait analysis, and compares well with other state-of-the-art wearable motion analysis systems.
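
    The Bland-Altman agreement analysis mentioned here reduces to two numbers per parameter: the mean difference (bias) between the paired measurements and the 95% limits of agreement at bias ± 1.96 SD of the differences. A minimal computation, for illustration (the study's exact statistics may differ):

```python
def bland_altman(a, b):
    """Bland-Altman summary for two paired measurement series:
    returns (bias, lower limit, upper limit), where the limits of
    agreement are bias +/- 1.96 * sample SD of the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

    A plot of the pairwise differences against the pairwise means, with these three horizontal lines overlaid, is the Bland-Altman plot the abstract refers to.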

  13. Validating hospital antibiotic purchasing data as a metric of inpatient antibiotic use.

    Science.gov (United States)

    Tan, Charlie; Ritchie, Michael; Alldred, Jason; Daneman, Nick

    2016-02-01

    Antibiotic purchasing data are a widely used, but unsubstantiated, measure of antibiotic consumption. To validate this source, we compared purchasing data from hospitals and external medical databases with patient-level dispensing data. Antibiotic purchasing and dispensing data from internal hospital records and purchasing data from IMS Health were obtained for two hospitals between May 2013 and April 2015. Internal purchasing data were validated against dispensing data, and IMS data were compared with both internal metrics. Scatterplots of individual antimicrobial data points were generated; Pearson's correlation and linear regression coefficients were computed. A secondary analysis re-examined these correlations over shorter calendar periods. Internal purchasing data were strongly correlated with dispensing data, with correlation coefficients of 0.90 (95% CI = 0.83-0.95) and 0.98 (95% CI = 0.95-0.99) at hospitals A and B, respectively. Although dispensing data were consistently lower than purchasing data, this was attributed to a single antibiotic at both hospitals. IMS data were favourably correlated with, but underestimated, internal purchasing and dispensing data. This difference was accounted for by eight antibiotics for which direct sales from some manufacturers were not included in the IMS database. The correlation between purchasing and dispensing data was consistent across periods as short as 3 months, but not at monthly intervals. Both internal and external antibiotic purchasing data are strongly correlated with dispensing data. If outliers are accounted for appropriately, internal purchasing data could be used for cost-effective evaluation of antimicrobial stewardship programmes, and external data sets could be used for surveillance and research across geographical regions. © The Author 2015. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved.
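As an illustration of the correlation step described above, Pearson's r between per-antibiotic purchasing and dispensing totals can be computed directly; the figures below are hypothetical, not taken from the study:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical per-antibiotic totals (e.g. defined daily doses)
purchased = [120, 340, 80, 560, 210]
dispensed = [115, 330, 85, 540, 200]
r = pearson_r(purchased, dispensed)
```

An r close to 1, as reported for both hospitals, indicates that purchasing volumes track dispensing volumes almost linearly.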

  14. Developing a source-receptor methodology for the characterization of VOC sources in ambient air

    International Nuclear Information System (INIS)

    Borbon, A.; Badol, C.; Locoge, N.

    2005-01-01

    Since 2001, continuous monitoring of about thirty ozone-precursor non-methane hydrocarbons (NMHC) has been conducted in several urban areas in France. The automated system for NMHC monitoring consists of sub-ambient preconcentration on a cooled multi-sorbent trap, followed by thermal desorption and bidimensional Gas Chromatography/Flame Ionisation Detection analysis. The great number of data collected and their exploitation should provide a qualitative and quantitative assessment of hydrocarbon sources. This should help in the definition of relevant strategies of emission regulation as stated by the European Directive relative to ozone in ambient air (2002/3/EC). The purpose of this work is to present the bases and the contributions of an original methodology known as source-receptor for the characterization of NMHC sources. It is a statistical and diagnostic approach, adaptable and transposable to all urban sites, which integrates the spatial and temporal dynamics of the emissions. The methods for source identification combine descriptive and more complex complementary approaches: 1) a univariate approach through the analysis of NMHC time series and concentration roses; 2) a bivariate approach through a Graphical Ratio Analysis and a characterization of the scatterplot distributions of hydrocarbon pairs; 3) a multivariate approach with Principal Component Analyses on various time bases. A linear regression model is finally developed to estimate the spatial and temporal source contributions. Apart from vehicle exhaust emissions, sources of interest are: combustion and fossil fuel-related activities, petrol and/or solvent evaporation, the double anthropogenic and biogenic origin of isoprene, and other industrial activities depending on local parameters. (author)

  15. Using decision analysis to determine the cost-effectiveness of intensity-modulated radiation therapy in the treatment of intermediate risk prostate cancer

    International Nuclear Information System (INIS)

    Konski, Andre; Watkins-Bruner, Deborah; Feigenberg, Steven; Hanlon, Alexandra; Kulkarni, Sachin M.S.; Beck, J. Robert; Horwitz, Eric M.; Pollack, Alan

    2006-01-01

    Background: The specific aim of this study is to evaluate the cost-effectiveness of intensity-modulated radiation therapy (IMRT) compared with three-dimensional conformal radiation therapy (3D-CRT) in the treatment of a 70-year-old with intermediate-risk prostate cancer. Methods: A Markov model was designed with the following states: posttreatment, hormone therapy, chemotherapy, and death. Transition probabilities from one state to another were calculated from rates derived from the literature for IMRT and 3D-CRT. Utility values for each health state were obtained from preliminary studies of preferences conducted at Fox Chase Cancer Center. The analysis took a payer's perspective. Expected mean costs, cost-effectiveness scatterplots, and cost acceptability curves were calculated with commercially available software. Results: The expected mean cost of patients undergoing IMRT was $47,931 with a survival of 6.27 quality-adjusted life years (QALYs). The expected mean cost of patients having 3D-CRT was $21,865 with a survival of 5.62 QALYs. The incremental cost-effectiveness ratio comparing IMRT with 3D-CRT was $40,101/QALY. Cost-effectiveness acceptability curve analysis revealed a 55.1% probability of IMRT being cost-effective at a $50,000/QALY willingness to pay. Conclusion: Intensity-modulated radiation therapy was found to be cost-effective, albeit at the upper limits of acceptability. The results, however, are dependent on the assumptions of improved biochemical disease-free survival with fewer patients undergoing subsequent salvage therapy and improved quality of life after the treatment. In the absence of prospective randomized trials, decision analysis can help inform physicians and health policy experts on the cost-effectiveness of emerging technologies.
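The incremental cost-effectiveness ratio reported above follows directly from the expected costs and QALYs; a minimal check using the abstract's own figures:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Figures from the abstract: IMRT ($47,931; 6.27 QALYs) vs 3D-CRT ($21,865; 5.62 QALYs)
ratio = icer(47931, 6.27, 21865, 5.62)  # about $40,100 per QALY
```

Comparing the ratio against a willingness-to-pay threshold (here $50,000/QALY) is what the acceptability-curve analysis formalizes under parameter uncertainty.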

  16. Phonation Quotient in Women: A Measure of Vocal Efficiency Using Three Aerodynamic Instruments.

    Science.gov (United States)

    Joshi, Ashwini; Watts, Christopher R

    2017-03-01

    The purpose of this study was to examine measures of vital capacity and phonation quotient across three age groups in women using three different aerodynamic instruments representing low-tech and high-tech options. This study has a prospective, repeated measures design. Fifteen women in each age group of 25-39 years, 40-59 years, and 60-79 years were assessed using maximum phonation time and vital capacity obtained from three aerodynamic instruments: a handheld analog windmill type spirometer, a handheld digital spirometer, and the Phonatory Aerodynamic System (PAS), Model 6600. Phonation quotient was calculated using vital capacity from each instrument. Analyses of variance were performed to test for main effects of the instruments and age on vital capacity and derived phonation quotient. Pearson product moment correlation was performed to assess measurement reliability (parallel forms) between the instruments. Regression equations, scatterplots, and coefficients of determination were also calculated. Statistically significant differences were found in vital capacity measures for the digital spirometer compared with the windmill-type spirometer and PAS across age groups. Strong positive correlations were present between all three instruments for both vital capacity and derived phonation quotient measurements. Measurement precision for the digital spirometer was lower than the windmill spirometer compared with the PAS. However, all three instruments had strong measurement reliability. Additionally, age did not have an effect on the measurement across instruments. These results are consistent with previous literature reporting data from male speakers and support the use of low-tech options for measurement of basic aerodynamic variables associated with voice production. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
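Phonation quotient, the derived measure in this record, is conventionally defined as vital capacity divided by maximum phonation time; a minimal sketch with hypothetical values:

```python
def phonation_quotient(vital_capacity_ml, max_phonation_time_s):
    """Phonation quotient (mL/s) = vital capacity / maximum phonation time."""
    return vital_capacity_ml / max_phonation_time_s

# Hypothetical values: 3200 mL vital capacity, 18 s maximum phonation time
pq = phonation_quotient(3200.0, 18.0)
```

Because the quotient inherits any error in the vital capacity measurement, agreement between spirometry instruments (as tested in the study) directly affects the derived value.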

  17. The dynamics of ant mosaics in tropical rainforests characterized using the Self-Organizing Map algorithm.

    Science.gov (United States)

    Dejean, Alain; Azémar, Frédéric; Céréghino, Régis; Leponce, Maurice; Corbara, Bruno; Orivel, Jérôme; Compin, Arthur

    2016-08-01

    Ants, the most abundant taxa among canopy-dwelling animals in tropical rainforests, are mostly represented by territorially dominant arboreal ants (TDAs) whose territories are distributed in a mosaic pattern (arboreal ant mosaics). Large TDA colonies regulate insect herbivores, with implications for forestry and agronomy. What generates these mosaics in plant formations, which are dynamic, still needs to be better understood. Therefore, based on empirical research on three Cameroonian tree species (Lophira alata, Ochnaceae; Anthocleista vogelii, Gentianaceae; and Barteria fistulosa, Passifloraceae), we used the Self-Organizing Map (SOM, a neural network) to illustrate the succession of TDAs as their host trees grow and age. The SOM separated the trees by species and by size for L. alata, which can reach 60 m in height and live several centuries. An ontogenic succession of TDAs from sapling to mature trees is shown, and some ecological traits are highlighted for certain TDAs. Also, because the SOM permits the analysis of data containing many zeroes, with no effect of outliers on the overall scatterplot distributions, we obtained ecological information on rare species. Finally, the SOM permitted us to show that functional groups cannot be selected at the genus level, as congeneric species can have very different ecological niches, something particularly true for Crematogaster spp., which include a species specifically associated with B. fistulosa, nondominant species and TDAs. Therefore, the SOM permitted the complex relationships between TDAs and their growing host trees to be analyzed, while also providing new information on the ecological traits of the ant species involved. © 2015 Institute of Zoology, Chinese Academy of Sciences.

  18. The mitochondrial DNA makeup of Romanians: A forensic mtDNA control region database and phylogenetic characterization.

    Science.gov (United States)

    Turchi, Chiara; Stanciu, Florin; Paselli, Giorgia; Buscemi, Loredana; Parson, Walther; Tagliabracci, Adriano

    2016-09-01

    To evaluate the pattern of the Romanian population from a mitochondrial perspective and to establish an appropriate mtDNA forensic database, we generated a high-quality mtDNA control region dataset from 407 Romanian subjects belonging to four major historical regions: Moldavia, Transylvania, Wallachia and Dobruja. The entire control region (CR) was analyzed by Sanger-type sequencing assays and the resulting 306 different haplotypes were classified into haplogroups according to the most updated mtDNA phylogeny. The Romanian gene pool is mainly composed of West Eurasian lineages H (31.7%), U (12.8%), J (10.8%), R (10.1%), T (9.1%), N (8.1%), HV (5.4%), K (3.7%), HV0 (4.2%), with the exceptions of East Asian haplogroup M (3.4%) and African haplogroup L (0.7%). The pattern of mtDNA variation observed in this study indicates that the mitochondrial DNA pool is geographically homogeneous across Romania and that the haplogroup composition reveals signals of admixture of populations of different origin. The PCA scatterplot supported this scenario, with Romania located in the southeastern Europe area, close to Bulgaria and Hungary, and as a borderland with respect to the east Mediterranean and other eastern European countries. High haplotype diversity (0.993) and nucleotide diversity indices (0.00838±0.00426), together with a low random match probability (0.0087), suggest the usefulness of this control region dataset as a forensic database in routine forensic mtDNA analysis and in the investigation of maternal genetic lineages in the Romanian population. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Metrics for Identifying Food Security Status and the Population with Potential to Benefit from Nutrition Interventions in the Lives Saved Tool (LiST).

    Science.gov (United States)

    Jackson, Bianca D; Walker, Neff; Heidkamp, Rebecca

    2017-11-01

    Background: The Lives Saved Tool (LiST) uses the poverty head-count ratio at $1.90/d as a proxy for food security to identify the percentage of the population with the potential to benefit from balanced energy supplementation and complementary feeding (CF) interventions, following the approach used for the Lancet's 2008 series on Maternal and Child Undernutrition. Because much work has been done in the development of food security indicators, a re-evaluation of the use of this indicator was warranted. Objective: The aim was to re-evaluate the use of the poverty head-count ratio at $1.90/d as the food security proxy indicator in LiST. Methods: We carried out a desk review to identify available indicators of food security. We identified 3 indicators and compared them by using scatterplots, Spearman's correlations, and Bland-Altman plot analysis. We generated LiST projections to compare the modeled impact results with the use of the different indicators. Results: There are many food security indicators available, but only 3 additional indicators were identified with the data availability requirements to be used as the food security indicator in LiST. As expected, analyzed food security indicators were significantly positively correlated (P security indicators that were used in the meta-analyses that produced the effect estimates. These are the poverty head-count ratio at $1.90/d for CF interventions and the prevalence of a low body mass index in women of reproductive age for balanced energy supplementation interventions. © 2017 American Society for Nutrition.

  20. PIIKA 2: an expanded, web-based platform for analysis of kinome microarray data.

    Directory of Open Access Journals (Sweden)

    Brett Trost

    Kinome microarrays are comprised of peptides that act as phosphorylation targets for protein kinases. This platform is growing in popularity due to its ability to measure phosphorylation-mediated cellular signaling in a high-throughput manner. While software for analyzing data from DNA microarrays has also been used for kinome arrays, differences between the two technologies and associated biologies previously led us to develop Platform for Intelligent, Integrated Kinome Analysis (PIIKA), a software tool customized for the analysis of data from kinome arrays. Here, we report the development of PIIKA 2, a significantly improved version with new features and improvements in the areas of clustering, statistical analysis, and data visualization. Among other additions to the original PIIKA, PIIKA 2 now allows the user to: evaluate statistically how well groups of samples cluster together; identify sets of peptides that have consistent phosphorylation patterns among groups of samples; perform hierarchical clustering analysis with bootstrapping; view false negative probabilities and positive and negative predictive values for t-tests between pairs of samples; easily assess experimental reproducibility; and visualize the data using volcano plots, scatterplots, and interactive three-dimensional principal component analyses. Also new in PIIKA 2 is a web-based interface, which allows users unfamiliar with command-line tools to easily provide input and download the results. Collectively, the additions and improvements described here enhance both the breadth and depth of analyses available, simplify the user interface, and make the software an even more valuable tool for the analysis of kinome microarray data. Both the web-based and stand-alone versions of PIIKA 2 can be accessed via http://saphire.usask.ca.

  1. PIIKA 2: an expanded, web-based platform for analysis of kinome microarray data.

    Science.gov (United States)

    Trost, Brett; Kindrachuk, Jason; Määttänen, Pekka; Napper, Scott; Kusalik, Anthony

    2013-01-01

    Kinome microarrays are comprised of peptides that act as phosphorylation targets for protein kinases. This platform is growing in popularity due to its ability to measure phosphorylation-mediated cellular signaling in a high-throughput manner. While software for analyzing data from DNA microarrays has also been used for kinome arrays, differences between the two technologies and associated biologies previously led us to develop Platform for Intelligent, Integrated Kinome Analysis (PIIKA), a software tool customized for the analysis of data from kinome arrays. Here, we report the development of PIIKA 2, a significantly improved version with new features and improvements in the areas of clustering, statistical analysis, and data visualization. Among other additions to the original PIIKA, PIIKA 2 now allows the user to: evaluate statistically how well groups of samples cluster together; identify sets of peptides that have consistent phosphorylation patterns among groups of samples; perform hierarchical clustering analysis with bootstrapping; view false negative probabilities and positive and negative predictive values for t-tests between pairs of samples; easily assess experimental reproducibility; and visualize the data using volcano plots, scatterplots, and interactive three-dimensional principal component analyses. Also new in PIIKA 2 is a web-based interface, which allows users unfamiliar with command-line tools to easily provide input and download the results. Collectively, the additions and improvements described here enhance both the breadth and depth of analyses available, simplify the user interface, and make the software an even more valuable tool for the analysis of kinome microarray data. Both the web-based and stand-alone versions of PIIKA 2 can be accessed via http://saphire.usask.ca.

  2. Newly defined landmarks for a three-dimensionally based cephalometric analysis: a retrospective cone-beam computed tomography scan review.

    Science.gov (United States)

    Lee, Moonyoung; Kanavakis, Georgios; Miner, R Matthew

    2015-01-01

    To identify two novel three-dimensional (3D) cephalometric landmarks and create a novel three-dimensionally based anteroposterior skeletal measurement that can be compared with traditional two-dimensional (2D) cephalometric measurements in patients with Class I and Class II skeletal patterns. Full head cone-beam computed tomography (CBCT) scans of 100 patients with all first molars in occlusion were obtained from a private practice. InvivoDental 3D (version 5.1.6, Anatomage, San Jose, Calif) was used to analyze the CBCT scans in the sagittal and axial planes to create new landmarks and a linear 3D analysis (M measurement) based on maxillary and mandibular centroids. Independent samples t-test was used to compare the mean M measurement to traditional 2D cephalometric measurements, ANB and APDI. Interexaminer and intraexaminer reliability were evaluated using 2D and 3D scatterplots. The M measurement, ANB, and APDI could statistically differentiate between patients with Class I and Class II skeletal patterns (P < .001). The M measurement exhibited a correlation coefficient (r) of -0.79 and 0.88 with APDI and ANB, respectively. The overall centroid landmarks and the M measurement combine 2D and 3D methods of imaging; the measurement itself can distinguish between patients with Class I and Class II skeletal patterns and can serve as a potential substitute for ANB and APDI. The new three-dimensionally based landmarks and measurements are reliable, and there is great potential for future use of 3D analyses for diagnosis and research.

  3. Relationship between haemoglobin concentration and packed cell volume in cattle blood samples

    Directory of Open Access Journals (Sweden)

    Paa-Kobina Turkson

    2015-02-01

    A convention that has been adopted in medicine is to estimate haemoglobin (Hb) concentration as a third of packed cell volume (PCV), or vice versa. The present research set out to determine whether a proportional relationship exists between PCV and Hb concentration in cattle blood samples, and to assess the validity of the convention of estimating Hb concentration as a third of PCV. A total of 440 cattle in Ghana from four breeds (Ndama, 110; West African Short Horn, 110; Zebu, 110; and Sanga, 110) were bled for haematological analysis, specifically packed cell volume using the microhaematocrit technique and haemoglobin concentration using the cyanmethaemoglobin method. Means, standard deviations, standard errors of the mean and 95% confidence intervals were calculated. Trendline analyses generated linear regression equations from scatterplots. For all the cattle, a significant and consistent relationship (r = 0.74) was found between Hb concentration and PCV (%). This was expressed as Hb concentration (g/dL) = 0.28 PCV + 3.11. When the Hb concentration was estimated by calculating it as a third of PCV, the relationship was expressed in linear regression as Hb concentration (g/dL) = 0.83 calculated Hb + 3.11. The difference in the means of determined (12.2 g/dL) and calculated (10.9 g/dL) Hb concentrations for all cattle was significant (p < 0.001), whereas the difference in the means of determined Hb and corrected calculated Hb was not significant. In conclusion, a simplified relationship of Hb (g/dL) = (0.3 PCV) + 3 may provide a better estimate of Hb concentration from the PCV of cattle.
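The competing estimates discussed in this record (the one-third convention, the fitted regression, and the simplified rule) can be compared in a few lines. This sketch uses the equations reported in the abstract with a hypothetical PCV value:

```python
def hb_conventional(pcv):
    """Medical convention: Hb (g/dL) estimated as one-third of PCV (%)."""
    return pcv / 3.0

def hb_regression(pcv):
    """Fitted relationship from the study: Hb (g/dL) = 0.28 PCV + 3.11."""
    return 0.28 * pcv + 3.11

def hb_simplified(pcv):
    """Simplified rule proposed in the study: Hb (g/dL) = (0.3 PCV) + 3."""
    return 0.3 * pcv + 3.0

pcv = 33.0  # hypothetical packed cell volume (%)
estimates = (hb_conventional(pcv), hb_regression(pcv), hb_simplified(pcv))
```

At PCV = 33%, the conventional rule gives 11.0 g/dL while the fitted regression gives about 12.35 g/dL, consistent with the study's finding that PCV/3 underestimates Hb.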

  4. An analysis of high fine aerosol loading episodes in north-central Spain in the summer 2013 - Impact of Canadian biomass burning episode and local emissions

    Science.gov (United States)

    Burgos, M. A.; Mateos, D.; Cachorro, V. E.; Toledano, C.; de Frutos, A. M.; Calle, A.; Herguedas, A.; Marcos, J. L.

    2018-07-01

    This work presents an evaluation of a surprising and unusual high-turbidity summer period in 2013 recorded in the north-central Iberian Peninsula (IP). The study is made up of three main pollution episodes characterized by very high aerosol optical depth (AOD) values with the presence of fine aerosol particles: the strongest long-range transport Canadian Biomass Burning (BB) event recorded, one of the longest-lasting European Anthropogenic (A) episodes, and an extremely strong regional BB event. The Canadian BB episode was unusually strong, with maximum values of AOD(440 nm) ∼ 0.8, giving rise to the highest value of clearly established Canadian origin recorded by photometer data in the IP. The anthropogenic pollution episode originating in Europe is mainly a consequence of the strong impact of Canadian BB events over north-central Europe. As regards the local episode, a forest fire in the nature reserve near the Duero River (north-central IP) impacted the population over 200 km away from its source. These three episodes exhibited fingerprints in different aerosol columnar properties retrieved by sun-photometers of the AErosol RObotic NETwork (AERONET), as well as in particle mass surface concentrations, PMx, measured by the European Monitoring and Evaluation Programme (EMEP). Main statistics, time series and scatterplots relate aerosol loads (aerosol optical depth, AOD, and particulate matter, PM) to aerosol size quantities (Ångström exponent and PM ratio). More detailed microphysical/optical properties retrieved by AERONET inversion products are analysed in depth to describe these events: the contribution of fine and coarse particles to AOD and its ratio (the fine mode fraction), volume particle size distribution, fine volume fraction, effective radius, sphericity fraction, single scattering albedo and absorption optical depth. Due to its relevance in climate studies, the aerosol radiative effect has been quantified for the top and bottom of the atmosphere.
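The Ångström exponent used in this record as a particle-size indicator is derived from AOD at two wavelengths; a small sketch (the AOD values are hypothetical, chosen to mimic a fine-particle smoke plume):

```python
import math

def angstrom_exponent(aod_1, aod_2, wavelength_1_nm, wavelength_2_nm):
    """Angstrom exponent: alpha = -ln(AOD1/AOD2) / ln(lambda1/lambda2).
    Values near 2 indicate fine particles (smoke, pollution);
    values near 0 indicate coarse particles (dust, sea salt)."""
    return -math.log(aod_1 / aod_2) / math.log(wavelength_1_nm / wavelength_2_nm)

# Hypothetical AODs at 440 nm and 870 nm during a smoke event
alpha = angstrom_exponent(0.8, 0.22, 440.0, 870.0)
```

A high AOD paired with a high Ångström exponent is the columnar fingerprint of the biomass-burning episodes described above.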

  5. Clinical utility of breath ammonia for evaluation of ammonia physiology in healthy and cirrhotic adults

    Science.gov (United States)

    Spacek, Lisa A; Mudalel, Matthew; Tittel, Frank; Risby, Terence H; Solga, Steven F

    2016-01-01

    Blood ammonia is routinely used in clinical settings to assess systemic ammonia in hepatic encephalopathy and urea cycle disorders. Despite its drawbacks, blood measurement is often used as a comparator in breath studies because it is a standard clinical test. We sought to evaluate sources of measurement error and the potential clinical utility of breath ammonia compared to blood ammonia. We measured breath ammonia in real time by quartz enhanced photoacoustic spectrometry and blood ammonia in 10 healthy and 10 cirrhotic participants. Each participant contributed 5 breath samples and blood for ammonia measurement within 1 h. We calculated the coefficient of variation (CV) for 5 breath ammonia values, reported medians of healthy and cirrhotic participants, and used scatterplots to display breath and blood ammonia. For healthy participants, mean age was 22 years (±4), 70% were men, and body mass index (BMI) was 27 (±5). For cirrhotic participants, mean age was 61 years (±8), 60% were men, and BMI was 31 (±7). Median blood ammonia for healthy participants was within normal range, 10 μmol L−1 (interquartile range (IQR), 3–18) versus 46 μmol L−1 (IQR, 23–66) for cirrhotic participants. Median breath ammonia was 379 pmol mL−1 CO2 (IQR, 265–765) for healthy versus 350 pmol mL−1 CO2 (IQR, 180–1013) for cirrhotic participants. CV was 17 ± 6%. There remains an important unmet need in the evaluation of systemic ammonia, and breath measurement continues to demonstrate promise to fulfill this need. Given the many differences between breath and blood ammonia measurement, we examined biological explanations for our findings in healthy and cirrhotic participants. We conclude that, based upon these preliminary data, breath measurement may offer clinically important information that is not provided by blood ammonia. PMID:26658550
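The within-subject repeatability reported above (CV of the 5 breath samples) is a standard computation; an illustrative sketch with made-up readings:

```python
import math

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return sd / mean * 100.0

# Hypothetical five repeated breath ammonia readings (pmol/mL CO2)
cv = coefficient_of_variation([379.0, 320.0, 410.0, 355.0, 390.0])
```

A low CV across the five samples indicates that a single breath measurement is representative of the participant's session.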

  6. Prevalence of hyperuricemia and relation of serum uric acid with cardiovascular risk factors in a developing country

    Directory of Open Access Journals (Sweden)

    Shamlaye C

    2004-03-01

    Background: The prevalence of hyperuricemia has rarely been investigated in developing countries. The purpose of the present study was to investigate the prevalence of hyperuricemia and the association between uric acid levels and the various cardiovascular risk factors in a developing country with high average blood pressures (the Seychelles, Indian Ocean; population mainly of African origin). Methods: This cross-sectional health examination survey was based on a population random sample from the Seychelles. It included 1011 subjects aged 25 to 64 years. Blood pressure (BP), body mass index (BMI), waist circumference, waist-to-hip ratio, total and HDL cholesterol, serum triglycerides and serum uric acid were measured. Data were analyzed using scatterplot smoothing techniques and gender-specific linear regression models. Results: The prevalence of a serum uric acid level >420 μmol/L in men was 35.2% and the prevalence of a serum uric acid level >360 μmol/L was 8.7% in women. Serum uric acid was strongly related to serum triglycerides in men as well as in women (r = 0.73 in men and r = 0.59 in women). Conclusions: This study shows that the prevalence of hyperuricemia can be high in a developing country such as the Seychelles. Besides alcohol consumption and the use of antihypertensive therapy, mainly diuretics, serum uric acid is markedly associated with parameters of the metabolic syndrome, in particular serum triglycerides. Considering the growing incidence of obesity and metabolic syndrome worldwide and the potential link between hyperuricemia and cardiovascular complications, more emphasis should be put on the evolving prevalence of hyperuricemia in developing countries.

  7. Genetic diversity and relationships among different tomato varieties revealed by EST-SSR markers.

    Science.gov (United States)

    Korir, N K; Diao, W; Tao, R; Li, X; Kayesh, E; Li, A; Zhen, W; Wang, S

    2014-01-08

    The genetic diversity and relationship of 42 tomato varieties sourced from different geographic regions was examined with EST-SSR markers. The genetic diversity was between 0.18 and 0.77, with a mean of 0.49; the polymorphic information content ranged from 0.17 to 0.74, with a mean of 0.45. This indicates a fairly high degree of diversity among these tomato varieties. Based on the cluster analysis using unweighted pair-group method with arithmetic average (UPGMA), all the tomato varieties fell into 5 groups, with no obvious geographical distribution characteristics despite their diverse sources. The principal component analysis (PCA) supported the clustering result; however, relationships among varieties were more complex in the PCA scatterplot than in the UPGMA dendrogram. This information about the genetic relationships between these tomato lines helps distinguish these 42 varieties and will be useful for tomato variety breeding and selection. We confirm that the EST-SSR marker system is useful for studying genetic diversity among tomato varieties. The high degree of polymorphism and the large number of bands obtained per assay shows that SSR is the most informative marker system for tomato genotyping for purposes of rights/protection and for the tomato industry in general. It is recommended that these varieties be subjected to identification using an SSR-based manual cultivar identification diagram strategy or other easy-to-use and referable methods so as to provide a complete set of information concerning genetic relationships and a readily usable means of identifying these varieties.
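Polymorphic information content (PIC), the marker-informativeness measure reported in this record, is commonly computed from allele frequencies with the Botstein et al. (1980) formula; a sketch with hypothetical frequencies:

```python
def pic(freqs):
    """Polymorphic information content of a marker:
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2,
    where p_i are the allele frequencies at the locus."""
    homo = sum(p * p for p in freqs)
    cross = sum(2 * freqs[i] ** 2 * freqs[j] ** 2
                for i in range(len(freqs))
                for j in range(i + 1, len(freqs)))
    return 1.0 - homo - cross

# Hypothetical allele frequencies at one EST-SSR locus
value = pic([0.5, 0.3, 0.2])
```

Markers with PIC above roughly 0.5 are usually considered highly informative, which matches the mean of 0.45 and maximum of 0.74 reported for these EST-SSRs.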

  8. Cerebral blood flow SPET in transient global amnesia with automated ROI analysis by 3DSRT

    Energy Technology Data Exchange (ETDEWEB)

    Takeuchi, Ryo [Division of Nuclear Medicine, Nishi-Kobe Medical Center, Kohjidai 5-7-1, 651-2273, Nishi-ku, Kobe-City, Hyogo (Japan); Matsuda, Hiroshi [Department of Radiology, National Center Hospital for Mental, Nervous and Muscular Disorders, National Center of Neurology and Psychiatry, Tokyo (Japan); Yoshioka, Katsunori [Daiichi Radioisotope Laboratories, Ltd., Tokyo (Japan); Yonekura, Yoshiharu [Biomedical Imaging Research Center, University of Fukui, Fukui (Japan)

    2004-04-01

    The aim of this study was to determine the areas involved in episodes of transient global amnesia (TGA) by calculation of cerebral blood flow (CBF) using 3DSRT, fully automated ROI analysis software which we recently developed. Technetium-99m l,l-ethyl cysteinate dimer single-photon emission tomography (99mTc-ECD SPET) was performed during and after TGA attacks on eight patients (four men and four women; mean study interval, 34 days). The SPET images were anatomically standardized using SPM99 followed by quantification of 318 constant ROIs, grouped into 12 segments (callosomarginal, precentral, central, parietal, angular, temporal, posterior cerebral, pericallosal, lenticular nucleus, thalamus, hippocampus and cerebellum), in each hemisphere to calculate segmental CBF (sCBF) as the area-weighted mean value for each of the respective 12 segments based on the regional CBF in each ROI. Correlation of the intra- and post-episodic sCBF of each of the 12 segments of the eight patients was estimated by scatter-plot graphical analysis and Pearson's correlation test with Fisher's Z-transformation. For the control, 99mTc-ECD SPET was performed on eight subjects (three men and five women) and repeated within 1 month; the correlation between the first and second sCBF values of each of the 12 segments was evaluated in the same way as for patients with TGA. Excellent reproducibility between the two sCBF values was found in all 12 segments of the control subjects. However, a significant correlation between intra- and post-episodic sCBF was not shown in the thalamus or angular segments of TGA patients. The present study was preliminary, but at least suggested that thalamus and angular regions are closely involved in the symptoms of TGA. (orig.)

  9. Cerebral blood flow SPET in transient global amnesia with automated ROI analysis by 3DSRT

    International Nuclear Information System (INIS)

    Takeuchi, Ryo; Matsuda, Hiroshi; Yoshioka, Katsunori; Yonekura, Yoshiharu

    2004-01-01

    The aim of this study was to determine the areas involved in episodes of transient global amnesia (TGA) by calculation of cerebral blood flow (CBF) using 3DSRT, a fully automated ROI analysis software program that we recently developed. Technetium-99m l,l-ethyl cysteinate dimer single-photon emission tomography ({sup 99m}Tc-ECD SPET) was performed during and after TGA attacks on eight patients (four men and four women; mean study interval, 34 days). The SPET images were anatomically standardized using SPM99, followed by quantification of 318 constant ROIs, grouped into 12 segments (callosomarginal, precentral, central, parietal, angular, temporal, posterior cerebral, pericallosal, lenticular nucleus, thalamus, hippocampus and cerebellum), in each hemisphere to calculate segmental CBF (sCBF) as the area-weighted mean value for each of the respective 12 segments based on the regional CBF in each ROI. Correlation of the intra- and post-episodic sCBF of each of the 12 segments of the eight patients was estimated by scatter-plot graphical analysis and Pearson's correlation test with Fisher's Z-transformation. As a control, {sup 99m}Tc-ECD SPET was performed on eight subjects (three men and five women) and repeated within 1 month; the correlation between the first and second sCBF values of each of the 12 segments was evaluated in the same way as for the patients with TGA. Excellent reproducibility between the two sCBF values was found in all 12 segments of the control subjects. However, a significant correlation between intra- and post-episodic sCBF was not shown in the thalamus or angular segments of the TGA patients. The present study was preliminary, but at least suggested that the thalamus and angular regions are closely involved in the symptoms of TGA. (orig.)
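
The statistical step described above (Pearson's correlation followed by Fisher's Z-transformation, which stabilizes the variance of r so that an approximately normal test statistic can be formed) can be sketched as follows. The paired values are illustrative, not the study's sCBF data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def fisher_z(r):
    """Fisher's variance-stabilizing transformation of r."""
    return 0.5 * math.log((1 + r) / (1 - r))

def z_statistic(r, n):
    """Test statistic for H0: rho = 0; approximately standard normal for n > 3."""
    return fisher_z(r) * math.sqrt(n - 3)

# Illustrative paired values for eight subjects (not the study's data):
intra = [42.1, 38.5, 40.2, 44.8, 39.9, 41.3, 43.0, 37.6]
post  = [41.0, 38.9, 40.8, 44.1, 40.2, 41.9, 42.5, 38.0]
r = pearson_r(intra, post)
z = z_statistic(r, len(intra))
```

Comparing z against a standard normal quantile gives the significance test applied per segment.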

  10. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    Science.gov (United States)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using STL, a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS). The data series (daily Poaceae pollen concentrations over the period 2006-2014) was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each component of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed series, and for this reason the procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
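
A minimal sketch of the LOESS smoother that underlies STL: a locally weighted linear fit with tricube weights, evaluated point by point. The span and data below are illustrative, not the paper's settings; on a noiseless straight line the smoother reproduces the input exactly, which makes a convenient sanity check.

```python
import math

def loess_at(x0, xs, ys, span=0.5):
    """Degree-1 LOESS: weighted local linear regression evaluated at x0."""
    n = len(xs)
    k = max(2, int(math.ceil(span * n)))      # number of points in the local window
    dists = sorted(abs(x - x0) for x in xs)
    h = dists[k - 1] or 1e-12                 # bandwidth = distance to k-th nearest point
    # Tricube weights: (1 - (d/h)^3)^3 inside the window, 0 outside
    w = [(1 - min(abs(x - x0) / h, 1.0) ** 3) ** 3 for x in xs]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    sxy = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    b = sxy / sxx if sxx else 0.0
    return my + b * (x0 - mx)

# Illustrative series: a noiseless line, which LOESS should recover exactly
xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]
smooth = [loess_at(x, xs, ys, span=0.4) for x in xs]
```

STL iterates such LOESS fits over seasonal sub-series and the trend; this sketch shows only the core smoothing step.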

  11. Normal development of human brain white matter from infancy to early adulthood: a diffusion tensor imaging study.

    Science.gov (United States)

    Uda, Satoshi; Matsui, Mie; Tanaka, Chiaki; Uematsu, Akiko; Miura, Kayoko; Kawana, Izumi; Noguchi, Kyo

    2015-01-01

    Diffusion tensor imaging (DTI), which measures the magnitude of anisotropy of water diffusion in white matter, has recently been used to visualize and quantify parameters of neural tracts connecting brain regions. In order to investigate the developmental changes and sex and hemispheric differences of neural fibers in normal white matter, we used DTI to examine 52 healthy humans ranging in age from 2 months to 25 years. We extracted the following tracts of interest (TOIs) using the region of interest method: the corpus callosum (CC), cingulum hippocampus (CGH), inferior longitudinal fasciculus (ILF), and superior longitudinal fasciculus (SLF). We measured fractional anisotropy (FA), apparent diffusion coefficient (ADC), axial diffusivity (AD), and radial diffusivity (RD). Approximate values and changes in growth rates of all DTI parameters at each age were calculated and analyzed using LOESS (locally weighted scatterplot smoothing). We found that for all TOIs, FA increased with age, whereas ADC, AD and RD values decreased with age. The turning point of growth rates was at approximately 6 years. FA in the CC was greater than that in the SLF, ILF and CGH. Moreover, FA, ADC and AD of the splenium of the CC (sCC) were greater than in the genu of the CC (gCC), whereas the RD of the sCC was lower than the RD of the gCC. The FA of right-hemisphere TOIs was significantly greater than that of left-hemisphere TOIs. In infants, growth rates of both FA and RD were larger than those of AD. Our data show that developmental patterns differ across TOIs and that myelination accompanies the development of white matter, expressed mainly as an increase in FA together with a decrease in RD. These findings clarify the long-term normal developmental characteristics of white matter microstructure from infancy to early adulthood. © 2015 S. Karger AG, Basel.

  12. Tweeting PP: an analysis of the 2015-2016 Planned Parenthood controversy on Twitter.

    Science.gov (United States)

    Han, Leo; Han, Lisa; Darney, Blair; Rodriguez, Maria I

    2017-12-01

    We analyzed Twitter tweets and Twitter-provided user data to give geographical, temporal and content insight into the use of social media in the Planned Parenthood video controversy. We randomly sampled the full Twitter repository (also known as the Firehose) (n=30,000) for tweets containing the phrase "planned parenthood" as well as the group-defining hashtags "#defundpp" and "#standwithpp." We used demographic content provided by the user and word analysis to generate charts, maps and timeline visualizations. Chi-square and t tests were used to compare differences in content, statistical references and dissemination strategies. From July 14, 2015, to January 30, 2016, 1,364,131 and 795,791 tweets contained "#defundpp" and "#standwithpp," respectively. Geographically, #defundpp and #standwithpp were disproportionately distributed to the US South and West, respectively. Word analysis found that early tweets predominantly used "sensational" words and that the proportion of "political" and "call to action" words increased over time. Scatterplots revealed that #standwithpp tweets were clustered and episodic compared to #defundpp. #standwithpp users were more likely to be female [odds ratio (OR) 2.2, confidence interval (CI) 2.0-2.4] and to have fewer followers (median 544 vs. 1578). Social media analysis can be used to characterize and understand the content, tempo and location of abortion-related messages in today's public spheres. Further research may inform proabortion efforts in terms of how information can be more effectively conveyed to the public. This study has implications for how the medical community interfaces with the public with regard to abortion. It highlights how social media are actively exploited instruments for information and message dissemination. Researchers, providers and advocates should be monitoring social media and addressing the public through these modern channels. Copyright © 2017 Elsevier Inc. All rights reserved.
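
The odds ratio with a 95% confidence interval reported above follows the standard Wald construction on the log-odds scale. The 2x2 counts below are hypothetical, chosen only so the point estimate comes out to 2.2; they are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (female/male by hashtag), not the study's data:
or_, lo, hi = odds_ratio_ci(1200, 800, 600, 880)
```

With large cells the interval is narrow, mirroring the tight 2.0-2.4 interval quoted in the abstract.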

  13. Hippocampal volumes are important predictors for memory function in elderly women

    Directory of Open Access Journals (Sweden)

    Adolfsdottir Steinunn

    2009-08-01

    Background Normal aging involves a decline in cognitive function that has been shown to correlate with volumetric change in the hippocampus, and with genetic variability in the APOE gene. In the present study we utilized 3D MR imaging, genetic analysis and assessment of verbal memory function to investigate relationships between these factors in a sample of 170 healthy volunteers (age range 46–77 years). Methods Brain morphometric analysis was performed with the automated segmentation workflow implemented in FreeSurfer. Genetic analysis of the APOE genotype was determined with polymerase chain reaction (PCR) on DNA from whole blood. All individuals were subjected to extensive neuropsychological testing, including the California Verbal Learning Test-II (CVLT). To obtain robust and easily interpretable relationships between explanatory variables and verbal memory function, we applied the recent method of conditional inference trees in addition to scatterplot matrices and simple pairwise linear least-squares regression analysis. Results APOE genotype had no significant impact on the CVLT results (scores on long delay free recall, CVLT-LD) or the ICV-normalized hippocampal volumes. Hippocampal volumes were found to decrease with age, and a right-larger-than-left hippocampal asymmetry was also found. These findings are in accordance with previous studies. CVLT-LD score was shown to correlate with hippocampal volume. Multivariate conditional inference analysis showed that gender and left hippocampal volume largely dominated predictive values for CVLT-LD scores in our sample. Left hippocampal volume dominated predictive values for females but not for males. APOE genotype did not alter the model significantly, and age only partly influenced the results. Conclusion Gender and left hippocampal volume are the main predictors of verbal memory function in normal aging. APOE genotype did not affect the results in any part of our analysis.

  14. Tweeting PP: an analysis of the 2015–2016 Planned Parenthood controversy on Twitter

    Science.gov (United States)

    Han, Leo; Han, Lisa; Darney, Blair; Rodriguez, Maria I.

    2018-01-01

    Objectives We analyzed Twitter tweets and Twitter-provided user data to give geographical, temporal and content insight into the use of social media in the Planned Parenthood video controversy. Methodology We randomly sampled the full Twitter repository (also known as the Firehose) (n=30,000) for tweets containing the phrase “planned parenthood” as well as the group-defining hashtags “#defundpp” and “#standwithpp.” We used demographic content provided by the user and word analysis to generate charts, maps and timeline visualizations. Chi-square and t tests were used to compare differences in content, statistical references and dissemination strategies. Results From July 14, 2015, to January 30, 2016, 1,364,131 and 795,791 tweets contained “#defundpp” and “#standwithpp,” respectively. Geographically, #defundpp and #standwithpp were disproportionately distributed to the US South and West, respectively. Word analysis found that early tweets predominantly used “sensational” words and that the proportion of “political” and “call to action” words increased over time. Scatterplots revealed that #standwithpp tweets were clustered and episodic compared to #defundpp. #standwithpp users were more likely to be female [odds ratio (OR) 2.2, confidence interval (CI) 2.0–2.4] and to have fewer followers (median 544 vs. 1578). Conclusions Social media analysis can be used to characterize and understand the content, tempo and location of abortion-related messages in today’s public spheres. Further research may inform proabortion efforts in terms of how information can be more effectively conveyed to the public. Implications This study has implications for how the medical community interfaces with the public with regard to abortion. It highlights how social media are actively exploited instruments for information and message dissemination. Researchers, providers and advocates should be monitoring social media and addressing the public through these modern channels.

  15. Weak cation magnetic separation technology and MALDI-TOF-MS in screening serum protein markers in primary type I osteoporosis.

    Science.gov (United States)

    Shi, X L; Li, C W; Liang, B C; He, K H; Li, X Y

    2015-11-30

    We investigated weak cation magnetic separation technology and matrix-assisted laser desorption ionization-time of flight-mass spectrometry (MALDI-TOF-MS) in screening serum protein markers of primary type I osteoporosis. We selected 16 postmenopausal women with osteoporosis and nine postmenopausal women as controls to find a new method for screening biomarkers and establishing a diagnostic model for primary type I osteoporosis. Serum samples were obtained from controls and patients. Serum protein was extracted with the WCX protein chip system; protein fingerprints were examined using MALDI-TOF-MS. Data preprocessing and model construction were handled by the ProteinChip system. The diagnostic models were established using a genetic algorithm combined with a support vector machine (SVM). The SVM model with the highest Youden index was selected. Combinations with the highest accuracy in distinguishing different groups of data were selected as potential biomarkers. From the two groups of serum proteins, 123 cumulative MS protein peaks were selected. Protein peaks showing significant intensity differences in the 16 postmenopausal women with osteoporosis were screened. Comparison of Youden indices across the four groups of protein peaks showed that the highest-scoring peaks had mass-to-charge ratios of 8909.047, 8690.658, 13745.48, and 15114.52. A diagnosis model was established with these four markers as the candidates, and the model specificity and sensitivity were found to be 100%. The two groups of specimens were clearly distinguishable in the scatterplot of the SVM results. We established a diagnosis model and provided a new serological method for screening and diagnosis of osteoporosis with high sensitivity and specificity.
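
The Youden index used above to rank candidate models is sensitivity + specificity - 1. A minimal sketch with the study's group sizes (16 patients, 9 controls) and hypothetical classification counts for a weaker peak:

```python
def youden_index(tp, fn, tn, fp):
    """Youden's J statistic: sensitivity + specificity - 1 (ranges from -1 to 1)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

# A model that classifies all 16 patients and 9 controls correctly has J = 1,
# matching the reported 100% sensitivity and specificity:
j_perfect = youden_index(16, 0, 9, 0)

# Hypothetical weaker peak: 12/16 patients and 6/9 controls classified correctly
j_weak = youden_index(12, 4, 6, 3)
```

Ranking candidate peaks by J, as described, selects the combination that best balances the two error rates.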

  16. Situational judgment test as an additional tool in a medical admission test: an observational investigation.

    Science.gov (United States)

    Luschin-Ebengreuth, Marion; Dimai, Hans P; Ithaler, Daniel; Neges, Heide M; Reibnegger, Gilbert

    2015-03-14

    In the framework of medical university admission procedures, the assessment of non-cognitive abilities is increasingly demanded. As a tool for assessing personal qualities or the ability to handle theoretical social constructs in complex situations, the Situational Judgment Test (SJT), among other measurement instruments, is discussed in the literature. This study focuses on the development and the results of the SJT as part of the admission test for the study of human medicine and dentistry at one medical university in Austria. Observational investigation focusing on the results of the SJT; 4741 applicants were included in the study. To yield comparable results for the different test parts, "relative scores" for each test part were calculated. Performance differences between women and men in the various test parts were analyzed using effect sizes based on comparison of mean values (Cohen's d). The associations between the relative scores achieved in the various test parts were assessed by computing pairwise linear correlation coefficients between all test parts and visualized by bivariate scatterplots. Among successful candidates, men consistently outperformed women. Men performed better in physics and mathematics; women performed better in the SJT part. The least discriminatory test part was the SJT. There was a strong correlation between biology and chemistry and moderate correlations among the other test parts, except for the SJT. The relative scores were not symmetrically distributed. The low correlation between the SJT and the other test parts points to the low cognitive loading of the SJT. Adding the SJT to the admission test, in order to cover more than only knowledge and understanding of the natural sciences among the applicants, has been quite successful.
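
Cohen's d, the effect size used above for the gender comparisons, is the difference in group means divided by the pooled standard deviation. A sketch with hypothetical relative scores (not the study's data):

```python
import math

def cohens_d(x, y):
    """Cohen's d using the pooled standard deviation of two independent samples."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

# Hypothetical relative scores for one test part (not the study's data):
men   = [0.62, 0.71, 0.58, 0.66, 0.74, 0.69]
women = [0.57, 0.64, 0.55, 0.61, 0.66, 0.59]
d_example = cohens_d(men, women)
```

By the usual rule of thumb, |d| around 0.2 is a small effect, 0.5 medium, and 0.8 large.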

  17. Validation of the IHE Cohort Model of Type 2 Diabetes and the impact of choice of macrovascular risk equations.

    Directory of Open Access Journals (Sweden)

    Adam Lundqvist

    Health-economic models of diabetes are complex since the disease is chronic and progressive and there are many diabetic complications. External validation of these models helps build trust and satisfies demands from decision makers. We evaluated the external validity of the IHE Cohort Model of Type 2 Diabetes; the impact of using alternative macrovascular risk equations; and compared the results to those from microsimulation models. The external validity of the model was analysed from 12 clinical trials and observational studies by comparing 167 predicted microvascular, macrovascular and mortality outcomes to the observed study outcomes. Concordance was examined using visual inspection of scatterplots and regression-based analysis, where an intercept of 0 and a slope of 1 indicate perfect concordance. Additional subgroup analyses were conducted on 'dependent' vs. 'independent' endpoints and microvascular vs. macrovascular vs. mortality endpoints. Visual inspection indicates that the model predicts outcomes well. The UKPDS-OM1 equations showed almost perfect concordance with observed values (slope 0.996), whereas the Swedish NDR (0.952) and UKPDS-OM2 (0.899) equations had a slight tendency to underestimate. The R² values were uniformly high (>0.96). There were no major differences between 'dependent' and 'independent' outcomes, nor for microvascular and mortality outcomes. Macrovascular outcomes tended to be underestimated, most so for UKPDS-OM2 and least so for the NDR risk equations. External validation indicates that the IHE Cohort Model of Type 2 Diabetes has predictive accuracy in line with microsimulation models, indicating that the trade-off in accuracy from using cohort simulation might not be that large. While the choice of risk equations was seen to matter, each was associated with generally reasonable results, indicating that the choice must reflect the specifics of the application. The largest variation was observed for macrovascular outcomes. There, NDR
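
The regression-based concordance check described above (intercept 0 and slope 1 indicating perfect agreement) reduces to an ordinary least-squares fit of predicted on observed outcomes. A sketch with made-up values in which predictions systematically underestimate by 5%, producing a slope of 0.95:

```python
def concordance_line(observed, predicted):
    """OLS fit of predicted = intercept + slope * observed.
    Perfect concordance corresponds to intercept 0 and slope 1."""
    n = len(observed)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    sxx = sum((o - mo) ** 2 for o in observed)
    sxy = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    slope = sxy / sxx
    intercept = mp - slope * mo
    return intercept, slope

# Made-up outcomes: predictions underestimate observations by 5%
observed  = [1.0, 2.0, 3.0, 4.0]
predicted = [0.95, 1.9, 2.85, 3.8]
intercept, slope = concordance_line(observed, predicted)
```

A slope below 1 with a near-zero intercept is exactly the "slight tendency to underestimate" pattern the abstract reports for some risk equations.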

  18. Alterations in plasma phosphorus, red cell 2,3-diphosphoglycerate and P50 following open heart surgery.

    Science.gov (United States)

    Hasan, R A; Sarnaik, A P; Meert, K L; Dabbagh, S; Simpson, P; Makimi, M

    1994-12-01

    To evaluate changes in and the correlation between plasma phosphorus, red cell 2,3-diphosphoglycerate (DPG) and adenosine triphosphate (ATP), and P50 in children following heart surgery. Prospective, observational study with factorial design. A pediatric intensive care unit in a university hospital. Twenty children undergoing open heart surgery for congenital heart defects. None. Red cell 2,3-DPG and ATP, P50, plasma phosphorus, and arterial lactate were obtained before and at 1, 8, 16, 24, 48, and 72 hours after surgery. The amount of intravenous fluid and glucose administered, and age of blood utilized were documented. Variables were analyzed by repeated measure analysis of variance followed by paired t-tests. To investigate the relationship between variables at each time point, scatterplot matrices and correlation coefficients were obtained. There was a reduction in plasma phosphorus, red cell 2,3-DPG, and P50 and an increase in arterial lactate at 1, 8, 16, 24, 48, and 72 hours after surgery. Red cell 2,3-DPG correlated with P50 at 1, 8 and 16 hours. The decrease in the plasma phosphorus correlated with the amounts of intravenous fluid and glucose administered on the day of surgery and on the first and second postoperative days. The age of the blood utilized correlated with the decrease in red cell 2,3-DPG on the day of surgery. Reduction in red cell 2,3-DPG, P50, and plasma phosphorus occurs after open heart surgery in children. These changes can potentially contribute to impaired oxygen utilization in the postoperative period, when adequacy of tissue oxygenation is critical.
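
The before/after comparisons above rest on paired t-tests. A self-contained sketch of the test statistic; the 2,3-DPG-like values are hypothetical, not the study's measurements:

```python
import math

def paired_t(before, after):
    """Paired t statistic for before/after measurements; returns (t, df) with df = n - 1."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n), n - 1

# Hypothetical paired values for eight patients (not the study's data):
before = [14.2, 13.8, 15.1, 14.7, 13.5, 14.9, 15.3, 14.0]
after  = [11.9, 12.1, 12.8, 12.4, 11.5, 12.6, 13.0, 11.8]
t, df = paired_t(before, after)
```

A large negative t here reflects a consistent post-operative reduction, the same direction of change the abstract reports.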

  19. The correlation between preoperative volumetry and real graft weight: comparison of two volumetry programs.

    Science.gov (United States)

    Mussin, Nadiar; Sumo, Marco; Lee, Kwang-Woong; Choi, YoungRok; Choi, Jin Yong; Ahn, Sung-Woo; Yoon, Kyung Chul; Kim, Hyo-Sin; Hong, Suk Kyun; Yi, Nam-Joon; Suh, Kyung-Suk

    2017-04-01

    Liver volumetry is a vital component in living donor liver transplantation for determining an adequate graft volume that meets the metabolic demands of the recipient and at the same time ensures donor safety. Most institutions use preoperative contrast-enhanced CT image-based software programs to estimate graft volume. The objective of this study was to evaluate the accuracy of 2 liver volumetry programs (Rapidia vs. Dr. Liver) in preoperative right liver graft estimation compared with real graft weight. Data from 215 consecutive right lobe living donors between October 2013 and August 2015 were retrospectively reviewed. One hundred seven patients were enrolled in the Rapidia group and 108 patients were included in the Dr. Liver group. Estimated graft volumes generated by both software programs were compared with real graft weight measured during surgery, and further classified into minimal difference (≤15%) and big difference (>15%). Correlation coefficients and degree of difference were determined. Linear regressions were calculated and results depicted as scatterplots. Minimal difference was observed in 69.4% of cases from the Dr. Liver group and big difference was seen in 44.9% of cases from the Rapidia group (P = 0.035). Linear regression analysis showed positive correlation in both groups (P < 0.01). However, the correlation coefficient was better for the Dr. Liver group (R² = 0.719) than for the Rapidia group (R² = 0.688). Dr. Liver can predict right liver graft size more accurately and faster than Rapidia, and can facilitate preoperative planning in living donor liver transplantation.

  20. Genetic perspective of uniparental mitochondrial DNA landscape on the Punjabi population, Pakistan.

    Science.gov (United States)

    Bhatti, Shahzad; Abbas, Sana; Aslamkhan, Muhammad; Attimonelli, Marcella; Trinidad, Magali Segundo; Aydin, Hikmet Hakan; de Souza, Erica Martinha Silva; Gonzalez, Gerardo Rodriguez

    2017-07-26

    To investigate the uniparental genetic structure of the Punjabi population from the mtDNA aspect and to set up an appropriate mtDNA forensic database, we studied maternally unrelated Punjabi (N = 100) subjects from two caste groups (i.e. Arain and Gujar) belonging to the territory of Punjab. The complete control region was elucidated by Sanger sequencing and the resulting 58 different haplotypes were assigned to appropriate haplogroups according to the most recently updated mtDNA phylogeny. We found a homogeneous dispersal of Eurasian haplogroups across the Punjab Province and a strong association with European populations. Punjabi castes are primarily a composite of substantial South Asian, East Asian and West Eurasian lineages. Moreover, for the first time we have defined the new sub-haplogroup M52b1, characterized by 16223 T, 16275 G and 16438 A, in the Gujar caste. The vast array of mtDNA variants displayed in this study suggests that the haplogroup composition reflects extensive genetic conglomeration, population admixture and demographic expansion of diverse origin, whereas the matrilineal gene pool was phylogeographically homogeneous across the Punjab. This was further supported by a PCA scatterplot, in which the Punjabi population clustered with South Asian populations. Finally, the high power of discrimination (0.8819) and low random match probability (0.0085%) support the value of the mtDNA control region dataset as a forensic database, and as a basis for deeper insight into the genetic ancestry of the contemporary matrilineal phylogeny.
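
The forensic summary statistics quoted above (random match probability and power of discrimination) can be estimated directly from haplotype counts. This sketch uses the unbiased two-draw estimator; the counts are hypothetical, not the study's 58 observed haplotypes:

```python
def random_match_probability(counts):
    """Probability that two individuals drawn without replacement share a haplotype
    (unbiased estimator from observed haplotype counts)."""
    n = sum(counts)
    return sum(c * (c - 1) for c in counts) / (n * (n - 1))

def power_of_discrimination(counts):
    """Complement of the random match probability."""
    return 1.0 - random_match_probability(counts)

# Hypothetical haplotype counts for a sample of 10 individuals:
counts = [3, 2, 1, 1, 1, 1, 1]
rmp = random_match_probability(counts)
pd = power_of_discrimination(counts)
```

The more haplotypes that are unique in the sample, the closer the power of discrimination gets to 1, which is why 58 haplotypes among 100 subjects yields the high value reported.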

  1. TripAdvisor^{N-D}: A Tourism-Inspired High-Dimensional Space Exploration Framework with Overview and Detail.

    Science.gov (United States)

    Nam, Julia EunJu; Mueller, Klaus

    2013-02-01

    Gaining a true appreciation of high-dimensional space remains difficult since all of the existing high-dimensional space exploration techniques serialize the space travel in some way. This is not so foreign to us since we, when traveling, also experience the world in a serial fashion. But we typically have access to a map to help with positioning, orientation, navigation, and trip planning. Here, we propose a multivariate data exploration tool that compares high-dimensional space navigation with a sightseeing trip. It decomposes this activity into five major tasks: 1) Identify the sights: use a map to identify the sights of interest and their location; 2) Plan the trip: connect the sights of interest along a specifiable path; 3) Go on the trip: travel along the route; 4) Hop off the bus: experience the location, look around, zoom into detail; and 5) Orient and localize: regain bearings in the map. We describe intuitive and interactive tools for all of these tasks, both global navigation within the map and local exploration of the data distributions. For the latter, we describe a polygonal touchpad interface which enables users to smoothly tilt the projection plane in high-dimensional space to produce multivariate scatterplots that best convey the data relationships under investigation. Motion parallax and illustrative motion trails aid in the perception of these transient patterns. We describe the use of our system within two applications: 1) the exploratory discovery of data configurations that best fit a personal preference in the presence of tradeoffs and 2) interactive cluster analysis via cluster sculpting in N-D.
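
Producing a 2-D scatterplot from a projection plane in N-D space amounts to projecting each point onto an orthonormal basis of that plane; "tilting" the plane means changing the basis vectors. A minimal sketch of this idea (the vectors and points are illustrative, not the paper's interface):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def orthonormal_pair(a, b):
    """Gram-Schmidt: turn two direction vectors into an orthonormal plane basis."""
    u = normalize(a)
    d = sum(ui * bi for ui, bi in zip(u, b))
    w = [bi - d * ui for ui, bi in zip(u, b)]   # remove the component along u
    return u, normalize(w)

def project(points, u, v):
    """Project N-D points onto the plane spanned by u, v -> 2-D scatterplot coords."""
    return [(sum(pi * ui for pi, ui in zip(p, u)),
             sum(pi * vi for pi, vi in zip(p, v))) for p in points]

# Illustrative 4-D points and a "tilted" plane mixing dimensions 0/3 and 1/2:
pts = [[1.0, 2.0, 3.0, 4.0], [0.5, 0.0, 1.5, 2.0]]
u, v = orthonormal_pair([1, 0, 0, 1], [0, 1, 1, 0])
coords = project(pts, u, v)
```

Smoothly interpolating the basis vectors between frames gives the continuous plane tilting, and hence the motion parallax, that the paper exploits.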

  2. Moving towards Hyper-Resolution Hydrologic Modeling

    Science.gov (United States)

    Rouf, T.; Maggioni, V.; Houser, P.; Mei, Y.

    2017-12-01

    Developing a predictive capability for terrestrial hydrology across landscapes, with water, energy and nutrients as the drivers of these dynamic systems, faces the challenge of scaling meter-scale process understanding to practical modeling scales. Hyper-resolution land surface modeling can provide a framework for addressing science questions that we are not able to answer with coarse modeling scales. In this study, we develop a hyper-resolution forcing dataset from coarser resolution products using a physically based downscaling approach. These downscaling techniques rely on correlations with landscape variables, such as topography, roughness, and land cover. A proof-of-concept has been implemented over the Oklahoma domain, where high-resolution observations are available for validation purposes. Hourly NLDAS (North America Land Data Assimilation System) forcing data (i.e., near-surface air temperature, pressure, and humidity) have been downscaled to 500m resolution over the study area for 2015-present. Results show that correlation coefficients between the downscaled temperature dataset and ground observations are consistently higher than those between the NLDAS temperature data at their native resolution and ground observations. Not only are the correlation coefficients higher, but the deviation around the 1:1 line in the density scatterplots is also smaller for the downscaled dataset than for the original one with respect to the ground observations. These results are encouraging, as they demonstrate that the 500m temperature dataset agrees well with the ground information and can be adopted to force the land surface model for soil moisture estimation. The study has been expanded to wind speed and direction, incident longwave and shortwave radiation, pressure, and precipitation. Precipitation is well known to vary dramatically with elevation and orography. Therefore, we are pursuing a downscaling technique based on both topographical and vegetation

  3. Season of sampling and season of birth influence serotonin metabolite levels in human cerebrospinal fluid.

    Directory of Open Access Journals (Sweden)

    Jurjen J Luykx

    BACKGROUND: Animal studies have revealed seasonal patterns in cerebrospinal fluid (CSF) monoamine (MA) turnover. In humans, no study had systematically assessed seasonal patterns in CSF MA turnover in a large set of healthy adults. METHODOLOGY/PRINCIPAL FINDINGS: Standardized amounts of CSF were prospectively collected from 223 healthy individuals undergoing spinal anesthesia for minor surgical procedures. The metabolites of serotonin (5-hydroxyindoleacetic acid, 5-HIAA), dopamine (homovanillic acid, HVA) and norepinephrine (3-methoxy-4-hydroxyphenylglycol, MHPG) were measured using high-performance liquid chromatography (HPLC). Concentration measurements by sampling and birth dates were modeled using a non-linear quantile cosine function and locally weighted scatterplot smoothing (LOESS, span = 0.75). The cosine model showed a unimodal season-of-sampling 5-HIAA zenith in April and a nadir in October (p-value of the amplitude of the cosine = 0.00050), with predicted maximum (PCmax) and minimum (PCmin) concentrations of 173 and 108 nmol/L, respectively, implying a 60% increase from trough to peak. Season of birth showed a unimodal 5-HIAA zenith in May and a nadir in November (p = 0.00339; PCmax = 172 and PCmin = 126). The non-parametric LOESS showed a similar pattern to the cosine in both the season-of-sampling and season-of-birth models, validating the cosine model. A final model including both sampling and birth months demonstrated that both sampling and birth seasons were independent predictors of 5-HIAA concentrations. CONCLUSION: In subjects without mental illness, 5-HT turnover shows circannual variation by season of sampling as well as season of birth, with peaks in spring and troughs in fall.
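
For evenly spaced samples covering one period, a cosine (cosinor-style) seasonal model can be fitted in closed form by Fourier projection onto the first harmonic. This is a simplified sketch of the idea, not the paper's non-linear quantile cosine model, and the monthly series below is synthetic:

```python
import math

def cosine_fit(values):
    """Closed-form fit of y ~ m + A*cos(2*pi*k/n - phi) for n evenly spaced samples
    covering exactly one period (Fourier projection onto the first harmonic)."""
    n = len(values)
    m = sum(values) / n                               # mesor (mean level)
    a = (2.0 / n) * sum(y * math.cos(2 * math.pi * k / n) for k, y in enumerate(values))
    b = (2.0 / n) * sum(y * math.sin(2 * math.pi * k / n) for k, y in enumerate(values))
    amplitude = math.hypot(a, b)
    phi = math.atan2(b, a)                            # phase of the zenith, radians
    peak_index = (phi % (2 * math.pi)) / (2 * math.pi) * n
    return m, amplitude, peak_index

# Synthetic monthly 5-HIAA-like series (month 0 = January) peaking in month 3 (April):
series = [140 + 30 * math.cos(2 * math.pi * (k - 3) / 12) for k in range(12)]
mesor, amp, peak = cosine_fit(series)
```

The recovered peak index locates the zenith month, analogous to the April zenith the cosine model identified in the sampling data.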

  4. Characterizing China's energy consumption with selective economic factors and energy-resource endowment: a spatial econometric approach

    Science.gov (United States)

    Jiang, Lei; Ji, Minhe; Bai, Ling

    2015-06-01

    Coupled with intricate regional interactions, the provincial disparity of energy-resource endowment and other economic conditions in China have created spatially complex energy consumption patterns that require analyses beyond the traditional ones. To distill the spatial effect out of the resource and economic factors on China's energy consumption, this study recast the traditional econometric model in a spatial context. Several analytic steps were taken to reveal different aspects of the issue. Per capita energy consumption (AVEC) at the provincial level was first mapped to reveal spatial clusters of high energy consumption located in either well-developed or energy-resource-rich regions. This visual spatial autocorrelation pattern of AVEC was quantitatively tested to confirm its existence among Chinese provinces. A Moran scatterplot was employed to further display a relatively centralized trend occurring in those provinces that had parallel AVEC, revealing a spatial structure with attraction among high-high or low-low regions and repellency among high-low or low-high regions. By comparing the ordinary least squares (OLS) model with its spatial econometric counterparts, a spatial error model (SEM) was selected to analyze the impact of major economic determinants on AVEC. While the analytic results revealed a significant positive correlation between AVEC and economic development, other determinants showed some intricate influence patterns. Provinces endowed with rich energy reserves were inclined to consume much more energy than those without, and provinces that changed their economic structure by increasing the proportion of secondary and tertiary industries also tended to consume more energy. Both situations seem to underpin the fact that these provinces were largely trapped in economies supported by technologies of low energy efficiency during the period, while other parts of the country were rapidly modernized by adopting advanced
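
The spatial autocorrelation test that the Moran scatterplot visualizes is global Moran's I: positive values indicate that similar AVEC values cluster (high-high, low-low), negative values indicate repellency (high-low, low-high). A minimal sketch with a four-region rook-adjacency weight matrix and toy values:

```python
def morans_i(values, weights):
    """Global Moran's I for values x_i with spatial weight matrix w[i][j] (w[i][i] = 0)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Four regions on a line, rook (shared-border) neighbours:
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
gradient = [1.0, 2.0, 3.0, 4.0]      # smooth spatial trend -> clustering, I > 0
alternating = [1.0, 4.0, 1.0, 4.0]   # checkerboard pattern  -> repellency, I < 0
i_grad = morans_i(gradient, w)
i_alt = morans_i(alternating, w)
```

Each point of a Moran scatterplot pairs a region's deviation with the weighted average deviation of its neighbours; the slope of that scatter is Moran's I.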

  5. Psychosocial family factors and glycemic control among children aged 1-15 years with type 1 diabetes: a population-based survey

    Directory of Open Access Journals (Sweden)

    Haugstvedt Anne

    2011-12-01

    Full Text Available Abstract Background Being the parents of children with diabetes is demanding. Jay Belsky's determinants of parenting model emphasizes personal psychological resources, the characteristics of the child, and contextual sources such as parents' work, marital relations and social network support as important determinants of parenting. To better understand the factors influencing parental functioning among parents of children with type 1 diabetes, we aimed to investigate associations between the children's glycated hemoglobin (HbA1c) and (1) variables related to the parents' psychological and contextual resources, and (2) frequency of blood glucose measurement as a marker for diabetes-related parenting behavior. Methods Mothers (n = 103) and fathers (n = 97) of 115 children younger than 16 years old participated in a population-based survey. The questionnaire comprised the Life Orientation Test, the Oslo 3-item Social Support Scale, a single question regarding perceived social limitation because of the child's diabetes, the Relationship Satisfaction Scale, and demographic and clinical variables. We investigated associations by using regression analysis. Related to the second aim, hypoglycemic events, child age, diabetes duration, insulin regimen and comorbid diseases were included as covariates. Results The mean HbA1c was 8.1%, and 29% had HbA1c ≤ 7.5%. In multiple regression analysis, lower HbA1c was associated with higher education and stronger perceptions of social limitation among the mothers. A higher frequency of blood glucose measurement was significantly associated with lower HbA1c in bivariate analysis. Higher child age was significantly associated with higher HbA1c in both bivariate and multivariate analysis. A scatterplot indicated this association to be linear. Conclusions Most families do not reach recommended treatment goals for their child with type 1 diabetes. Concerning contextual sources of stress and support, the families who

  6. Is BMI a valid measure of obesity in postmenopausal women?

    Science.gov (United States)

    Banack, Hailey R; Wactawski-Wende, Jean; Hovey, Kathleen M; Stokes, Andrew

    2018-03-01

    Body mass index (BMI) is a widely used indicator of obesity status in clinical settings and population health research. However, there are concerns about the validity of BMI as a measure of obesity in postmenopausal women. Unlike BMI, which is an indirect measure of obesity and does not distinguish lean from fat mass, dual-energy x-ray absorptiometry (DXA) provides a direct measure of body fat and is considered a gold standard of adiposity measurement. The goal of this study is to examine the validity of using BMI to identify obesity in postmenopausal women relative to total body fat percent measured by DXA scan. Data from 1,329 postmenopausal women participating in the Buffalo OsteoPerio Study were used in this analysis. At baseline, women ranged in age from 53 to 85 years. Obesity was defined as BMI ≥ 30 kg/m² and body fat percent (BF%) greater than 35%, 38%, or 40%. We calculated sensitivity, specificity, positive predictive value, and negative predictive value to evaluate the validity of BMI-defined obesity relative to BF%. We further explored the validity of BMI relative to BF% using graphical tools, such as scatterplots and receiver-operating characteristic curves. Youden's J index was used to determine the empirical optimal BMI cut-point for each level of BF%-defined obesity. The sensitivity of BMI-defined obesity was 32.4% for 35% body fat, 44.6% for 38% body fat, and 55.2% for 40% body fat. Corresponding specificity values were 99.3%, 97.1%, and 94.6%, respectively. The empirical optimal BMI cut-point to define obesity is 24.9 kg/m² for 35% BF, 26.49 kg/m² for 38% BF, and 27.05 kg/m² for 40% BF according to Youden's index. Results demonstrate that a BMI cut-point of 30 kg/m² does not appear to be an appropriate indicator of true obesity status in postmenopausal women. Empirical estimates of the validity of BMI from this study may be used by other investigators to account for BMI-related misclassification in older women.
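    Youden's J, used above to pick optimal BMI cut-points, maximizes sensitivity + specificity - 1 over candidate thresholds. The sketch below uses made-up BMI values and obesity labels, not the study's data, and assumes the decision rule "positive if marker ≥ cutpoint" with both classes present.

```python
def youden_optimal_cutpoint(marker, truth):
    """Scan candidate cutpoints and return the one maximizing
    J = sensitivity + specificity - 1 (rule: positive when marker >= cutpoint)."""
    best_j, best_c = -1.0, None
    for c in sorted(set(marker)):
        pred = [m >= c for m in marker]
        tp = sum(p and t for p, t in zip(pred, truth))
        tn = sum(not p and not t for p, t in zip(pred, truth))
        fp = sum(p and not t for p, t in zip(pred, truth))
        fn = sum(not p and t for p, t in zip(pred, truth))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Hypothetical data: BMI values with obesity defined by a body-fat criterion.
bmi = [20, 22, 24, 26, 28, 30, 32, 34]
obese = [False, False, False, False, True, True, True, True]
cut, j = youden_optimal_cutpoint(bmi, obese)  # perfectly separable toy case
```

    On real data J peaks below 1; the abstract's finding is that the J-optimal BMI cutpoint falls near 25-27 kg/m², well below the conventional 30 kg/m².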

  7. Vcs.js - Visualization Control System for the Web

    Science.gov (United States)

    Chaudhary, A.; Lipsa, D.; Doutriaux, C.; Beezley, J. D.; Williams, D. N.; Fries, S.; Harris, M. B.

    2016-12-01

    VCS is a general-purpose visualization library, optimized for climate data, which is part of the UV-CDAT system. It provides a Python API for drawing 2D plots such as line plots, scatter plots, Taylor diagrams, data colored by scalar values, vector glyphs, isocontours and map projections. VCS is based on the VTK library. Vcs.js is the corresponding JavaScript API, designed to be as close as possible to the original VCS Python API and to provide similar functionality for the Web. Vcs.js includes additional functionality compared with VCS. This additional API is used to introspect data files available on the server and variables available in a data file. Vcs.js can display plots in the browser window. It always works with a server that reads a data file, extracts variables from the file and subsets the data. From this point, two alternate paths are possible. First, the system can render the data on the server using VCS, producing an image which is sent to the browser to be displayed. This path works for all plot types and produces a reference image identical to the images produced by VCS. This path uses the VTK-Web library. As an optimization, usable in certain conditions, a second path is possible. Data is packed and sent to the browser, which uses a JavaScript plotting library, such as plotly, to display the data. Plots that work well in the browser are line plots and scatter plots for any data, and many other plot types for small data and supported grid types. As web technology matures, more plots could be supported for rendering in the browser. Rendering can be done either on the client or on the server, and we expect that the best place to render will change depending on the available web technology, data transfer costs, server management costs and value provided to users. We intend to provide a flexible solution that allows for both client- and server-side rendering and a meaningful way to choose between the two.
    We provide a web-based user interface called v

  8. Seasonality, water quality trends and biological responses in four streams in the Cairngorm Mountains, Scotland

    Directory of Open Access Journals (Sweden)

    C. Soulsby

    2001-01-01

    Full Text Available The chemical composition and invertebrate communities found in four streams in the Cairngorms, Scotland, were monitored between 1985 and 1997. Stream waters were mildly acidic (mean pH ca. 6.5), with low alkalinity (mean acid neutralising capacity varying from 35-117 meq l-1) and low ionic strength. Subtle differences in the chemistry of each stream were reflected in their invertebrate faunas. Strong seasonality in water chemistry occurred, with the most acidic, low-alkalinity waters observed during the winter and early spring. This was particularly marked during snowmelt between January and April. In contrast, summer flows were usually groundwater dominated and characterised by higher alkalinity and higher concentrations of most other weathering-derived solutes. Seasonality was also clear in the invertebrate data, with Canonical Correspondence Analysis (CCA) separating seasonal samples along axes related to water temperature and discharge characteristics. Inter-annual hydrological and chemical differences were marked, particularly with respect to the winter period. Invertebrate communities found in each of the streams also varied from year to year, with spring communities significantly more variable. Hydrochemical trends over the study period were analysed using a seasonal Kendall test, LOcally WEighted Scatterplot Smoothing (LOWESS) and graphical techniques. These indicated that a reduction in sulphate concentrations in stream water is occurring, consistent with declining levels of atmospheric deposition. This may be matched by increases in pH and declining calcium concentrations, though available evidence is inconclusive. Other parameters, such as chloride, total organic carbon and zinc, reveal somewhat random patterns, probably reflecting irregular variations in climatic factors and/or atmospheric deposition. Previous studies have shown that the stream invertebrate communities have remained stable over this period (i.e. no significant linear trends).
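    The LOWESS smoothing used for the hydrochemical trends fits a weighted straight line around each point. Below is a minimal NumPy sketch of that idea, with tricube weights and no robustness iterations; full implementations (e.g. in statsmodels or R) add iterative reweighting and are what a study like this would actually use.

```python
import numpy as np

def lowess_smooth(x, y, frac=0.5):
    """Minimal LOWESS: for each point, fit a weighted straight line to the
    nearest frac*n neighbors using tricube weights (no robustness iterations)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(frac * n))
    out = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        h = np.sort(d)[k - 1]            # bandwidth = distance to k-th neighbor
        if h == 0:
            h = 1.0
        w = np.clip(1 - (d / h) ** 3, 0, None) ** 3   # tricube kernel
        xm, ym = np.average(x, weights=w), np.average(y, weights=w)
        b = np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)
        out[i] = ym + b * (x[i] - xm)    # evaluate local line at x[i]
    return out
```

    Because each local fit is linear, a noiseless linear series is reproduced exactly; on noisy solute concentrations the smoother traces the long-term trend the seasonal Kendall test then quantifies.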

  9. WE-FG-206-12: Enhanced Laws Textures: A Potential MRI Surrogate Marker of Hepatic Fibrosis in a Murine Model

    International Nuclear Information System (INIS)

    Li, B; Yu, H; Jara, H; Soto, J; Anderson, S

    2016-01-01

    Purpose: To compare enhanced Laws texture derived from parametric proton density (PD) maps to other MRI-based surrogate markers (T2, PD, ADC) in assessing degrees of liver fibrosis in a murine model of hepatic fibrosis using an 11.7T scanner. Methods: This animal study was IACUC approved. Fourteen mice were divided into control (n=1) and experimental (n=13) groups. The latter were fed a DDC-supplemented diet to induce hepatic fibrosis. Liver specimens were imaged using an 11.7T scanner; the parametric PD, T2, and ADC maps were generated from spin-echo pulsed field gradient and multi-echo spin-echo acquisitions. Enhanced Laws texture analysis was applied to the PD maps: first, hepatic blood vessels and liver margins were segmented/removed using an automated dual-clustering algorithm; secondly, an optimal thresholding algorithm was applied to reduce the partial volume artifact; next, the mean and standard deviation were corrected to minimize grayscale variation across images; finally, Laws texture was extracted. Degree of fibrosis was assessed by an experienced pathologist and by digital image analysis (%Area Fibrosis). Scatterplots comparing enhanced Laws texture, T2, PD, and ADC values to degrees of fibrosis were generated and correlation coefficients were calculated. Unenhanced Laws texture was also compared to assess the effectiveness of the proposed enhancements. Results: Hepatic fibrosis and the enhanced Laws texture were strongly correlated, with higher %Area Fibrosis associated with higher Laws texture (r=0.89). Only a moderate correlation was detected between %Area Fibrosis and unenhanced Laws texture (r=0.70). A strong correlation also existed between ADC and %Area Fibrosis (r=0.86). Moderate correlations were seen between %Area Fibrosis and PD (r=0.65) and T2 (r=0.66). Conclusions: Higher degrees of hepatic fibrosis are associated with increased Laws texture. The proposed enhancements improve the accuracy of Laws texture.
    Enhanced Laws texture features are more accurate than PD and T2 in

  10. [Hyperspectral Estimation of Apple Tree Canopy LAI Based on SVM and RF Regression].

    Science.gov (United States)

    Han, Zhao-ying; Zhu, Xi-cun; Fang, Xian-yi; Wang, Zhuo-yuan; Wang, Ling; Zhao, Geng-Xing; Jiang, Yuan-mao

    2016-03-01

    Leaf area index (LAI) is a dynamic index of crop population size. Hyperspectral technology can be used to estimate apple canopy LAI rapidly and nondestructively, providing a reference for monitoring tree growth and estimating yield. Red Fuji apple trees at the full fruit-bearing stage were the research objects. The canopy spectral reflectance and LAI values of ninety apple trees were measured with an ASD Fieldspec3 spectrometer and an LAI-2200 in thirty orchards over two consecutive years in the Qixia research area of Shandong Province. The optimal vegetation indices were selected by correlation analysis of the original spectral reflectance and vegetation indices. Models predicting LAI were built with two multivariate regression methods: support vector machine (SVM) and random forest (RF). The new vegetation indices GNDVI527, ND-VI676, RVI682, FD-NVI656 and GRVI517 and the two previous main vegetation indices, NDVI670 and NDVI705, are in accordance with LAI. In the RF regression model, the calibration-set determination coefficient C-R2 of 0.920 and validation-set determination coefficient V-R2 of 0.889 are higher than those of the SVM regression model by 0.045 and 0.033, respectively. The calibration-set root mean square error C-RMSE of 0.249 and the validation-set root mean square error V-RMSE of 0.236 are lower than those of the SVM regression model by 0.054 and 0.058, respectively. The relative predictive deviations of the calibration set (C-RPD) and validation set (V-RPD) reached 3.363 and 2.520, higher than those of the SVM regression model by 0.598 and 0.262, respectively. The slopes of the measured-versus-predicted scatterplot trend lines for the calibration and validation sets (C-S and V-S) are close to 1. The estimation result of the RF regression model is better than that of the SVM model, and the RF regression model can be used to estimate the LAI of Red Fuji apple trees in the full fruit period.
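    The accuracy measures quoted in this abstract (R², RMSE, RPD) reduce to short formulas over observed and predicted LAI. In this sketch RPD is taken as the sample standard deviation of the observations divided by the RMSE, a common chemometrics convention that may differ slightly from the paper's exact definition; the arrays are illustrative.

```python
import math

def r2(obs, pred):
    """Coefficient of determination: 1 - SSE/SST."""
    m = sum(obs) / len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - m) ** 2 for o in obs)
    return 1 - sse / sst

def rmse(obs, pred):
    """Root mean square error of the predictions."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def rpd(obs, pred):
    """Residual predictive deviation: sample SD of observations over RMSE."""
    m = sum(obs) / len(obs)
    sd = math.sqrt(sum((o - m) ** 2 for o in obs) / (len(obs) - 1))
    return sd / rmse(obs, pred)
```

    An RPD above 2.5 (as the RF model's 3.363 and 2.520 here) is conventionally read as a model suitable for quantitative prediction.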

  11. SeeSway - A free web-based system for analysing and exploring standing balance data.

    Science.gov (United States)

    Clark, Ross A; Pua, Yong-Hao

    2018-06-01

    Computerised posturography can be used to assess standing balance, and can predict poor functional outcomes in many clinical populations. A key limitation is the disparate signal filtering and analysis techniques, with many methods requiring custom computer programs. This paper discusses the creation of a freely available web-based software program, SeeSway (www.rehabtools.org/seesway), which was designed to provide powerful tools for pre-processing, analysing and visualising standing balance data in an easy-to-use and platform-independent website. SeeSway links an interactive web platform with file upload capability to software systems including LabVIEW, Matlab, Python and R to perform the data filtering, analysis and visualisation of standing balance data. Input data can consist of any signal that comprises an anterior-posterior and medial-lateral coordinate trace, such as center of pressure or mass displacement. This allows it to be used with systems including criterion-reference commercial force platforms and three-dimensional motion analysis, smartphones, accelerometers and low-cost technology such as the Nintendo Wii Balance Board and Microsoft Kinect. Filtering options include Butterworth, weighted and unweighted moving average, and discrete wavelet transforms. Analysis methods include standard techniques such as path length, amplitude, and root mean square in addition to less common but potentially promising methods such as sample entropy, detrended fluctuation analysis and multiresolution wavelet analysis. These data are visualised using scalograms, which chart the change in frequency content over time, scatterplots and standard line charts. This provides the user with a detailed understanding of their results, and how their different pre-processing and analysis method selections affect their findings. An example of the data analysis techniques is provided in the paper, with graphical representation of how advanced analysis methods can better discriminate
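    Two of the standard sway measures listed, path length and root-mean-square amplitude, reduce to short formulas over the centre-of-pressure trace. The coordinates below are a hypothetical toy trace, not SeeSway's code or data.

```python
import math

def path_length(trace):
    """Total distance travelled by the centre of pressure (sum of segment lengths)."""
    return sum(math.dist(a, b) for a, b in zip(trace, trace[1:]))

def rms_amplitude(trace):
    """Root-mean-square radial distance of the COP from its mean position."""
    mx = sum(p[0] for p in trace) / len(trace)
    my = sum(p[1] for p in trace) / len(trace)
    return math.sqrt(sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in trace)
                     / len(trace))

# Toy anterior-posterior / medial-lateral trace (metres).
cop = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

    A longer path length or larger RMS amplitude over a fixed trial duration generally indicates greater postural sway.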

  12. A Comparison of Self-Reported and Objective Physical Activity Measures in Young Australian Women.

    Science.gov (United States)

    Hartley, Stefanie; Garland, Suzanne; Young, Elisa; Bennell, Kim Louise; Tay, Ilona; Gorelik, Alexandra; Wark, John Dennis

    2015-01-01

    The evidence for beneficial effects of recommended levels of physical activity is overwhelming. However, 70% of Australians fail to meet these levels. In particular, physical activity participation by women falls sharply between ages 16 to 25 years. Further information about physical activity measures in young women is needed. Self-administered questionnaires are often used to measure physical activity given their ease of application, but known limitations, including recall bias, compromise the accuracy of data. Alternatives such as objective measures are commonly used to overcome this problem, but are more costly and time consuming. To compare the output between the Modified Active Australia Survey (MAAS), the International Physical Activity Questionnaire (IPAQ), and an objective physical activity measure-the SenseWear Armband (SWA)-to evaluate the test-retest reliability of the MAAS and to determine the acceptability of the SWA among young women. Young women from Victoria, Australia, aged 18 to 25 years who had participated in previous studies via Facebook advertising were recruited. Participants completed the two physical activity questionnaires online, immediately before and after wearing the armband for 7 consecutive days. Data from the SWA was blocked into 10-minute activity times. Follow-up IPAQ, MAAS, and SWA data were analyzed by comparing the total continuous and categorical activity scores, while concurrent validity of IPAQ and MAAS were analyzed by comparing follow-up scores. Test-retest reliability of MAAS was analyzed by comparing MAAS total physical activity scores at baseline and follow-up. Participants provided feedback in the follow-up questionnaire about their experience of wearing the armband to determine acceptability of the SWA. Data analyses included graphical (ie, Bland-Altman plot, scatterplot) and analytical (ie, canonical correlation, kappa statistic) methods to determine agreement between MAAS, IPAQ, and SWA data. A total of 58

  13. A robotic system for 18F-FMISO PET-guided intratumoral pO2 measurements.

    Science.gov (United States)

    Chang, Jenghwa; Wen, Bixiu; Kazanzides, Peter; Zanzonico, Pat; Finn, Ronald D; Fichtinger, Gabor; Ling, C Clifton

    2009-11-01

    An image-guided robotic system was used to measure the oxygen tension (pO2) in rodent tumor xenografts using interstitial probes guided by tumor hypoxia PET images. Rats with approximately 1 cm diameter tumors were anesthetized and immobilized in a custom-fabricated whole-body mold. Imaging was performed using a dedicated small-animal PET scanner (R4 or Focus 120 microPET) approximately 2 h after the injection of the hypoxia tracer 18F-fluoromisonidazole (18F-FMISO). The coordinate systems of the robot and PET were registered based on fiducial markers in the rodent bed visible on the PET images. Guided by the 3D microPET image set, measurements were performed at various locations in the tumor and compared to the corresponding 18F-FMISO image intensity at the respective measurement points. Experiments were performed on four tumor-bearing rats with 4 (86), 3 (80), 7 (162), and 8 (235) measurement tracks (points) for each experiment. The 18F-FMISO image intensities were inversely correlated with the measured pO2, with a Pearson coefficient ranging from -0.14 to -0.97 for the 22 measurement tracks. The cumulative scatterplots of pO2 versus image intensity yielded a hyperbolic relationship, with correlation coefficients of 0.52, 0.48, 0.64, and 0.73, respectively, for the four tumors. In conclusion, PET image-guided pO2 measurement is feasible with this robot system and, more generally, this system will permit point-by-point comparison of physiological probe measurements and image voxel values as a means of validating molecularly targeted radiotracers. 
Although the overall data fitting suggested that 18F-FMISO may be an effective hypoxia marker, the use of static 18F-FMISO PET postinjection scans to guide radiotherapy might be problematic due to the observed high variation in some individual data pairs from the fitted curve, indicating potential temporal fluctuation of oxygen tension in individual voxels or possible suboptimal imaging time postadministration of hypoxia

  14. Discovery of dominant and dormant genes from expression data using a novel generalization of SNR for multi-class problems

    Directory of Open Access Journals (Sweden)

    Chung I-Fang

    2008-10-01

    Full Text Available Abstract Background The Signal-to-Noise Ratio (SNR) is often used for identification of biomarkers for two-class problems, and no formal and useful generalization of SNR is available for multiclass problems. We propose innovative generalizations of SNR for multiclass cancer discrimination through introduction of two indices, the Gene Dominant Index and Gene Dormant Index (GDIs). These two indices lead to the concepts of dominant and dormant genes with biological significance. We use these indices to develop methodologies for discovery of dominant and dormant biomarkers with interesting biological significance. The dominancy and dormancy of the identified biomarkers and their excellent discriminating power are also demonstrated pictorially using scatterplots of individual genes and a 2-D Sammon's projection of the selected set of genes. Using information from the literature, we have shown that the GDI-based method can identify dominant and dormant genes that play significant roles in cancer biology. These biomarkers are also used to design diagnostic prediction systems. Results and discussion To evaluate the effectiveness of the GDIs, we have used four multiclass cancer data sets (Small Round Blue Cell Tumors, Leukemia, Central Nervous System Tumors, and Lung Cancer). For each data set we demonstrate that the new indices can find biologically meaningful genes that can act as biomarkers. We then use six machine learning tools: Nearest Neighbor Classifier (NNC), Nearest Mean Classifier (NMC), Support Vector Machine (SVM) classifier with linear kernel, and SVM classifier with Gaussian kernel, where both SVMs are used in conjunction with one-vs-all (OVA) and one-vs-one (OVO) strategies. We found GDIs to be very effective in identifying biomarkers with strong class-specific signatures.
    With all six tools and for all data sets we could achieve better or comparable prediction accuracies, usually with fewer marker genes than results reported in the literature using the
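    The two-class SNR the abstract generalizes is the standard Golub ratio of mean difference to summed standard deviations. The multiclass "dominant" score below, the minimum pairwise SNR of one class against every other, is only one plausible reading of the paper's GDI, labeled as an assumption; the paper's exact formula may differ.

```python
import statistics as st

def snr(a, b):
    """Golub signal-to-noise ratio between two expression groups."""
    return (st.mean(a) - st.mean(b)) / (st.stdev(a) + st.stdev(b))

def dominant_score(groups, k):
    """Hypothetical dominancy score for class k: the smallest pairwise SNR of
    class k against every other class (large when k is uniformly up-regulated).
    This is an illustrative stand-in for the paper's GDI, not its definition."""
    return min(snr(groups[k], g) for i, g in enumerate(groups) if i != k)
```

    A gene whose minimum pairwise SNR for some class is large is expressed distinctly in that class against all others, which is the intuition behind a class-specific "dominant" biomarker.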

  15. Evaluation of the MODIS Aerosol Retrievals over Ocean and Land during CLAMS.

    Science.gov (United States)

    Levy, R. C.; Remer, L. A.; Martins, J. V.; Kaufman, Y. J.; Plana-Fattori, A.; Redemann, J.; Wenny, B.

    2005-04-01

    The Chesapeake Lighthouse Aircraft Measurements for Satellites (CLAMS) experiment took place from 10 July to 2 August 2001 in a combined ocean-land region that included the Chesapeake Lighthouse [Clouds and the Earth's Radiant Energy System (CERES) Ocean Validation Experiment (COVE)] and the Wallops Flight Facility (WFF), both along coastal Virginia. This experiment was designed mainly for validating instruments and algorithms aboard the Terra satellite platform, including the Moderate Resolution Imaging Spectroradiometer (MODIS). Over the ocean, MODIS retrieved aerosol optical depths (AODs) at seven wavelengths and an estimate of the aerosol size distribution. Over the land, MODIS retrieved AOD at three wavelengths plus qualitative estimates of the aerosol size. Temporally coincident measurements of aerosol properties were made with a variety of sun photometers from ground sites and airborne sites just above the surface. The set of sun photometers provided unprecedented spectral coverage from visible (VIS) to the solar near-infrared (NIR) and infrared (IR) wavelengths. In this study, AOD and aerosol size retrieved from MODIS is compared with similar measurements from the sun photometers. Over the nearby ocean, the MODIS AOD in the VIS and NIR correlated well with sun-photometer measurements, nearly fitting a one-to-one line on a scatterplot. As one moves from ocean to land, there is a pronounced discontinuity of the MODIS AOD, where MODIS compares poorly to the sun-photometer measurements. Especially in the blue wavelength, MODIS AOD is too high in clean aerosol conditions and too low under larger aerosol loadings. Using the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) radiative code to perform atmospheric correction, the authors find inconsistency in the surface albedo assumptions used by the MODIS lookup tables. It is demonstrated how the high bias at low aerosol loadings can be corrected. By using updated urban/industrial aerosol

  16. Local indicators of geocoding accuracy (LIGA): theory and application

    Directory of Open Access Journals (Sweden)

    Jacquez Geoffrey M

    2009-10-01

    Full Text Available Abstract Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot.
Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density

  17. MRI-based flow measurements in the main pulmonary artery to detect pulmonary arterial hypertension in patients with cystic fibrosis; MRT-basierte Flussmessungen im Truncus pulmonalis zur Detektion einer pulmonal-arteriellen Hypertonie in Patienten mit zystischer Fibrose

    Energy Technology Data Exchange (ETDEWEB)

    Wolf, T.; Anjorin, A.; Abolmaali, N. [TU Dresden (Germany). OncoRay, Biologisches und Molekulares Imaging; Posselt, H. [Frankfurt Univ. (Germany). Klinik fuer Paediatrie I, Muskoviszidoseambulanz; Smaczny, C. [Frankfurt Univ. (Germany). Medizinische Klinik I, Pneumologie und Allergologie; Vogl, T.J. [Frankfurt Univ. (Germany). Inst. fuer Diagnostische und Interventionelle Radiologie

    2009-02-15

    Development of pulmonary arterial hypertension (PH) is a common problem in the course of patients suffering from cystic fibrosis (CF). This study was performed to evaluate MRI-based flow measurements (MRvenc; Velocity ENCoding) to detect signs of an evolving PH in patients suffering from CF. 48 patients (median age: 16 years, range: 10 - 40 years, 25 female) suffering from CF of different severity (mean FEV1: 74 % ± 23, mean Shwachman score: 63 ± 10) were examined using MRI-based flow measurements of the main pulmonary artery (MPA). Phase-contrast FLASH sequences (TR: 9.6 ms, TE: 2.5 ms, bandwidth: 1395 Hertz/pixel) were utilized. Results were compared to an age- and sex-matched group of 48 healthy subjects. Analyzed flow data were: heart frequency (HF), cardiac output (HZV), acceleration time (AT), proportional acceleration time related to heart rate (ATr), mean systolic blood velocity (MFG), peak velocity (Peak), maximum flow (Flussmax), mean flow (Flussmitt) and distensibility (Dist). The comparison of means revealed significant differences only for MFG, Flussmax and Dist, but overlap was marked. However, using a scatter-plot of AT versus MFG, it was possible to identify five CF patients demonstrating definite signs of PH: AT = 81 ms ± 14, MFG = 46 ± 11 cm/s, Dist = 41 % ± 7. These CF patients were the most severely affected in the investigated group, and two of them were listed for complete heart and lung transplantation. The comparison of this subgroup and the remaining CF patients revealed a highly significant difference for the AT (p = 0.000001) without overlap. Screening of CF patients for the development of PH using MRvenc of the MPA is not possible. In later stages of disease, the quantification of AT, MFG and Dist in the MPA may be useful for the detection, follow-up and control of therapy of PH. MRvenc of the MPA completes the MRI-based follow-up of lung parenchyma damage in patients suffering from CF

  18. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Xike Zhang

    2018-05-01

    five models. The scatterplots of the predicted results of the six models versus the original daily LST data series show that the hybrid EEMD-LSTM model is superior to the other five models. It is concluded that the proposed hybrid EEMD-LSTM model in this study is a suitable tool for temperature forecasting.

  19. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition.

    Science.gov (United States)

    Zhang, Xike; Zhang, Qiuwen; Zhang, Gui; Nie, Zhiping; Gui, Zifan; Que, Huafei

    2018-05-21

    models. The scatterplots of the predicted results of the six models versus the original daily LST data series show that the hybrid EEMD-LSTM model is superior to the other five models. It is concluded that the proposed hybrid EEMD-LSTM model in this study is a suitable tool for temperature forecasting.

  20. A statistical learning framework for groundwater nitrate models of the Central Valley, California, USA

    Science.gov (United States)

    Nolan, Bernard T.; Fienen, Michael N.; Lorenz, David L.

    2015-01-01

    We used a statistical learning framework to evaluate the ability of three machine-learning methods to predict nitrate concentration in shallow groundwater of the Central Valley, California: boosted regression trees (BRT), artificial neural networks (ANN), and Bayesian networks (BN). Machine learning methods can learn complex patterns in the data but because of overfitting may not generalize well to new data. The statistical learning framework involves cross-validation (CV) training and testing data and a separate hold-out data set for model evaluation, with the goal of optimizing predictive performance by controlling for model overfit. The order of prediction performance according to both CV testing R2 and that for the hold-out data set was BRT > BN > ANN. For each method we identified two models based on CV testing results: that with maximum testing R2 and a version with R2 within one standard error of the maximum (the 1SE model). The former yielded CV training R2 values of 0.94–1.0. Cross-validation testing R2 values indicate predictive performance, and these were 0.22–0.39 for the maximum R2 models and 0.19–0.36 for the 1SE models. Evaluation with hold-out data suggested that the 1SE BRT and ANN models predicted better for an independent data set compared with the maximum R2 versions, which is relevant to extrapolation by mapping. Scatterplots of predicted vs. observed hold-out data obtained for final models helped identify prediction bias, which was fairly pronounced for ANN and BN. Lastly, the models were compared with multiple linear regression (MLR) and a previous random forest regression (RFR) model. Whereas BRT results were comparable to RFR, MLR had low hold-out R2 (0.07) and explained less than half the variation in the training data. Spatial patterns of predictions by the final, 1SE BRT model agreed reasonably well with previously observed patterns of nitrate occurrence in groundwater of the Central Valley.
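    The three-way split described above (cross-validation on a development set plus a separate hold-out set) can be sketched with scikit-learn. GradientBoostingRegressor stands in for BRT, and the synthetic regression data are purely illustrative, not the Central Valley nitrate data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic data with a known signal plus noise (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.3, size=300)

# Hold out an evaluation set; cross-validate on the remainder to control overfit.
X_dev, X_hold, y_dev, y_hold = train_test_split(X, y, test_size=0.25,
                                                random_state=0)
brt = GradientBoostingRegressor(random_state=0)           # BRT stand-in
cv_r2 = cross_val_score(brt, X_dev, y_dev, cv=5, scoring="r2")  # CV testing R^2
hold_r2 = brt.fit(X_dev, y_dev).score(X_hold, y_hold)           # hold-out R^2
```

    The gap between CV training and CV testing R² (0.94-1.0 vs. 0.22-0.39 in the study) is the overfitting signal this framework is designed to surface; the hold-out score is the final, untouched check.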

  1. MRI-based flow measurements in the main pulmonary artery to detect pulmonary arterial hypertension in patients with cystic fibrosis

    International Nuclear Information System (INIS)

    Wolf, T.; Anjorin, A.; Abolmaali, N.; Posselt, H.; Smaczny, C.; Vogl, T.J.

    2009-01-01

    Development of pulmonary arterial hypertension (PH) is a common problem in the course of disease in patients suffering from cystic fibrosis (CF). This study was performed to evaluate MRI-based flow measurements (MRvenc; Velocity ENCoding) for detecting signs of evolving PH in patients suffering from CF. 48 patients (median age: 16 years, range: 10 - 40 years, 25 female) suffering from CF of differing severity (mean FEV1: 74% ± 23, mean Shwachman score: 63 ± 10) were examined using MRI-based flow measurements of the main pulmonary artery (MPA). Phase-contrast FLASH sequences (TR: 9.6 ms, TE: 2.5 ms, bandwidth: 1395 Hertz/pixel) were utilized. Results were compared to an age- and sex-matched group of 48 healthy subjects. Analyzed flow data were: heart rate (HF), cardiac output (HZV), acceleration time (AT), proportional acceleration time related to heart rate (ATr), mean systolic blood velocity (MFG), peak velocity (Peak), maximum flow (Flussmax), mean flow (Flussmitt) and distensibility (Dist). The comparison of means revealed significant differences only for MFG, Flussmax and Dist, but overlap was marked. However, using a scatterplot of AT versus MFG, it was possible to identify five CF patients demonstrating definite signs of PH: AT = 81 ms ± 14, MFG = 46 ± 11 cm/s, Dist = 41% ± 7. These CF patients were the most severely affected in the investigated group; two of them were listed for combined heart and lung transplantation. The comparison of this subgroup with the remaining CF patients revealed a highly significant difference in AT (p = 0.000001) without overlap. Screening of CF patients for the development of PH using MRvenc of the MPA is not possible. In later stages of disease, the quantification of AT, MFG and Dist in the MPA may be useful for the detection, follow-up and control of therapy of PH. MRvenc of the MPA complements the MRI-based follow-up of lung parenchyma damage in patients suffering from CF. (orig.)
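
The scatterplot-based identification of the PH subgroup amounts to flagging patients in the low-AT/low-MFG corner of the AT-versus-MFG plane. A minimal sketch with hypothetical measurements and illustrative (not clinically validated) cut-offs:

```python
import numpy as np

# Hypothetical flow measurements for a handful of patients; the
# cut-offs below are illustrative only, not clinically validated.
at_ms = np.array([145, 132, 81, 90, 150, 78, 70])   # acceleration time (ms)
mfg_cm_s = np.array([72, 68, 46, 50, 75, 44, 38])   # mean systolic velocity (cm/s)

def flag_suspected_ph(at, mfg, at_cut=100.0, mfg_cut=55.0):
    """Mimic reading the AT-vs-MFG scatterplot: patients falling in the
    low-AT / low-MFG corner are flagged for further work-up."""
    return (at < at_cut) & (mfg < mfg_cut)

flags = flag_suspected_ph(at_ms, mfg_cm_s)
print(int(flags.sum()))
```

Requiring both coordinates to be low mirrors the abstract's point that the univariate group means overlapped, while the two-dimensional scatterplot separated the subgroup cleanly.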

  2. FlooDSuM - a decision support methodology for assisting local authorities in flood situations

    Science.gov (United States)

    Schwanbeck, Jan; Weingartner, Rolf

    2014-05-01

    Decision making in flood situations is a difficult task, especially in small to medium-sized mountain catchments (30 - 500 km2), which are usually characterized by complex topography, high drainage density and quick runoff response to rainfall events. Operating hydrological models driven by numerical weather prediction systems, which have lead-times of several hours up to a few days, would be beneficial in this case, as time for prevention could be gained. However, the spatial and quantitative accuracy of such meteorological forecasts usually decreases with increasing lead-time. In addition, the sensitivity of rainfall-runoff models to inaccuracies in estimates of areal rainfall increases with decreasing catchment size. Accordingly, decisions on flood alerts should ideally be based on areal rainfall from high-resolution, short-term numerical weather prediction, nowcasts or even real-time measurements, transformed into runoff by a hydrological model. In order to benefit from the best possible rainfall data while retaining enough time for alerting and prevention, the hydrological model should be fast and easily applicable by decision makers within local authorities themselves. The proposed decision support methodology FlooDSuM (Flood Decision Support Methodology) aims to meet those requirements. Applying FlooDSuM, a few successive binary decisions of increasing complexity are processed following a flow-chart-like structure. Prepared data and straightforwardly applicable tools are provided for each of these decisions. Maps showing the current flood disposition are used for the first step. As long as the danger of flooding cannot be excluded, increasingly complex and time-consuming methods are applied. For the final decision, a set of scatterplots relating areal precipitation to peak flow is provided. These plots also take further decisive parameters into account, such as storm duration, the distribution of rainfall intensity in time, as well as the

  3. Uncertainty and Sensitivity Analysis Results Obtained in the 1996 Performance Assessment for the Waste Isolation Pilot Plant

    Energy Technology Data Exchange (ETDEWEB)

    Bean, J.E.; Berglund, J.W.; Davis, F.J.; Economy, K.; Garner, J.W.; Helton, J.C.; Johnson, J.D.; MacKinnon, R.J.; Miller, J.; O'Brien, D.G.; Ramsey, J.L.; Schreiber, J.D.; Shinta, A.; Smith, L.N.; Stockman, C.; Stoelzel, D.M.; Vaughn, P.

    1998-09-01

    The Waste Isolation Pilot Plant (WIPP) is located in southeastern New Mexico and is being developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. A detailed performance assessment (PA) for the WIPP was carried out in 1996 and supports an application by the DOE to the U.S. Environmental Protection Agency (EPA) for the certification of the WIPP for the disposal of TRU waste. The 1996 WIPP PA uses a computational structure that maintains a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the many possible disruptions that could occur over the 10,000 yr regulatory period that applies to the WIPP and subjective uncertainty arising from the imprecision with which many of the quantities required in the PA are known. Important parts of this structure are (1) the use of Latin hypercube sampling to incorporate the effects of subjective uncertainty, (2) the use of Monte Carlo (i.e., random) sampling to incorporate the effects of stochastic uncertainty, and (3) the efficient use of the necessarily limited number of mechanistic calculations that can be performed to support the analysis. The use of Latin hypercube sampling generates a mapping from imprecisely known analysis inputs to analysis outcomes of interest that provides both a display of the uncertainty in analysis outcomes (i.e., uncertainty analysis) and a basis for investigating the effects of individual inputs on these outcomes (i.e., sensitivity analysis). The sensitivity analysis procedures used in the PA include examination of scatterplots, stepwise regression analysis, and partial correlation analysis. Uncertainty and sensitivity analysis results obtained as part of the 1996 WIPP PA are presented and discussed. Specific topics considered include two-phase flow in the vicinity of the repository, radionuclide release from the repository, fluid flow and radionuclide
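
Latin hypercube sampling, the first element of the computational structure described above, can be sketched without any PA-specific machinery: each of the d input dimensions is split into n equal-probability strata, and each stratum is hit exactly once. A minimal implementation on the unit hypercube (a real analysis would then map these uniform samples through the input distributions, and scatterplot-based sensitivity analysis would plot each sampled input against an output of interest):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin hypercube sample on [0, 1)^d: each dimension is cut
    into n_samples equal strata and each stratum is sampled once."""
    u = rng.random((n_samples, n_dims))  # jitter within each stratum
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + u) / n_samples

rng = np.random.default_rng(0)
sample = latin_hypercube(10, 3, rng)
# Stratification check: every dimension has exactly one point per decile.
print(sample.shape,
      all(sorted(np.floor(sample[:, j] * 10).astype(int).tolist()) == list(range(10))
          for j in range(3)))
```

Compared with plain Monte Carlo, this stratification spreads a small number of runs evenly across each input's range, which is why it suits the "necessarily limited number of mechanistic calculations" mentioned in the abstract.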

  4. Satellite and ground-based remote sensing of aerosols during intense haze event of October 2013 over lahore, Pakistan

    Science.gov (United States)

    Tariq, Salman; Zia, ul-Haq; Ali, Muhammad

    2016-02-01

    Due to increases in population and economic development, mega-cities are facing more frequent haze events, which have important effects on the regional environment and climate. Understanding these effects requires in-depth knowledge of the optical and physical properties of aerosols under intense haze conditions. In this paper an effort has been made to analyze the microphysical and optical properties of aerosols during an intense haze event over the mega-city of Lahore by using remote sensing data obtained from satellites (Terra/Aqua Moderate-resolution Imaging Spectroradiometer (MODIS) and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO)) and a ground-based instrument (AErosol RObotic NETwork (AERONET)) during 6-14 October 2013. The instantaneous highest value of Aerosol Optical Depth (AOD) observed was 3.70 on 9 October 2013, followed by 3.12 on 8 October 2013. The primary cause of such high values is large-scale crop residue burning and urban-industrial emissions in the study region. AERONET observations show a daily mean AOD of 2.36, which is eight times higher than the value observed on a normal day. The observed fine-mode volume concentration is more than 1.5 times greater than the coarse-mode volume concentration on the high-aerosol-burden day. We also find high values (~0.95) of Single Scattering Albedo (SSA) on 9 October 2013. A scatterplot between AOD (500 nm) and Angstrom exponent (440-870 nm) reveals that biomass burning/urban-industrial aerosols are the dominant aerosol type on the heavy aerosol loading day over Lahore. A MODIS fire activity image suggests that the areas southeast of Lahore across the border with India are dominated by biomass burning activities. A Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model backward trajectory showed that winds at 1000 m above the ground are responsible for transport from the biomass burning region southeast of Lahore. CALIPSO derived sub-types of

  5. A critical review of the ESCAPE project for estimating long-term health effects of air pollution.

    Science.gov (United States)

    Lipfert, Frederick W

    2017-02-01

    The European Study of Cohorts for Air Pollution Effects (ESCAPE) is a 13-nation study of long-term health effects of air pollution based on subjects pooled from up to 22 cohorts that were intended for other purposes. Twenty-five papers have been published on associations of various health endpoints with long-term exposures to NOx, NO2, traffic indicators, PM10, PM2.5 and PM constituents including absorbance (elemental carbon). Seven additional ESCAPE papers found moderate correlations (R2=0.3-0.8) between measured air quality and the land-use regression estimates that were used; personal exposures were not considered. I found no project summaries or comparisons across papers; here I conflate the 25 ESCAPE findings in the context of other recent European epidemiology studies. Because one ESCAPE cohort contributed about half of the subjects, I consider it and the other 18 cohorts separately to compare their contributions to the combined risk estimates. I emphasize PM2.5 and confirm the published hazard ratio of 1.14 (1.04-1.26) per 10 μg/m3 for all-cause mortality. The ESCAPE papers found 16 statistically significant (p<0.05) risks among the 125 pollutant-endpoint combinations: 4 each for PM2.5 and PM10, 1 for PM absorbance, 5 for NO2, and 2 for traffic. No PM constituent was consistently significant. No significant associations were reported for cardiovascular mortality; low birthweight was significant for all pollutants except PM absorbance. Based on associations with PM2.5, I find large differences between all-cause death estimates and the sum of specific-cause death estimates. Scatterplots of PM2.5 mortality risks by cause show no consistency across the 18 cohorts, ostensibly because of the relatively few subjects. Overall, I find the ESCAPE project inconclusive and question whether the efforts required to estimate exposures for small cohorts were worthwhile. I suggest that detailed studies of the large cohort using historical exposures and additional

  6. Evaluation of NLDAS 12-km and downscaled 1-km temperature products in New York State for potential use in health exposure response studies

    Science.gov (United States)

    Estes, M. G., Jr.; Insaf, T.; Crosson, W. L.; Al-Hamdan, M. Z.

    2017-12-01

    Heat exposure metrics (maximum and minimum daily temperatures) have a close relationship with human health. While meteorological station data provide a good source of point measurements, temporally and spatially consistent temperature data are needed for health studies. Reanalysis data such as the North American Land Data Assimilation System (NLDAS) 12-km gridded product are an effort to resolve spatio-temporal environmental data issues; however, the resolution may be too coarse to accurately capture the effects of elevation, mixed land/water areas, and urbanization. As part of this NASA Applied Sciences Program funded project, the NLDAS 12-km air temperature product has been downscaled to 1 km using MODIS Land Surface Temperature patterns. Limited validation of the native 12-km NLDAS reanalysis data has been undertaken. Our objective is to evaluate the accuracy of both the 12-km and 1-km downscaled products using US Historical Climatology Network station data geographically dispersed across New York State. Statistical methods including correlation, scatterplots, time series and summary statistics were used to determine the accuracy of the remotely sensed maximum and minimum temperature products. The specific effects of elevation and slope on product accuracy were determined with 10-m digital elevation data, which were used to calculate percent slope and link with the temperature products at multiple scales. Preliminary results indicate the downscaled temperature product improves accuracy over the native 12-km product, with average correlation improvements from 0.81 to 0.85 for minimum and 0.71 to 0.79 for maximum temperatures in 2009. However, the benefits vary temporally and geographically.
Our results will inform health studies using remotely-sensed temperature products to determine health risk from excessive heat by providing a more robust assessment of the accuracy of the 12-km NLDAS product and additional accuracy gained from

  7. The iron-responsive microsomal proteome of Aspergillus fumigatus.

    Science.gov (United States)

    Moloney, Nicola M; Owens, Rebecca A; Meleady, Paula; Henry, Michael; Dolan, Stephen K; Mulvihill, Eoin; Clynes, Martin; Doyle, Sean

    2016-03-16

    Aspergillus fumigatus is an opportunistic fungal pathogen. Siderophore biosynthesis and iron acquisition are essential for virulence. Yet limited data exist on the adaptive nature of the fungal microsomal proteome under iron-limiting growth conditions, as encountered during host infection. Here, we demonstrate that under siderophore biosynthetic conditions, with significantly elevated fusarinine C (FSC) and triacetylfusarinine C (TAFC) production, proteome remodelling occurs. Specifically, a four-fold enrichment of transmembrane-containing proteins was observed with respect to whole-cell lysates following ultracentrifugation-based microsomal extraction. Comparative label-free proteomic analysis of microsomal extracts, isolated following iron-replete and -deplete growth, identified 710 unique proteins. Scatterplot analysis (MaxQuant) demonstrated high correlation amongst biological replicates from each growth condition (Pearson correlation >0.96 within groups; n=4 biological replicates). Quantitative and qualitative comparison revealed 231 proteins with a significant change in abundance between the iron-replete and iron-deplete conditions. Aspergillus fumigatus must acquire iron to facilitate growth and pathogenicity. Iron-chelating non-ribosomal peptides, termed siderophores, mediate iron uptake via membrane-localised transporter proteins. Here we demonstrate for the first time that growth of A. fumigatus under iron-deplete conditions, concomitant with siderophore biosynthesis, leads to an extensive remodelling of the microsomal proteome, including significantly altered levels of 231 constituent proteins (96 increased and 135 decreased in abundance), many of which have not previously been localised to the microsome. We also demonstrate the first synthesis of a fluorescent version of fusarinine C, an extracellular A. fumigatus siderophore, and its uptake and localization under iron-restricted conditions. This infers the use of an A. fumigatus

  8. Alcohol-Attributable Fraction in Liver Disease: Does GDP Per Capita Matter?

    Science.gov (United States)

    Kröner, Paul T; Mankal, Pavan Kumar; Dalapathi, Vijay; Shroff, Kavin; Abed, Jean; Kotler, Donald P

    2015-01-01

    The alcohol-attributable fraction (AAF) quantifies alcohol's disease burden. Alcoholic liver disease (ALD) is influenced by alcohol consumption per capita, duration, gender, ethnicity, and other comorbidities. In this study, we investigated the association between AAF/alcohol-related liver mortality and alcohol consumption per capita, stratified by per-capita gross domestic product (GDP). Data obtained from the World Health Organization and World Bank for both genders on the AAF in liver disease, per-capita alcohol consumption (L/y), and per-capita GDP (USD/y) were used to conduct a cross-sectional study. Countries were classified as "high-income" or "very low income" if their per-capita GDP was greater than $30,000 or less than $1,000, respectively. Differences in total alcohol consumption per capita and AAF were calculated using a 2-sample t test. Scatterplots were generated to supplement the Pearson correlation coefficients, and an F test was conducted to assess for differences in the variance of ALD between high-income and very low income countries. Twenty-six and 27 countries met the criteria for high-income and very low income countries, respectively. Alcohol consumption per capita was higher in high-income countries. AAF and alcohol consumption per capita for both genders were positively correlated in both high-income and very low income countries. The F test yielded an F value of 1.44 with P = .357. No statistically significant correlation was found among alcohol types and AAF. Significantly higher mortality from ALD was found in very low income countries relative to high-income countries. Previous studies had noted a decreased AAF in low-income countries as compared to higher-income countries; however, this study found no statistically significant difference between the AAF variances of low-income and high-income countries. A possible explanation is that both high-income and low-income populations will consume a sufficient amount of alcohol, irrespective of its
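
The statistics used in this study (a Pearson correlation between consumption and AAF, and an F statistic comparing group variances) can be sketched with synthetic numbers; the country values below are hypothetical, not the WHO/World Bank data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from centered vectors."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical per-country data (consumption in L/y vs. AAF in %):
consumption = [2.1, 4.5, 6.3, 8.0, 9.7, 11.2]
aaf = [12.0, 15.5, 18.1, 22.4, 24.0, 27.9]
r = pearson_r(consumption, aaf)

# Variance-ratio (F) statistic comparing ALD spread in two groups:
high_income = [0.8, 1.1, 0.9, 1.3]
low_income = [0.7, 1.5, 0.6, 1.6]
f_stat = np.var(low_income, ddof=1) / np.var(high_income, ddof=1)
print(round(r, 3), f_stat > 1)
```

The F statistic alone does not settle significance; as in the abstract, it must be referred to the F distribution with the appropriate degrees of freedom to obtain a P value.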

  9. Methyl chloride in the UT/LS observed by CARIBIC: global distribution, Asian summer monsoon outflow, and use as a tracer for tropical air

    Science.gov (United States)

    Baker, A. K.; Umezawa, T.; Oram, D.; Sauvage, C.; Rauthe-Schoech, A.; Montzka, S. A.; Zahn, A.; Brenninkmeijer, C. A. M.

    2014-12-01

    We present spatiotemporal variations of methyl chloride (CH3Cl) in the UT/LS observed mainly by the CARIBIC passenger aircraft for the years 2005-2011. The CH3Cl mixing ratio in the UT over Europe was higher than that observed at a European surface baseline station year-round, indicative of a persistent positive vertical gradient at NH mid latitudes. A series of flights over Africa and South Asia show that CH3Cl mixing ratios increase toward tropical latitudes, and the observed UT CH3Cl level over these two regions and the Atlantic was higher than that measured at remote surface sites. Strong emissions of CH3Cl in the tropics combined with meridional transport through the UT may explain such vertical and latitudinal gradients. Comparisons with CO data indicate that non-combustion sources in the tropics dominantly contribute to forming the latitudinal gradient of CH3Cl in the UT. We also observed elevated CH3Cl and CO in air influenced by biomass burning in South America and Africa, and the enhancement ratios derived for CH3Cl to CO in those regions agree with previous observations. In contrast, correlations indicate a high CH3Cl to CO ratio of 2.9±0.5 ppt ppb-1 in the Asian summer monsoon anticyclone and domestic biofuel emissions in South Asia are inferred to be responsible. We estimated CH3Cl emissions from South Asia to be 134±23 Gg Cl yr-1, which is higher than a previous estimate due to the higher CH3Cl to CO ratio observed in this study. We also examine the use of CH3Cl as a tracer of tropical tropospheric air in the LMS, where we identified air masses with elevated CH3Cl that were however stratospheric in terms of N2O. Back trajectories suggest recent low-latitude origins of such air masses in early summer. In this season, high CH3Cl LMS air shows a clear branch connecting stratospheric and tropical tropospheric air on N2O-CH3Cl scatterplots. This distinct feature vanishes in late summer when the LMS is ventilated by tropospheric air.
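
The CH3Cl-to-CO enhancement ratio reported above is the slope of a CH3Cl-versus-CO scatterplot for co-measured air samples. A sketch on synthetic plume data, using the abstract's 2.9 ppt ppb-1 value as the ground truth the fit should recover:

```python
import numpy as np

# Hypothetical co-measured mixing ratios in a plume (CO in ppb, CH3Cl in ppt),
# generated with a known enhancement ratio of 2.9 ppt/ppb plus small noise.
co = np.array([90.0, 110.0, 140.0, 170.0, 200.0])
ch3cl = 550.0 + 2.9 * co + np.array([3.0, -2.0, 1.5, -1.0, 0.5])

# Enhancement ratio = least-squares slope of the CH3Cl-vs-CO scatterplot.
slope, intercept = np.polyfit(co, ch3cl, 1)
print(round(slope, 1))
```

Scaling such a ratio by an independent CO emission inventory is the standard route to a CH3Cl emission estimate, which is how the abstract's South Asian flux figure is derived.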

  10. Uncertainty and Sensitivity Analysis Results Obtained in the 1996 Performance Assessment for the Waste Isolation Pilot Plant

    International Nuclear Information System (INIS)

    Bean, J.E.; Berglund, J.W.; Davis, F.J.; Economy, K.; Garner, J.W.; Helton, J.C.; Johnson, J.D.; MacKinnon, R.J.; Miller, J.; O'Brien, D.G.; Ramsey, J.L.; Schreiber, J.D.; Shinta, A.; Smith, L.N.; Stockman, C.; Stoelzel, D.M.; Vaughn, P.

    1998-01-01

    The Waste Isolation Pilot Plant (WIPP) is located in southeastern New Mexico and is being developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. A detailed performance assessment (PA) for the WIPP was carried out in 1996 and supports an application by the DOE to the U.S. Environmental Protection Agency (EPA) for the certification of the WIPP for the disposal of TRU waste. The 1996 WIPP PA uses a computational structure that maintains a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the many possible disruptions that could occur over the 10,000 yr regulatory period that applies to the WIPP and subjective uncertainty arising from the imprecision with which many of the quantities required in the PA are known. Important parts of this structure are (1) the use of Latin hypercube sampling to incorporate the effects of subjective uncertainty, (2) the use of Monte Carlo (i.e., random) sampling to incorporate the effects of stochastic uncertainty, and (3) the efficient use of the necessarily limited number of mechanistic calculations that can be performed to support the analysis. The use of Latin hypercube sampling generates a mapping from imprecisely known analysis inputs to analysis outcomes of interest that provides both a display of the uncertainty in analysis outcomes (i.e., uncertainty analysis) and a basis for investigating the effects of individual inputs on these outcomes (i.e., sensitivity analysis). The sensitivity analysis procedures used in the PA include examination of scatterplots, stepwise regression analysis, and partial correlation analysis. Uncertainty and sensitivity analysis results obtained as part of the 1996 WIPP PA are presented and discussed. Specific topics considered include two-phase flow in the vicinity of the repository, radionuclide release from the repository, fluid flow and radionuclide

  11. Descriptive Statistics: Reporting the Answers to the 5 Basic Questions of Who, What, Why, When, Where, and a Sixth, So What?

    Science.gov (United States)

    Vetter, Thomas R

    2017-11-01

    of the association between the exposure and the outcome (eg, the risk ratio or odds ratio) in the population likely resides. There are many possible ways to graphically display or illustrate different types of data. While there is often latitude as to the choice of format, ultimately, the simplest and most comprehensible format is preferred. Common examples include a histogram, bar chart, line chart or line graph, pie chart, scatterplot, and box-and-whisker plot. Valid and reliable descriptive statistics can answer basic yet important questions about a research data set, namely: "Who, What, Why, When, Where, How, How Much?"
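
The descriptive summaries behind the plot types listed above (for a box-and-whisker plot: the median and quartiles that define the box) can be computed with the Python standard library alone; the sample below is hypothetical:

```python
import statistics as st

# A small hypothetical sample and its basic descriptive statistics:
data = [3, 7, 8, 5, 12, 14, 21, 13, 18]
summary = {
    "n": len(data),
    "mean": st.mean(data),
    "median": st.median(data),
    "stdev": st.stdev(data),
    "min": min(data),
    "max": max(data),
}
# Quartiles give the box edges and median line of a box-and-whisker plot.
q1, q2, q3 = st.quantiles(data, n=4)
print(summary["median"], (q1, q3))
```

Note that `statistics.quantiles` defaults to the "exclusive" method; other software may use the "inclusive" convention and report slightly different quartiles for small samples.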

  12. Monitoring Recent Fluctuations of the Southern Pool of Lake Chad Using Multiple Remote Sensing Data: Implications for Water Balance Analysis

    Directory of Open Access Journals (Sweden)

    Wenbin Zhu

    2017-10-01

    The drought episodes in the second half of the 20th century have profoundly modified the state of Lake Chad, and investigation of its variations is necessary under the new circumstances. Multiple remote sensing observations were used in this paper to study its variation over the past 25 years. Unlike previous studies, only the southern pool of Lake Chad (SPLC) was selected as our study area, because it is the only permanent open-water area remaining after the serious lake recession of 1973-1975. Four satellite altimetry products were used for water level retrieval and 904 Landsat TM/ETM+ images were used for lake surface area extraction. Based on the water level (L) and surface area (A) retrieved for coinciding dates, linear regression was used to derive the SPLC's L-A curve, which was then integrated to estimate water volume variations (ΔV). The results show that the SPLC has been in a relatively stable phase, with a slight increasing trend from 1992 to 2016. On an annual average scale, the rates of increase of water level, surface area and water volume are 0.5 cm year−1, 0.14 km2 year−1 and 0.007 km3 year−1, respectively. As for the intra-annual variations of the SPLC, the seasonal variation amplitudes of water level, lake area and water volume are 1.38 m, 38.08 km2 and 2.00 km3, respectively. The scatterplots between precipitation and ΔV indicate a time lag of about one to two months in the response of water volume variations to precipitation, which makes it possible to predict ΔV. The water balance of the SPLC is significantly different from that of the entire Lake Chad: while evaporation accounts for 96% of the lake's total water losses, only 16% of the SPLC's losses are consumed by evaporation, with the other 84% offset by outflow.
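
The L-A-curve approach described above reduces to fitting A(L) by linear regression and integrating the fitted line between two water levels. A sketch with hypothetical level/area pairs (units follow the abstract: levels in m, areas in km2, volumes in km3):

```python
import numpy as np

# Hypothetical coinciding water levels (m) and surface areas (km^2),
# mimicking the altimetry/Landsat pairing described above.
level = np.array([278.0, 278.4, 278.9, 279.3, 279.8])
area = np.array([1490.0, 1515.0, 1546.0, 1570.0, 1601.0])

b, a = np.polyfit(level, area, 1)  # L-A curve: A(L) = a + b*L

def volume_change(l1, l2):
    """Integrate the fitted L-A curve between two levels:
    dV = integral of A(L) dL, in km^2 * m, converted to km^3."""
    integral = a * (l2 - l1) + 0.5 * b * (l2**2 - l1**2)
    return integral / 1000.0  # m -> km

dv = volume_change(278.0, 279.38)  # ~1.38 m, the seasonal amplitude above
print(dv)
```

Equivalently, the integral is the mean area over the level range times the level change, which is a quick sanity check on the result.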

  13. WebDMS: A Web-Based Data Management System for Environmental Data

    Science.gov (United States)

    Ekstrand, A. L.; Haderman, M.; Chan, A.; Dye, T.; White, J. E.; Parajon, G.

    2015-12-01

    DMS is an environmental Data Management System to manage, quality-control (QC), summarize, document chain-of-custody, and disseminate data from networks ranging in size from a few sites to thousands of sites, instruments, and sensors. The server-client desktop version of DMS is used by local and regional air quality agencies (including the Bay Area Air Quality Management District, the South Coast Air Quality Management District, and the California Air Resources Board), the EPA's AirNow Program, and the EPA's AirNow-International (AirNow-I) program, which offers countries the ability to run an AirNow-like system. As AirNow's core data processing engine, DMS ingests, QCs, and stores real-time data from over 30,000 active sensors at over 5,280 air quality and meteorological sites from over 130 air quality agencies across the United States. As part of the AirNow-I program, several instances of DMS are deployed in China, Mexico, and Taiwan. The U.S. Department of State's StateAir Program also uses DMS for five regions in China and plans to expand to other countries in the future. Recent development has begun to migrate DMS from an onsite desktop application to WebDMS, a web-based application designed to take advantage of cloud hosting and computing services to increase scalability and lower costs. WebDMS will continue to provide easy-to-use data analysis tools, such as time-series graphs, scatterplots, and wind- or pollution-rose diagrams, as well as allowing data to be exported to external systems such as the EPA's Air Quality System (AQS). WebDMS will also provide new GIS analysis features and a suite of web services through a RESTful web API. These changes will better meet air agency needs and allow for broader national and international use (for example, by the AirNow-I partners). We will talk about the challenges and advantages of migrating DMS to the web, modernizing the DMS user interface, and making it more cost-effective to enhance and maintain over time.

  14. Radical radiotherapy for early glottic cancer: Results in a series of 1087 patients from two Italian radiation oncology centers. I. The case of T1N0 disease

    International Nuclear Information System (INIS)

    Cellai, Enrico; Frata, Paolo; Magrini, Stefano M.; Paiar, Fabiola; Barca, Raffaella; Fondelli, Simona; Polli, Caterina; Livi, Lorenzo; Bonetti, Bartolomea; Vitali, Elisabetta; De Stefani, Agostina; Buglione, Michela; Biti, Gianpaolo

    2005-01-01

    Purpose: To retrospectively evaluate local control rates, late damage incidence, functional results, and second tumor occurrence according to different patient, tumor, and treatment features in a large bi-institutional series of T1 glottic cancer. Methods and Materials: A total of 831 T1 glottic cancer cases treated consecutively with radical intent at the Florence University Radiation Oncology Department (FLO) and at the Radiation Oncology Department of the University of Brescia-Istituto del Radio 'O. Alberti' (BS) were studied. Actuarial cumulative local control probability (LC), disease-specific survival (DSS), and overall survival (OS) rates were calculated and compared across the different clinical and therapeutic subgroups with both univariate and multivariate analysis. Types of relapse and their surgical salvage were evaluated, along with the functional results of treatment. Late damage incidence and second tumor cumulative probability (STP) were also calculated. Results: In the entire series, 3-, 5-, and 10-year OS was equal to 86%, 77%, and 57%, respectively. Corresponding values for LC were 86%, 84%, and 83%, and for DSS 96%, 95%, and 93%, taking into account surgical salvage of relapsed cases. Eighty-seven percent of the patients were cured with function preserved. Main determinants of worse LC at univariate analysis were: male gender, earlier treatment period, larger tumor extent, anterior commissure involvement, and the use of Cobalt 60. At multivariate analysis, only gender, tumor extent, anterior commissure involvement, and beam type retained statistical significance. Higher total doses and larger field sizes are significantly related (logistic regression) to a higher late damage incidence. Scatterplot analysis of various combinations of field dimensions and total dose showed that field dimensions >35 cm2, together with doses of >65 Gy, offer the best local control results together with an acceptably low late damage incidence.
Twenty-year STP

  15. WE-FG-206-12: Enhanced Laws Textures: A Potential MRI Surrogate Marker of Hepatic Fibrosis in a Murine Model

    Energy Technology Data Exchange (ETDEWEB)

    Li, B; Yu, H; Jara, H; Soto, J; Anderson, S [Boston University Medical Center, Boston, MA (United States)

    2016-06-15

    Purpose: To compare enhanced Laws texture derived from parametric proton density (PD) maps to other MRI-based surrogate markers (T2, PD, ADC) in assessing degrees of liver fibrosis in a murine model of hepatic fibrosis using an 11.7T scanner. Methods: This animal study was IACUC approved. Fourteen mice were divided into control (n=1) and experimental (n=13) groups. The latter were fed a DDC-supplemented diet to induce hepatic fibrosis. Liver specimens were imaged using an 11.7T scanner; parametric PD, T2, and ADC maps were generated from spin-echo pulsed field gradient and multi-echo spin-echo acquisitions. Enhanced Laws texture analysis was applied to the PD maps: first, hepatic blood vessels and liver margins were segmented/removed using an automated dual-clustering algorithm; second, an optimal thresholding algorithm was applied to reduce the partial volume artifact; next, the mean and standard deviation were corrected to minimize grayscale variation across images; finally, Laws texture was extracted. Degree of fibrosis was assessed by an experienced pathologist and by digital image analysis (%Area Fibrosis). Scatterplots comparing enhanced Laws texture, T2, PD, and ADC values to degrees of fibrosis were generated and correlation coefficients were calculated. Unenhanced Laws texture was also compared to assess the effectiveness of the proposed enhancements. Results: Hepatic fibrosis and the enhanced Laws texture were strongly correlated, with higher %Area Fibrosis associated with higher Laws texture (r=0.89). Only a moderate correlation was detected between %Area Fibrosis and unenhanced Laws texture (r=0.70). Strong correlation also existed between ADC and %Area Fibrosis (r=0.86). Moderate correlations were seen between %Area Fibrosis and PD (r=0.65) and T2 (r=0.66). Conclusions: Higher degrees of hepatic fibrosis are associated with increased Laws texture. The proposed enhancements improve the accuracy of Laws texture.
Enhanced Laws texture features are more accurate than PD and T2 in
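    Laws texture, as used above, is computed by convolving the image with 2D masks built from separable 5-tap kernels and summarizing the responses as energy statistics. The following is a minimal sketch, assuming the classic L5/E5/S5 kernel family and the zero-mean/unit-std grayscale normalization step mentioned in the abstract; the images are synthetic stand-ins, not the study's PD maps.

```python
import numpy as np

# Classic 1D Laws 5-tap kernels: level, edge, spot
L5 = np.array([1, 4, 6, 4, 1], dtype=float)
E5 = np.array([-1, -2, 0, 2, 1], dtype=float)
S5 = np.array([-1, 0, 2, 0, -1], dtype=float)

def normalize(img):
    """Grayscale standardization: zero mean, unit standard deviation."""
    return (img - img.mean()) / (img.std() + 1e-12)

def laws_energy(img, k_rows, k_cols):
    """Convolve with the 2D mask outer(k_rows, k_cols) (valid region only)
    and return the mean absolute response as a texture-energy measure."""
    mask = np.outer(k_rows, k_cols)
    h, w = img.shape
    out = np.empty((h - 4, w - 4))
    for i in range(h - 4):
        for j in range(w - 4):
            out[i, j] = np.sum(img[i:i + 5, j:j + 5] * mask)
    return float(np.mean(np.abs(out)))

rng = np.random.default_rng(0)
smooth = normalize(np.tile(np.linspace(0.0, 1.0, 32), (32, 1)))  # smooth ramp: low texture
rough = normalize(rng.random((32, 32)))                          # noise: high texture

e_smooth = laws_energy(smooth, E5, E5)
e_rough = laws_energy(rough, E5, E5)
```

    A rough surface yields a much larger edge-energy response than a smooth gradient, which is the direction of the association reported above (more fibrosis, higher Laws texture).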

  16. Application of the Data Quality Diagnosis Procedure in a Wooden Container Manufacturing Company

    Directory of Open Access Journals (Sweden)

    Darian Pérez Rodríguez

    2010-11-01

    pesos because of poor data quality. Consistency is the least affected dimension; overall, the quality level is 78% and data security is fair. For the sealer and clear products, approximately half of the errors are serious or moderately serious. The erroneous values are produced in the economic area; the causes group into data entry, misinterpretation of requests, and poor database design, and solutions such as a double-data-entry system are proposed.

    The most frequently used techniques and analysis methods were: surveys, interviews, bibliographic consultation, document review, process mapping, boxplots, scatterplots, and Pareto and Ishikawa diagrams.


  17. Comparison of spatiotemporal prediction models of daily exposure of individuals to ambient nitrogen dioxide and ozone in Montreal, Canada.

    Science.gov (United States)

    Buteau, Stephane; Hatzopoulou, Marianne; Crouse, Dan L; Smargiassi, Audrey; Burnett, Richard T; Logan, Travis; Cavellin, Laure Deville; Goldberg, Mark S

    2017-07-01

    In previous studies investigating the short-term health effects of ambient air pollution, the exposure metric often used is the daily average across monitors, thus assuming that all individuals have the same daily exposure. Studies that incorporate space-time exposures of individuals are essential to further our understanding of the short-term health effects of ambient air pollution. As part of a longitudinal cohort study of the acute effects of air pollution that incorporated subject-specific information and medical histories of subjects throughout the follow-up, the purpose of this study was to develop and compare different prediction models using data from fixed-site monitors and other monitoring campaigns to estimate daily, spatially-resolved concentrations of ozone (O3) and nitrogen dioxide (NO2) at participants' residences in Montreal, 1991-2002. We used the following methods to predict spatially-resolved daily concentrations of O3 and NO2 for each geographic region in Montreal (defined by three-character postal code areas): (1) assigning concentrations from the nearest monitor; (2) spatial interpolation using inverse-distance weighting; (3) back-extrapolation from a land-use regression model based on a dense monitoring survey; and (4) a combination of a land-use and Bayesian maximum entropy model. We used a variety of indices of agreement to compare estimates of exposure assigned by the different methods, notably scatterplots of pairwise predictions, distributions of differences, and computation of the absolute-agreement intraclass correlation (ICC). For each pairwise prediction, we also produced maps of the ICCs by these regions, indicating the spatial variability in the degree of agreement. We found some substantial differences in agreement across pairs of methods in daily mean predicted concentrations of O3 and NO2. On a given day and postal code area the difference in the concentration assigned could be as high as 131 ppb for O3 and 108 ppb
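    Method (2) above, spatial interpolation by inverse-distance weighting, can be sketched as follows; the monitor coordinates and NO2 values are hypothetical, not the Montreal network.

```python
import math

def idw(stations, target, power=2):
    """Inverse-distance-weighted estimate at `target` from (x, y, value)
    monitoring stations."""
    num = den = 0.0
    for x, y, value in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return value            # target coincides with a monitor
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical daily NO2 readings (ppb) at three fixed-site monitors
monitors = [(0.0, 0.0, 20.0), (1.0, 0.0, 30.0), (0.0, 1.0, 40.0)]
estimate = idw(monitors, (0.1, 0.1))   # dominated by the nearest monitor
```

    With `power=2`, the estimate is pulled strongly toward the closest station, which is the usual behaviour chosen for air-quality surfaces.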

  18. A History of Regression and Related Model-Fitting in the Earth Sciences (1636?-2000)

    International Nuclear Information System (INIS)

    Howarth, Richard J.

    2001-01-01

    roots in meeting the evident need for improved estimators in spatial interpolation. Technical advances in regression analysis during the 1970s embraced the development of regression diagnostics and consequent attention to outliers; the recognition of problems caused by correlated predictors, and the subsequent introduction of ridge regression to overcome them; and techniques for fitting errors-in-variables and mixture models. Improvements in computational power have enabled ever more computer-intensive methods to be applied. These include algorithms which are robust in the presence of outliers, for example Rousseeuw's 1984 Least Median of Squares; nonparametric smoothing methods, such as kernel functions, splines, and Cleveland's 1979 LOcally WEighted Scatterplot Smoother (LOWESS); and the Classification and Regression Tree (CART) technique of Breiman and others in 1984. Despite a continuing improvement in the rate of technology transfer from the statistical to the earth-science community, which included an abrupt drop to a time lag of about 10 years following the introduction of digital computers, these more recent developments are only just beginning to penetrate beyond the research community of earth scientists. Examples of applications to problem-solving in the earth sciences are given.
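    Cleveland's LOWESS, mentioned above, fits a weighted linear regression around each point using tricube weights over the nearest neighbours. A minimal single-pass sketch (without the robustness iterations of the full algorithm, and assuming distinct x values):

```python
import numpy as np

def lowess(x, y, frac=0.5):
    """Single-pass LOWESS: at each x[i], fit y ~ a + b*x by weighted least
    squares with tricube weights over the frac*n nearest neighbours."""
    n = len(x)
    k = max(2, int(frac * n))
    fitted = np.empty(n)
    A = np.column_stack([np.ones(n), x])          # design matrix [1, x]
    for i in range(n):
        d = np.abs(x - x[i])
        h = np.sort(d)[k - 1]                     # bandwidth: k-th nearest distance
        w = np.clip(1.0 - (d / h) ** 3, 0.0, 1.0) ** 3   # tricube kernel
        Aw = A * w[:, None]                       # row-weighted design matrix
        # solve the weighted normal equations (A^T W A) beta = A^T W y
        beta = np.linalg.lstsq(A.T @ Aw, Aw.T @ y, rcond=None)[0]
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted

x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + 0.05 * np.sin(40.0 * x)   # nearly linear signal with a small wiggle
smooth = lowess(x, y)
```

    Because each local fit is linear, the smoother tracks a linear trend without edge bias while averaging out the high-frequency wiggle.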

  19. HJ-Biplot como herramienta de inspección de matrices de datos bibliométricos

    Directory of Open Access Journals (Sweden)

    Díaz-Faes, Adrián A.

    2013-03-01

    The aim of this paper is to demonstrate the usefulness of the HJ-Biplot in bibliometric studies. It is a simple and intuitive display, similar to a scatterplot, but capturing the multivariate covariance structures between bibliometric indicators. Its interpretation does not require specialized statistical knowledge, merely knowing how to interpret the length of a vector, the angle between two vectors, and the distance between two points. With this aim, an analysis has been performed of the scientific output of CSIC's own centres as well as of joint centres during the period 2006-2009, in relation to a series of indicators based on impact and collaboration. Biplot methods are graphical representations of multivariate data. Using the HJ-Biplot it is possible to interpret simultaneously the position of the centres, represented by dots; the indicators, represented by vectors; and the relationships between them. The position of the centres in the context of their area as well as within the overall CSIC is analysed, and centres with unique behaviour are identified. We conclude that the Humanities and Social Sciences, and Food Science and Technology, are the areas whose centres perform most homogeneously, while Physics and Agriculture are more heterogeneous.

    The objective of this work is to demonstrate the usefulness of the HJ-Biplot in bibliometric studies. The HJ-Biplot is an intuitive and simple representation, similar to a scatterplot, but one that captures the multivariate covariance structures among bibliometric indicators. Its interpretation requires no specialized statistical knowledge; it suffices to know how to interpret the length of a vector, the angle between two vectors, and the distance between two points. To this end, the scientific activity of CSIC's own and joint centres during the period 2006-2009 is analysed through a series of indicators of
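    The geometry behind any biplot comes from the singular value decomposition of the column-centred data matrix. In the HJ-Biplot variant described above, both row and column markers are scaled by the singular values, so inter-point distances and inter-vector angles can be read in the same plane. A sketch with a hypothetical centre-by-indicator matrix:

```python
import numpy as np

# Hypothetical centre-by-indicator matrix
# (rows: research centres, columns: bibliometric indicators)
X = np.array([[2.0, 1.0, 0.5],
              [1.5, 0.8, 0.4],
              [0.2, 2.5, 1.9],
              [0.1, 2.2, 2.0]])
Xc = X - X.mean(axis=0)            # column-centre before factorizing

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
# HJ-Biplot: rows AND columns both carry the singular values, so both sets
# of markers reach the same (highest) quality of representation in the plane.
row_markers = U[:, :2] * s[:2]     # centres  -> points
col_markers = Vt[:2].T * s[:2]     # indicators -> vectors
```

    Plotting `row_markers` as dots and `col_markers` as arrows from the origin reproduces the kind of display the abstract describes.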

  20. Regional Distribution of Metals and C and N Stable Isotopes in the Epiphytic Ball Moss (Tillandsia Recurvata) at the Mezquital Valley, Hidalgo State

    Science.gov (United States)

    Zambrano-Garcia, A.; López-Veneroni, D.; Rojas, A.; Torres, A.; Sosa, G.

    2007-05-01

    the local oil refinery and the oil-fueled power plant. Two distinct Ni:V scatterplot trends suggest that there are two main petrogenic emission sources in the region. Calcium and, to some extent, Mg were higher near the mining areas and a calcium carbonate factory. Lead had a diffuse distribution, probably related to former gasoline vehicle exhaust emissions, rather than to current emissions. Antimony was more abundant at sites far from agriculture and industrial areas, which suggests a natural origin (rocks or soils). The spatial distribution of stable isotopes also showed distinct patterns near the industrial sources, with relatively 13C-depleted and 15N-enriched values near the oil refinery and the electrical power plant. Although it is not yet possible to provide quantitative estimates for emission contributions per source type, biomonitoring with T. recurvata provided for the first time a clear picture of the relative deposition patterns for several airborne metals in the Mezquital Valley.

  1. Low resolution scans can provide a sufficiently accurate, cost- and time-effective alternative to high resolution scans for 3D shape analyses

    Directory of Open Access Journals (Sweden)

    Ariel E. Marcy

    2018-06-01

    Background: Advances in 3D shape capture technology have made powerful shape analyses, such as geometric morphometrics, more feasible. While highly accurate micro-computed tomography (µCT) scanners have been the “gold standard,” recent improvements in 3D surface scanners may make this technology a faster, portable, and cost-effective alternative. Several studies have already compared the two devices, but all used relatively large specimens such as human crania. Here we perform shape analyses on Australia's smallest rodent to test whether a 3D scanner produces results similar to those of a µCT scanner. Methods: We captured 19 delicate mouse (Pseudomys delicatulus) crania with a µCT scanner and a 3D scanner for geometric morphometrics. We ran multiple Procrustes ANOVAs to test how variation due to scan device compared to other sources, such as biologically relevant variation and operator error. We quantified operator error as levels of variation and repeatability. Further, we tested whether the two devices performed differently at classifying individuals based on sexual dimorphism. Finally, we inspected scatterplots of principal component analysis (PCA) scores for non-random patterns. Results: In all Procrustes ANOVAs, regardless of the factors included, differences between individuals contributed the most to total variation. The PCA plots reflect this in how the individuals are dispersed. Including only the symmetric component of shape increased the biological signal relative to variation due to device and due to error. 3D scans showed a higher level of operator error, as evidenced by a greater spread of their replicates on the PCA, a higher level of multivariate variation, and a lower repeatability score. However, the 3D scan and µCT scan datasets performed identically in classifying individuals based on intra-specific patterns of sexual dimorphism. Discussion: Compared to µCT scans, we find that even low resolution 3D scans of very small specimens are
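    Geometric morphometrics rests on Procrustes superimposition: removing translation, scale, and rotation from landmark configurations before comparing shapes. A minimal 2D sketch (orthogonal Procrustes via SVD; the square landmarks are purely illustrative, not cranial data):

```python
import numpy as np

def procrustes_align(ref, shape):
    """Align `shape` to `ref` (k x 2 landmark arrays): remove translation
    (centre), scale (unit centroid size), and rotation (orthogonal
    Procrustes via SVD), returning the aligned coordinates."""
    A = ref - ref.mean(axis=0)
    B = shape - shape.mean(axis=0)
    A = A / np.linalg.norm(A)
    B = B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(B.T @ A)   # optimal rotation minimizing ||B R - A||
    return B @ (U @ Vt)

ref = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
# Same square after translation, doubling in size, and a 90-degree rotation
theta = np.pi / 2
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
shape = 2.0 * (ref @ rot.T) + np.array([5.0, -3.0])

aligned = procrustes_align(ref, shape)
target = (ref - ref.mean(axis=0)) / np.linalg.norm(ref - ref.mean(axis=0))
resid = float(np.linalg.norm(aligned - target))
```

    After alignment, only genuine shape differences remain, which is what the Procrustes ANOVAs above partition into individual, device, and error components.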

  2. Estimated Probability of a Cervical Spine Injury During an ISS Mission

    Science.gov (United States)

    Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.

    2013-01-01

    Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization, and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean, resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded from the mass of the head, producing an extension-flexion motion deforming the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values, producing an injury transfer function (ITF). An injury event scenario (IES) was constructed in which crew 1, moving through a primary or standard translation path transferring large-volume equipment, impacts stationary crew 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainty in the model input factors was estimated from representative datasets and expressed in terms of probability distributions. A Monte Carlo method utilizing simple random sampling was employed to propagate both aleatory and epistemic uncertain factors. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity. CSIM was developed in the SimMechanics/Simulink environment with a
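    The uncertainty-propagation step can be illustrated with simple random sampling through a toy response function, followed by a correlation-based sensitivity index (a simplified stand-in for the partial correlation coefficients mentioned above). The model, input distributions, and parameter values here are all invented for illustration and are not CSIM's.

```python
import random
import statistics

random.seed(1)

def response(mass, stiffness):
    """Toy stand-in for a biomechanical response: grows with head mass,
    shrinks with neck stiffness (NOT the actual CSIM model)."""
    return mass / stiffness

# Simple random sampling of the (hypothetical) uncertain input factors
n = 2000
mass = [random.gauss(4.5, 0.5) for _ in range(n)]       # head mass, kg (illustrative)
stiff = [random.gauss(100.0, 10.0) for _ in range(n)]   # neck stiffness (illustrative)
out = [response(m, k) for m, k in zip(mass, stiff)]

def corr(a, b):
    """Pearson correlation, used here as a simple sensitivity index."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = statistics.fmean([(x - ma) * (y - mb) for x, y in zip(a, b)])
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

r_mass = corr(mass, out)    # positive: output rises with mass
r_stiff = corr(stiff, out)  # negative: output falls with stiffness
```

    With independent inputs, the sign and magnitude of these correlations already rank input-factor sensitivity; PCCs extend this by regressing out the other inputs first.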

  3. Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs.

    Science.gov (United States)

    Ishak, K Jack; Stolar, Marilyn; Hu, Ming-yi; Alvarez, Piedad; Wang, Yamei; Getsios, Denis; Williams, Gregory C

    2012-12-01

    Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in early days of the stay is usually more intense, however, and thus, the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs. Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models, ranging from those based on a simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques, were considered. A final model was selected, using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike's Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood-pressure control. The MA Case Mix dataset included data
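    The simplest of the models considered, a logarithmic transformation of LOS, can be sketched with closed-form least squares; the LOS/cost pairs below are invented to mimic the high-then-plateau pattern described above, not the MA Case Mix data.

```python
import math

# Hypothetical (LOS in days, per-diem cost in $) pairs showing the
# high-then-plateau pattern (illustrative, not the MA Case Mix data)
data = [(1, 5000), (2, 4100), (3, 3600), (5, 3100), (8, 2800), (12, 2600)]

# Ordinary least squares for PD = a + b*log(LOS), closed form for one predictor
xs = [math.log(los) for los, _ in data]
ys = [cost for _, cost in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def total_cost(los):
    """Total hospitalization cost = LOS times the LOS-specific per-diem,
    rather than LOS times a flat average per-diem."""
    return los * (a + b * math.log(los))
```

    The fitted slope `b` is negative, so the per-diem declines with LOS while the total cost still rises, which is exactly the dependence the flat-average approach ignores.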

  4. Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs

    Directory of Open Access Journals (Sweden)

    Ishak K

    2012-12-01

    Background: Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in early days of the stay is usually more intense, however, and thus, the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. Methods: An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs. Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models, ranging from those based on a simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques, were considered. A final model was selected, using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike's Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood

  5. Photometry of the bright and dark terrains of Vesta and Lutetia with comparison to other asteroids

    Science.gov (United States)

    Longobardo, A.; Palomba, E.; Capaccioni, F.; De Sanctis, M.; Tosi, F.; Schroder, S.; Li, J.; Capria, M.; Ammannito, E.; Raymond, C.; Russell, C.

    2014-07-01

    The reflectance of a planetary surface as measured at different phase angles can provide useful information about several properties, both optical (importance of multiple and single scattering, regolith shadowing) and physical (roughness and regolith grain size). In particular, disk-resolved observations allow one to monitor variations in photometric properties across a planetary surface. In this work, we retrieved disk-resolved phase functions of asteroids Vesta and Lutetia by means of hyperspectral images returned by the Visible and InfraRed (VIR) mapping spectrometer onboard NASA's Dawn spacecraft and the Visible, InfraRed, and Thermal Imaging Spectrometer (VIRTIS) onboard ESA's Rosetta spacecraft, respectively. We then compared their photometric properties with those of other asteroids closely explored by space missions (Gaspra, Ida, Eros, Annefrank, Steins, Mathilde). The trend of reflectance as a function of phase angle was obtained through a statistical analysis based on an empirical definition of reflectance families. For each family, the relation between reflectance and phase was then calculated. On Vesta, we find steeper phase functions in dark material units, which become flatter with increasing albedo. This has been ascribed to the relevant role of multiple scattering in bright regions. As opposed to Vesta, Lutetia is a more homogeneous body, so we can consider a unique phase function for the whole asteroid surface. We chose two parameters to describe the photometric behavior of these asteroids: the reflectance that would be observed at a 30° phase, tagged R30, and the "phase slope," i.e., the percent decrease in reflectance between 20° and 60° phase, tagged PS. These two parameters were also calculated for the disk-resolved phase functions of other asteroids available in the literature. We find that all S-type asteroids fall in the same region of the R30-PS scatterplot, due to their similar photometric properties. C
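    The two summary parameters can be computed from any sampled phase curve: R30 by interpolating the curve at 30° phase, and PS as the percent decrease in reflectance between 20° and 60°. A sketch with a hypothetical phase curve (not VIR or VIRTIS data):

```python
def phase_params(phase_deg, refl):
    """R30 (reflectance at 30 deg phase) and PS (percent decrease in
    reflectance between 20 and 60 deg), from a sampled phase curve."""
    def interp(target):
        # linear interpolation between the bracketing samples
        for (p0, r0), (p1, r1) in zip(zip(phase_deg, refl),
                                      zip(phase_deg[1:], refl[1:])):
            if p0 <= target <= p1:
                t = (target - p0) / (p1 - p0)
                return r0 + t * (r1 - r0)
        raise ValueError("target phase outside sampled range")

    r20, r30, r60 = interp(20.0), interp(30.0), interp(60.0)
    return r30, 100.0 * (r20 - r60) / r20

# Hypothetical disk-resolved phase curve: reflectance falls with phase angle
phases = [10, 20, 30, 40, 50, 60, 70]
refl = [0.40, 0.36, 0.33, 0.30, 0.27, 0.25, 0.23]
R30, PS = phase_params(phases, refl)
```

    Each asteroid (or reflectance family) then contributes one (R30, PS) point to the scatterplot used above to separate taxonomic types.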

  6. Trends in surface-water quality at selected National Stream Quality Accounting Network (NASQAN) stations, in Michigan

    Science.gov (United States)

    Syed, Atiq U.; Fogarty, Lisa R.

    2005-01-01

    -treatment processes, and effective regulations. Phosphorus data for most of the study stations could not be analyzed because of the data limitations for trend tests. The only station with a significant negative trend in total phosphorus concentration is the Clinton River at Mount Clemens. However, scatterplot analyses of phosphorus data indicate decreasing concentrations with time for most of the study stations. Positive trends in concentration of nitrogen compounds were detected at the Kalamazoo River near Saugatuck and the Muskegon River near Bridgeton. Positive trends in both fecal coliform and total fecal coliform were detected at the Tahquamenon River near Paradise. Various point and nonpoint sources could produce such positive trends, but most commonly the increase in concentrations of nitrogen compounds and fecal coliform bacteria is associated with agricultural practices and sewage-plant discharges. The constituent with the most numerous and geographically widespread significant trends is pH. The pH levels increased at six out of nine stations on all the major rivers in Michigan, with no negative trend at any station. The cause of the pH increase is difficult to determine, as it could be related to a combination of anthropogenic activities and natural processes occurring simultaneously in the environment. Trends in concentration of major ions, such as calcium, sodium, magnesium, sulfate, fluoride, chloride, and potassium, were detected at eight out of nine stations. A negative trend was detected only in sulfate and fluoride concentrations; a positive trend was detected only in calcium concentration. The major ions with the most widespread significant trends are sodium and chloride; three positive and two negative trends were detected for sodium, and three negative and two positive trends were detected for chloride. The negative trends in chloride concentrations outnumbered the positive trends.
This result indicates a slight improvement in surface-water quality because
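    The abstract does not name the trend test used, but the nonparametric Mann-Kendall test is a common choice for water-quality records like these. A minimal sketch (no correction for ties or serial correlation), run on invented chloride values:

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test: S statistic and the normal-approximation
    Z score (no correction for ties or serial correlation)."""
    n = len(series)
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Hypothetical annual chloride concentrations (mg/L) with an upward drift
chloride = [10.1, 10.4, 10.2, 10.9, 11.3, 11.0, 11.8, 12.1, 12.0, 12.6]
s_stat, z = mann_kendall(chloride)
```

    A Z score beyond ±1.96 flags a significant monotonic trend at the 5% level, the kind of positive/negative trend calls reported for the NASQAN stations above.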