WorldWideScience

Sample records for stereology-based measurement predicts

  1. Digital stereology in neuropathology

    DEFF Research Database (Denmark)

    Kristiansen, Sarah Line Brøgger; Nyengaard, Jens Randel

    2012-01-01

    ...-dimensional structural knowledge. Accordingly, stereology is a science based on statistical sampling principles and geometric measures. The application of stereology to neuropathological studies allows the researcher to efficiently obtain a precise estimate of various structural quantities. This neuropathological review ... will therefore present the relevant stereological estimators for obtaining reliable quantitative structural data from brains and peripheral nerves when using digital light microscopy. It is discussed how to obtain brain and nerve fibre samples to fulfil the requirements for the estimators. A presentation...

  2. Newsletter '77 in stereology

    International Nuclear Information System (INIS)

    Ondracek, G.

    1977-12-01

    There are three groups of contributions forming the present Newsletter in Stereology: theoretical contributions, stereological activities in the bio-sciences, and quantitative image analysis in materials science. The report is introduced by two papers treating theoretical problems, namely the definition of particle size based on the total curvature and the definition of pattern recognition categories. There follows a summarizing description and comparison of alternative techniques used to measure and derive stereological parameters in the bio-sciences. The discussion includes sample preparation, semi-automatic and fully automatic measuring procedures, as well as the computation of primary data. The biological part ends by considering the use of these quantitative microscopical methods to investigate and classify foreign compounds inside the human liver stereologically. The materials science part reports on tests made on steel specimens to evaluate the accuracy of automatic microstructural analyses and on the use of image 'erosion' and 'dilatation' to measure microstructural parameters automatically. The latter subject is part of a series on morphology in quantitative metallography started in the previous Newsletter '76. The last paper on materials science considers the use of stereology and microstructural analysis with respect to quality control, choosing WC-Co hardmetals as an example, where stereologically defined microstructural parameters not only serve to describe microstructures quantitatively but also provide a useful tool to determine properties indirectly. (orig.) [de

  3. Newsletter '76 in stereology

    International Nuclear Information System (INIS)

    Ondracek, G.

    1976-08-01

    The present newsletter on stereology opens with a brief outlook on stereological problems to be solved in the future, compares definitions in pattern recognition and stereology, and presents the main notions of mathematical morphology used in quantitative metallography. This includes the description of the main stereological equations relating the parameters describing three-dimensional features to the parameters measured in plane sections, as well as a special type of equation for practical use by which the average fiber length in composite materials can be determined. In this context, the methods of particle shape description are summarized and reviewed, and an example is given of how particle size and shape distributions can be measured statistically by automatic feature analysis of morphometric sections. The introduction of stereological microstructural parameters into microstructure-property equations opens the way to calculating material properties from a stereological microstructure analysis and extends the possibilities of common microstructural quality control. This is demonstrated for WC-Co hard metals. (orig./GSC) [de

  4. PROBES, POPULATIONS, SAMPLES, MEASUREMENTS AND RELATIONS IN STEREOLOGY

    Directory of Open Access Journals (Sweden)

    Robert T Dehoff

    2011-05-01

    Full Text Available This summary paper provides an overview of the content of stereology. The typical problem at hand centers around some three-dimensional object that has an internal structure that determines its function, performance, or response. To understand and quantify the geometry of that structure it is necessary to probe it with geometric entities: points, lines, planes, volumes, etc. Meaningful results are obtained only if the set of probes chosen for use in the assessment is drawn uniformly from the population of such probes for the structure as a whole. This requires an understanding of the population of each kind of probe. Interactions of the probes with the structure produce geometric events, which are the focus of stereological measurements. In almost all applications the measurement that is made is a simple count of the number of these events. Rigorous application of these requirements for sample design produces unbiased estimates of geometric properties of features in the structure, no matter how complex the features are or how they are arranged in space. It is this assumption-free characteristic of the methodology that makes it a powerful tool for characterizing the internal structure of three dimensional objects.
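    Since almost every stereological measurement reduces to counting such probe-structure events, a minimal worked example may help fix ideas: the classical point-count estimator of volume fraction, V_V ≈ P_P (points hitting the phase of interest divided by points applied). The sketch below is illustrative only, not taken from the paper; the counts are invented.

        # Point-count estimator of volume fraction (V_V ~ P_P).
        # Counts are invented for illustration; in practice they come from a
        # systematic grid of test points applied to uniformly sampled sections.
        points_hitting_phase = 137   # grid points landing on the feature of interest
        points_applied = 600         # total grid points applied
        vv_estimate = points_hitting_phase / points_applied
        print(f"Estimated volume fraction V_V = {vv_estimate:.3f}")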

  5. [Simulation and data analysis of stereological modeling based on virtual slices].

    Science.gov (United States)

    Wang, Hao; Shen, Hong; Bai, Xiao-yan

    2008-05-01

    To establish a computer-assisted stereological model for simulating the process of slice sectioning and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically and implemented as Win32 software based on MFC, using Microsoft Visual Studio as the IDE, to simulate the (in principle unlimited) process of sectioning and to analyse the data derived from the model. The linearity of the fit of the model was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high throughput (>94.5% and 92%) in homogeneity and independence tests. The data on density, shape and size of the sections were tested and found to conform to a normal distribution. The output of the model and that from the image analysis system showed statistical correlation and consistency. The algorithm we describe can be used for evaluating the stereological parameters of the structure of tissue slices.
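    As a rough illustration of what such a virtual-sectioning simulation involves (a hedged sketch, not the authors' Win32/MFC implementation), the following fragment places spheres of a fixed, arbitrary radius at random positions in a unit cube, cuts them with one random horizontal plane, and records the resulting circular profile radii.

        import math, random

        # Hedged sketch of virtual sectioning: random spheres in a unit cube cut
        # by a random plane z = z0; intersected spheres give circular profiles.
        random.seed(1)
        N, R = 200, 0.03                 # number of particles and radius (arbitrary)
        centres = [(random.random(), random.random(), random.random()) for _ in range(N)]
        z0 = random.random()             # random section plane
        profiles = []
        for cx, cy, cz in centres:
            d = abs(cz - z0)             # distance from sphere centre to the plane
            if d < R:
                profiles.append(math.sqrt(R * R - d * d))   # profile (circle) radius
        mean_r = sum(profiles) / len(profiles) if profiles else float("nan")
        print(f"{len(profiles)} of {N} particles hit; mean profile radius = {mean_r:.4f}")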

  6. Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.

    Science.gov (United States)

    Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J

    2016-03-01

    To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with the sampling intensity 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%, SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal fat volume reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
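    The volume estimate behind a point-counting design of this kind is the Cavalieri-type relation V ≈ d · (a/p) · ΣP, where d is the spacing between analysed sections, a/p the area associated with one grid point, and ΣP the total point count. A minimal sketch with invented numbers (the grid constant and slice spacing are assumptions for illustration, not values from the study):

        # Cavalieri-type volume estimate from point counts: V ~ d * (a/p) * sum(P).
        # All numbers are invented for illustration.
        d_cm = 0.5 * 8                   # 5 mm slices, every 8th analysed -> 4 cm spacing
        a_per_p_cm2 = 1.2                # area represented by one test point (cm^2)
        P = [14, 22, 31, 28, 19, 9]      # point counts on the fat compartment per section
        volume_cm3 = d_cm * a_per_p_cm2 * sum(P)
        print(f"Estimated fat volume ~ {volume_cm3:.0f} cm^3")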

  7. Efficient stereological approaches for the volumetry of a normal or enlarged spleen from MDCT images

    Energy Technology Data Exchange (ETDEWEB)

    Mazonakis, Michalis; Stratakis, John; Damilakis, John [University of Crete, Department of Medical Physics, Faculty of Medicine, P.O. Box 2208, Iraklion, Crete (Greece)

    2015-06-01

    To introduce efficient stereological approaches for estimating the volume of a normal or enlarged spleen from MDCT. All study participants underwent an abdominal MDCT. The first group included 20 consecutive patients with splenomegaly and the second group consisted of 20 subjects with a normal spleen. Splenic volume estimations were performed using the stereological point counting method. Stereological assessments were optimized using the systematic slice sampling procedure. Planimetric measurements based on manual tracing of splenic boundaries on each slice were taken as reference values. Stereological analysis using five to eight systematically sampled slices provided enlarged splenic volume estimations with a mean precision of 4.9 ± 1.0 % in a mean time of 2.3 ± 0.4 min. A similar measurement duration and error were observed for normal splenic volume assessment using four to seven systematically selected slices. These stereological approaches slightly but insignificantly overestimated the volume of a normal and enlarged spleen compared to planimetry (P > 0.05) with a mean difference of -1.3 ± 4.3 % and -2.7 ± 5.2 %, respectively. The two methods were highly correlated (r ≥ 0.96). The variability of repeated stereological estimations was below 3.8 %. The proposed stereological approaches enable the rapid, reproducible, and accurate splenic volume estimation from MDCT data in patients with or without splenomegaly. (orig.)

  8. Unbiased stereologic techniques for practical use in diagnostic histopathology

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1995-01-01

    Grading of malignancy by the examination of morphologic and cytologic details in histologic sections from malignant neoplasms is based exclusively on qualitative features, associated with significant subjectivity, and thus rather poor reproducibility. The traditional way of malignancy grading may be improved by introducing quantitative techniques in the histopathologic discipline of malignancy grading. Unbiased stereologic methods, especially those based on measurements of nuclear three-dimensional mean size, have during the last decade proved their value in this regard. In this survey, the methods are reviewed regarding the basic technique involved, sampling, efficiency, and reproducibility. Various types of cancers, where stereologic grading of malignancy has been used, are reviewed and discussed with regard to the development of a new objective and reproducible basis for carrying out prognosis-related malignancy grading...

  9. Unbiased stereologic techniques for practical use in diagnostic histopathology

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1995-01-01

    Grading of malignancy by the examination of morphologic and cytologic details in histologic sections from malignant neoplasms is based exclusively on qualitative features, associated with significant subjectivity, and thus rather poor reproducibility. The traditional way of malignancy grading may be improved by introducing quantitative techniques in the histopathologic discipline of malignancy grading. Unbiased stereologic methods, especially those based on measurements of nuclear three-dimensional mean size, have during the last decade proved their value in this regard. In this survey, the methods are reviewed regarding ... of solid tumors. This new, unbiased attitude to malignancy grading is associated with excellent virtues, which ultimately may help the clinician in the choice of optimal treatment of the individual patient suffering from cancer. Stereologic methods are not solely applicable to the field of malignancy...

  10. Practical Stereology Applications for the Pathologist.

    Science.gov (United States)

    Brown, Danielle L

    2017-05-01

    Qualitative histopathology is the gold standard for routine examination of morphological tissue changes in the regulatory or academic environment. The human eye is exceptional for pattern recognition but often cannot detect small changes in quantity. In cases where detection of subtle quantitative changes is critical, more sensitive methods are required. Two-dimensional histomorphometry can provide additional quantitative information and is quite useful in many cases. However, the data provided may not be representative of the entire tissue and, as such, the method makes several assumptions that are sources of bias. In contrast, stereology is design based rather than assumption based and uses stringent sampling methods to obtain accurate and precise 3-dimensional information using geometrical and statistical principles. Recent advances in technology have made stereology more approachable and practical for the pathologist in both regulatory and academic environments. This review introduces pathologists to the basic principles of stereology and walks the reader through some real-world examples for the application of these principles in the workplace.

  11. Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry

    Energy Technology Data Exchange (ETDEWEB)

    Manios, G.E.; Mazonakis, M.; Damilakis, J. [University of Crete, Department of Medical Physics, Faculty of Medicine, Heraklion, Crete (Greece); Voulgaris, C.; Karantanas, A. [University of Crete, Department of Radiology, Faculty of Medicine, Heraklion, Crete (Greece)

    2016-03-15

    To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with the sampling intensity 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95 % limits of agreement were also acceptable (VAF: -16.5 %, 16.1 %; SAF: -10.8 %, 10.7 %) and the repeatability of stereology was good (VAF: CV = 4.5 %, SAF: CV = 3.2 %). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. (orig.)

  12. Assessment of left ventricular function and mass by MR imaging: a stereological study based on the systematic slice sampling procedure.

    Science.gov (United States)

    Mazonakis, Michalis; Sahin, Bunyamin; Pagonidis, Konstantin; Damilakis, John

    2011-06-01

    The aim of this study was to combine the stereological technique with magnetic resonance (MR) imaging data for the volumetric and functional analysis of the left ventricle (LV). Cardiac MR examinations were performed in 13 consecutive subjects with known or suspected coronary artery disease. The end-diastolic volume (EDV), end-systolic volume, ejection fraction (EF), and mass were estimated by stereology using the entire slice set depicting the LV and systematic sampling intensities of 1/2 and 1/3 that provided samples with every second and third slice, respectively. The repeatability of stereology was evaluated. Stereological assessments were compared with the reference values derived by manually tracing the endocardial and epicardial contours on MR images. Stereological EDV and EF estimations obtained by the 1/3 systematic sampling scheme were significantly different from those by manual delineation (P < .05), whereas no significant differences were found for estimations with the sampling intensity of 1/2 (P > .05). For these stereological approaches, a high correlation (r(2) = 0.80-0.93) and clinically acceptable limits of agreement were found with the reference method. Stereological estimations obtained by both sample sizes presented comparable coefficient of variation values of 2.9-5.8%. The mean time for stereological measurements on the entire slice set was 3.4 ± 0.6 minutes and it was reduced to 2.5 ± 0.5 minutes with the 1/2 systematic sampling scheme. Stereological analysis on systematic samples of MR slices generated by the 1/2 sampling intensity provided efficient and quick assessment of LV volumes, function, and mass. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
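    Once end-diastolic and end-systolic volumes have been estimated (for example by Cavalieri point counting on a systematically sampled subset of short-axis slices), the ejection fraction follows as EF = (EDV - ESV)/EDV. A hedged sketch with invented volumes, plus the 1/2 systematic slice-sampling step expressed in code:

        import random

        # Invented volumes; EF = (EDV - ESV) / EDV.
        edv_ml, esv_ml = 142.0, 61.0
        ef = (edv_ml - esv_ml) / edv_ml
        print(f"EF ~ {100 * ef:.1f} %")

        # Systematic 1/2 slice sampling with a random start (every 2nd slice kept).
        slices = list(range(12))            # indices of slices covering the LV
        start = random.randrange(2)         # random start in {0, 1}
        print("sampled slices:", slices[start::2])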

  13. Nondestructive, stereological estimation of canopy surface area

    DEFF Research Database (Denmark)

    Wulfsohn, Dvora-Laio; Sciortino, Marco; Aaslyng, Jesper M.

    2010-01-01

    We describe a stereological procedure to estimate the total leaf surface area of a plant canopy in vivo, and address the problem of how to predict the variance of the corresponding estimator. The procedure involves three nested systematic uniform random sampling stages: (i) selection of plants from a canopy using the smooth fractionator, (ii) sampling of leaves from the selected plants using the fractionator, and (iii) area estimation of the sampled leaves using point counting. We apply this procedure to estimate the total area of a chrysanthemum (Chrysanthemum morifolium L.) canopy and evaluate both the time required and the precision of the estimator. Furthermore, we compare the precision of point counting for three different grid intensities with that of several standard leaf area measurement techniques. Results showed that the precision of the plant leaf area estimator based on point counting...

  14. Stereological analysis of spatial structures

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård

    The thesis deals with stereological analysis of spatial structures. One area of focus has been to improve the precision of well-known stereological estimators by including information that is available via automatic image analysis. Furthermore, the thesis presents a stochastic model for star-shaped three-dimensional objects using the radial function. It appears that the model is highly flexible in the sense that it can be used to describe an object with an arbitrarily irregular surface. Results on the distribution of well-known local stereological volume estimators are provided.

  15. Practicable methods for histological section thickness measurement in quantitative stereological analyses.

    Science.gov (United States)

    Matenaers, Cyrill; Popper, Bastian; Rieger, Alexandra; Wanke, Rüdiger; Blutke, Andreas

    2018-01-01

    The accuracy of quantitative stereological analysis tools such as the (physical) disector method substantially depends on the precise determination of the thickness of the analyzed histological sections. One conventional method for measurement of histological section thickness is to re-embed the section of interest vertically to its original section plane. The section thickness is then measured in a subsequently prepared histological section of this orthogonally re-embedded sample. However, the orthogonal re-embedding (ORE) technique is quite work- and time-intensive and may produce inaccurate section thickness measurement values due to unintentional, slightly oblique (non-orthogonal) positioning of the re-embedded sample section. Here, an improved ORE method is presented, allowing for determination of the factual section plane angle of the re-embedded section and correction of measured section thickness values for oblique (non-orthogonal) sectioning. For this, the analyzed section is mounted flat on a foil of known thickness (calibration foil) and both the section and the calibration foil are then vertically (re-)embedded. The section angle of the re-embedded section is then calculated from the deviation of the measured section thickness of the calibration foil from its factual thickness, using basic geometry. To find a practicable, fast, and accurate alternative to ORE, the suitability of spectral reflectance (SR) measurement for determination of plastic section thicknesses was evaluated. Using a commercially available optical reflectometer (F20, Filmetrics®, USA), the thicknesses of 0.5 μm thick semi-thin Epon (glycid ether) sections and of 1-3 μm thick plastic sections (glycolmethacrylate/methylmethacrylate, GMA/MMA), as regularly used in physical disector analyses, could be measured precisely within a few seconds. Compared to the section thicknesses determined by ORE, SR measures displayed less than 1% deviation. Our results prove the applicability...
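    The angle correction described above can be written compactly: for a flat layer of true thickness t cut at a tilt angle α from the orthogonal plane, the apparent width in the re-embedded section is w = t / cos(α). Measuring the calibration foil therefore gives cos(α) = t_foil / w_foil, and the corrected section thickness follows as w_section · cos(α). The sketch below is one way of expressing this under that assumption; all numbers are invented.

        import math

        # Oblique-sectioning correction using a calibration foil of known thickness.
        # Apparent width of a flat layer cut at tilt a: w = t / cos(a).
        t_foil_true = 25.0       # known foil thickness (um)
        w_foil_meas = 26.1       # measured apparent foil width (um)
        w_section_meas = 1.05    # measured apparent section width (um)

        cos_a = min(t_foil_true / w_foil_meas, 1.0)
        tilt_deg = math.degrees(math.acos(cos_a))
        t_section = w_section_meas * cos_a
        print(f"tilt ~ {tilt_deg:.1f} deg, corrected thickness ~ {t_section:.3f} um")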

  16. Sampling for stereology in lungs

    Directory of Open Access Journals (Sweden)

    J. R. Nyengaard

    2006-12-01

    Full Text Available The present article reviews the relevant stereological estimators for obtaining reliable quantitative structural data from the lungs. Stereological sampling achieves reliable, quantitative information either about the whole lung or complete lobes, whilst minimising the workload. Studies have used systematic random sampling, which has fixed and constant sampling probabilities on all blocks, sections and fields of view. For an estimation of total lung or lobe volume, the Cavalieri principle can be used, but it is not useful in estimating individual cell volume due to various effects from over- or underprojection. If the number of certain structures is required, two methods can be used: the disector and the fractionator. The disector method is a three-dimensional stereological probe for sampling objects according to their number. However, it may be affected by tissue deformation and, therefore, the fractionator method is often the preferred sampling principle. In this method, a known and predetermined fraction of an object is sampled in one or more steps, with the final step estimating the number. Both methods can be performed in a physical and optical manner, therefore enabling cells and larger lung structure numbers (e.g. number of alveoli) to be estimated. Some estimators also require randomisation of orientation, so that all directions have an equal chance of being chosen. Using such isotropic sections, surface area, length, and diameter can be estimated on a Cavalieri set of sections. Stereology can also illustrate the potential for transport between two compartments by analysing the barrier width. Estimating the individual volume of cells can be achieved by local stereology using a two-step procedure that first samples lung cells using the disector and then introduces individual volume estimation of the sampled cells. The coefficient of error of most unbiased stereological estimators is a combination of variance from blocks, sections, fields...
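    The fractionator logic mentioned above amounts to scaling a final count by the inverse of the overall sampling fraction. A minimal sketch with invented sampling fractions (not the values of any particular study):

        # Fractionator estimate: N ~ count / (product of sampling fractions).
        # All fractions and counts are invented for illustration.
        block_fraction = 1 / 4     # fraction of tissue blocks sampled
        section_fraction = 1 / 10  # fraction of sections sampled
        area_fraction = 1 / 50     # fraction of section area covered by counting frames
        counted = 212              # objects counted in the final sampling step

        overall_fraction = block_fraction * section_fraction * area_fraction
        N_total = counted / overall_fraction
        print(f"Estimated total number ~ {N_total:,.0f}")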

  17. Confocal stereology: an efficient tool for measurement of microscopic structures

    Czech Academy of Sciences Publication Activity Database

    Kubínová, Lucie; Janáček, Jiří

    2015-01-01

    Vol. 360, No. 1 (2015), pp. 13-28, ISSN 0302-766X. R&D Projects: GA MŠk(CZ) LH13028. Institutional support: RVO:67985823. Keywords: 3-D images * confocal microscopy * geometrical characteristics * spatial probes * stereology. Subject RIV: EA - Cell Biology. Impact factor: 2.948, year: 2015

  18. Application of stereology to dermatological research

    DEFF Research Database (Denmark)

    Kamp, Søren; Jemec, Gregor Borut Ernst; Kemp, Kåre

    2009-01-01

    Stereology is a set of mathematical and statistical tools to estimate three-dimensional (3-D) characteristics of objects from regular two-dimensional (2-D) sections. In medicine and biology, it can be used to estimate features such as cell volume, cell membrane surface area, total length of blood vessels per volume of tissue and total number of cells. The unbiased quantification of these 3-D features allows for a better understanding of morphology in vivo compared with 2-D methods. This review provides an introduction to the field of stereology with specific emphasis on the application of stereology...

  19. Assessing the Effects of Fibrosis on Lung Function by Light Microscopy-Coupled Stereology

    DEFF Research Database (Denmark)

    Pilecki, Bartosz; Sørensen, Grith Lykke

    2017-01-01

    Pulmonary diseases such as fibrosis are characterized by structural abnormalities that lead to impairment of proper lung function. Stereological analysis of serial tissue sections allows detection and quantitation of subtle changes in lung architecture. Here, we describe a stereology-based method...

  20. Stereological estimation of surface area from digital images

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Kiderlen, Markus

    2010-01-01

    A sampling design of local stereology is combined with a method from digital stereology to yield a novel estimator of surface area based on counts of configurations observed in a digitization of an isotropic 2-dimensional slice with thickness s. As a tool, a result of the second author and J. Rataj on infinitesimal increase of volumes of morphological transforms is refined and used. The proposed surface area estimator is asymptotically unbiased in the case of sets contained in the ball centred at the origin with radius s and in the case of balls centred at the origin with unknown radius...

  1. Stereological, functional and molecular studies of development and disease : a collection of published works 1981 to 2013

    OpenAIRE

    Bertram, John F.

    2016-01-01

    The unifying theme throughout this Doctorate of Science thesis is the development, refinement and utilisation of stereological techniques to study tissue structures in development, and in adult health and disease. Stereology is the discipline based on geometric probability theory that enables us to quantify structures in three-dimensional space. Stereological techniques are often applied in material science as well as biomedical science. When applied to histology and pathology, stereology can...

  2. Prediction of prognosis in patients with epidural hematoma by a new stereological method

    International Nuclear Information System (INIS)

    Kalkan, E.; Cander, B.; Gul, M.; Girisgin, S.; Karabagli, H.; Sahin, B.

    2007-01-01

    Epidural hematoma (EH) is a serious clinical event observed in 2% of head trauma patients. Studies regarding the effects of epidural hematoma volume (EHV) on prognosis are not sufficient. In this study, we applied the volume fraction approach of the stereological method to estimate the hematoma to brain volume fraction (HBVF), and investigated the relation between the HBVF and prognosis. Fifty-nine EH patients (46 male and 13 female subjects, with an average age of 21 years) admitted to the emergency clinic were included. The HBVF was estimated on printed films of cranial computed tomography scans. For this purpose, common point counting grids were superimposed over the scan frames. According to the clinical results, patients were divided into three groups: complete recovery (43), disability (8) and exitus (8). The HBVF was compared with the clinical results. The HBVF was 4.6% in patients with complete recovery, 8.1% in patients with disability, and 7.6% in exitus patients. The HBVF values were lowest in the recovery patients, and the difference between the recovery group and the other two groups was statistically significant (p=0.007). However, there was no statistically significant difference in HBVF between disability and exitus patients (p>0.05). In conclusion, the HBVF can be an important tool to determine prognosis, and it can be measured using the volume fraction approach of stereological methods as developed in the present study. (author)
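    The volume-fraction approach itself reduces to a ratio of point counts made with the same grid on the same CT frames, HBVF ≈ ΣP(hematoma)/ΣP(brain). A one-line sketch with invented counts:

        # Volume fraction from point counts (counts invented for illustration):
        points_on_hematoma = 46
        points_on_brain = 998
        hbvf = points_on_hematoma / points_on_brain
        print(f"Hematoma-to-brain volume fraction ~ {100 * hbvf:.1f} %")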

  3. A review of state-of-the-art stereology for better quantitative 3D morphology in cardiac research.

    Science.gov (United States)

    Mühlfeld, Christian; Nyengaard, Jens Randel; Mayhew, Terry M

    2010-01-01

    The aim of stereological methods in biomedical research is to obtain quantitative information about three-dimensional (3D) features of tissues, cells, or organelles from two-dimensional physical or optical sections. With immunogold labeling, stereology can even be used for the quantitative analysis of the distribution of molecules within tissues and cells. Nowadays, a large number of design-based stereological methods offer an efficient quantitative approach to intriguing questions in cardiac research, such as "Is there a significant loss of cardiomyocytes during progression from ventricular hypertrophy to heart failure?" or "Does a specific treatment reduce the degree of fibrosis in the heart?" Nevertheless, the use of stereological methods in cardiac research is rare. The present review article demonstrates how some of the potential pitfalls in quantitative microscopy may be avoided. To this end, we outline the concepts of design-based stereology and illustrate their practical applications to a wide range of biological questions in cardiac research. We hope that the present article will stimulate researchers in cardiac research to incorporate design-based stereology into their study designs, thus promoting an unbiased quantitative 3D microscopy.

  4. STEREOLOGICAL ESTIMATION OF SURFACE AREA FROM DIGITAL IMAGES

    Directory of Open Access Journals (Sweden)

    Johanna Ziegel

    2011-05-01

    Full Text Available A sampling design of local stereology is combined with a method from digital stereology to yield a novel estimator of surface area based on counts of configurations observed in a digitization of an isotropic 2-dimensional slice with thickness s. As a tool, a result of the second author and J. Rataj on infinitesimal increase of volumes of morphological transforms is refined and used. The proposed surface area estimator is asymptotically unbiased in the case of sets contained in the ball centred at the origin with radius s and in the case of balls centred at the origin with unknown radius. For general shapes, bounds for the asymptotic expected relative worst case error are given. A simulation example is discussed for surface area estimation based on 2×2×2-configurations.

  5. Stereology application in the investigation of physical and mechanical properties of porous materials

    International Nuclear Information System (INIS)

    Cytermann, Richard.

    1979-04-01

    The sintering of carbonyl nickel powders has been studied through stereology (quantitative microscopy) associated with different physical and mechanical measurements. This study demonstrated that a set of stereological parameters, such as porosity, grain size, mean pore volume ..., was necessary to characterize porous parts of the same porosity obtained through different processing routes. On the one hand, stereology made it possible to elucidate the influence of powder shape and of the rate of pressure increase on the compacting process. On the other hand, the study of physical and mechanical properties in relation to the microstructure led to a distinction between: properties such as the elasticity modulus, which are independent of compacting pressure, sintering temperature and powder shape, and whose evolution has been characterized through the contiguity coefficient; and properties such as tensile strength, which depend on the sintering parameters and whose characterization required the simultaneous measurement of porosity, mean pore volume, shape factor and grain size [fr

  6. Computer-assisted stereology and automated image analysis for quantification of tumor infiltrating lymphocytes in colon cancer.

    Science.gov (United States)

    Eriksen, Ann C; Andersen, Johnnie B; Kristensson, Martin; dePont Christensen, René; Hansen, Torben F; Kjær-Frifeldt, Sanne; Sørensen, Flemming B

    2017-08-29

    Precise prognostic and predictive variables allowing improved post-operative treatment stratification are missing in patients treated for stage II colon cancer (CC). Investigation of tumor infiltrating lymphocytes (TILs) may be rewarding, but the lack of a standardized analytic technique is a major concern. Manual stereological counting is considered the gold standard, but digital pathology with image analysis is preferred due to time efficiency. The purpose of this study was to compare manual stereological estimates of TILs with automatic counts obtained by image analysis, and at the same time investigate the heterogeneity of TILs. From each of 43 patients treated for stage II CC in 2002, three paraffin-embedded, tumor-containing tissue blocks were selected, one of them representing the deepest invasive tumor front. Serial sections from each of the 129 blocks were immunohistochemically stained for CD3 and CD8, and the slides were scanned. Stereological estimates of the numerical density and area fraction of TILs were obtained using the computer-assisted newCAST stereology system. For the image analysis approach an app-based algorithm was developed using Visiopharm Integrator System software. For both methods the tumor areas of interest (invasive front and central area) were manually delineated by the observer. Based on all sections, the Spearman's correlation coefficients for density estimates varied from 0.9457 to 0.9638 (p ...). With respect to heterogeneity, intra-class correlation coefficients (ICC) for CD3+ TILs varied from 0.615 to 0.746 in the central area, and from 0.686 to 0.746 in the invasive area. ICC for CD8+ TILs varied from 0.724 to 0.775 in the central area, and from 0.746 to 0.765 in the invasive area. Exact, objective and time-efficient estimates of numerical densities and area fractions of CD3+ and CD8+ TILs in stage II colon cancer can be obtained by image analysis and are highly correlated with the corresponding estimates obtained by the gold standard based on stereology...

  7. Introduction into integral geometry and stereology

    DEFF Research Database (Denmark)

    Kiderlen, Markus

    This text is the extended version of two talks held at the Summer Academy Stochastic Geometry, Spatial Statistics and Random Fields in the Soellerhaus, Germany, in September 2009. It forms (with slight modifications) a chapter of the Springer lecture notes Lectures on Stochastic Geometry, Spatial Statistics and Random Fields and is a self-contained introduction into integral geometry and its applications in stereology. The most important integral geometric tools for stereological applications are kinematic formulas and results of Blaschke-Petkantschin type. Therefore, Crofton's formula...

  8. UTILIZATION OF STEREOLOGY FOR QUANTITATIVE ANALYSIS OF PLASTIC DEFORMATION OF FORMING PIECES

    Directory of Open Access Journals (Sweden)

    Maroš Martinkovič

    2012-01-01

    Full Text Available Mechanical working determines the final properties of formed pieces, which are affected by the conditions of the production technology. Utilization of stereology allows detailed analysis of the three-dimensional plastically deformed material structure produced by different forming technologies, e.g. forging, extruding, upsetting, metal spinning, drawing, etc. The microstructure of cold-drawn wires was analyzed. Grain boundary orientation was measured on parallel sections of wires with different degrees of deformation, and direct-axis plastic deformation was evaluated in the bulk-formed part. The strain of the probes was obtained stereologically from their sections by measuring the degree of grain boundary orientation, which was then converted to deformation using a model relating the degree of grain boundary orientation to deformation.

  9. Microstructure characterization via stereological relations — A shortcut for beginners

    Energy Technology Data Exchange (ETDEWEB)

    Pabst, Willi, E-mail: pabstw@vscht.cz; Gregorová, Eva; Uhlířová, Tereza

    2015-07-15

    Stereological relations that can be routinely applied for the quantitative characterization of microstructures of heterogeneous single- and two-phase materials via global microstructural descriptors are reviewed. It is shown that in the case of dense, single-phase polycrystalline materials (e.g., transparent yttrium aluminum garnet ceramics) two quantities have to be determined, the interface density (or, equivalently, the mean chord length of the grains) and the mean curvature integral density (or, equivalently, the Jeffries grain size), while for two-phase materials (e.g., highly porous, cellular alumina ceramics), one additional quantity, the volume fraction (porosity), is required. The Delesse–Rosiwal law is recalled and size measures are discussed. It is shown that the Jeffries grain size is based on the triple junction line length density, while the mean chord length of grains is based on the interface density (grain boundary area density). In contrast to widespread belief, however, these two size measures are not alternative, but independent (and thus complementary), measures of grain size. Concomitant with this fact, a clear distinction between linear and planar grain size numbers is proposed. Finally, based on our concept of phase-specific quantities, it is shown that under certain conditions it is possible to define a Jeffries size also for two-phase materials and that the ratio of the mean chord length and the Jeffries size has to be considered as an invariant number for a certain type of microstructure, i.e., a characteristic value that is independent of the absolute size of the microstructural features (e.g., grains, inclusions or pores). - Highlights: • Stereology-based image analysis is reviewed, including error considerations. • Recipes are provided for measuring global metric microstructural descriptors. • Size measures are based on interface density and mean curvature integral density. • Phase-specific quantities and a generalized
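    For orientation, a few of the standard global stereological relations that this kind of characterization relies on are recalled below in conventional notation. They are stated here from general stereology as a hedged aide-memoire, not quoted from the paper itself.

        \[ V_V = A_A = P_P \qquad \text{(Delesse--Rosiwal: volume fraction from area or point fractions)} \]
        \[ S_V = 2\,P_L \qquad \text{(interface area density from boundary intersections per unit test-line length)} \]
        \[ \bar{\ell} = \frac{1}{P_L} = \frac{2}{S_V} \qquad \text{(mean chord length of grains in a dense, space-filling polycrystal)} \]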

  10. Practical application of stereological methods in experimental kidney animal models.

    Science.gov (United States)

    Fernández García, María Teresa; Núñez Martínez, Paula; García de la Fuente, Vanessa; Sánchez Pitiot, Marta; Muñiz Salgueiro, María Del Carmen; Perillán Méndez, Carmen; Argüelles Luis, Juan; Astudillo González, Aurora

    The kidneys are vital organs responsible for excretion, fluid and electrolyte balance and hormone production. The nephrons are the kidney's functional and structural units. The number, size and distribution of the nephron components contain relevant information on renal function. Stereology is a branch of morphometry that applies mathematical principles to obtain three-dimensional information from serial, parallel and equidistant two-dimensional microscopic sections. Because of the complexity of stereological studies and the lack of scientific literature on the subject, the aim of this paper is to clearly explain, through animal models, the basic concepts of stereology and how to calculate the main kidney stereological parameters that can be applied in future experimental studies. Copyright © 2016 Sociedad Española de Nefrología. Published by Elsevier España, S.L.U. All rights reserved.
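    As an illustration of the kind of calculation such parameters involve (a hedged sketch with invented numbers, not the protocol of the paper), the physical disector gives a numerical density N_V = ΣQ⁻ / (n · a_frame · h), which can then be multiplied by a Cavalieri estimate of the reference volume to obtain a total number.

        # Physical-disector sketch (all values invented for illustration):
        # N_V = Q_minus / (n_disectors * frame_area * disector_height)
        Q_minus = 120          # objects counted in reference but not look-up sections
        n_disectors = 250      # number of disector frames evaluated
        a_frame_mm2 = 0.0121   # unbiased counting frame area (mm^2)
        h_mm = 0.005           # disector height = distance between paired sections (mm)
        V_ref_mm3 = 1200.0     # reference (organ) volume from a Cavalieri estimate (mm^3)

        Nv = Q_minus / (n_disectors * a_frame_mm2 * h_mm)
        print(f"N_V ~ {Nv:.0f} per mm^3, total ~ {Nv * V_ref_mm3:,.0f} objects")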

  11. STEREOLOGY FROM ONE OF ALL THE POSSIBLE ANGLES

    Directory of Open Access Journals (Sweden)

    Leszek Wojnar

    2011-05-01

    Full Text Available The relation between image analysis and stereology is discussed in terms of different fields of application, especially materials science, biology and medicine. Some long-term tendencies observed, as well as possible future trends, are discussed. The need for a wider use of image analysis techniques, including mathematical morphology, in any field of science is demonstrated. Simultaneously, the significance of a stereological background in automatic quantification of the investigated structures is confirmed.

  12. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted by the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m(3). Measurement-based predictions yielded an average exposure of 78 Bq/m(3). This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
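    The measurement-based aggregation described here is essentially a population-weighted mean of municipality-level radon values (after the floor-distribution correction). A toy sketch with invented municipalities and values:

        # Population-weighted mean of municipality radon levels (values invented):
        municipalities = [
            ("A", 92.0, 12000),   # (name, mean radon in Bq/m3, population)
            ("B", 61.0, 45000),
            ("C", 150.0, 3500),
        ]
        total_pop = sum(p for _, _, p in municipalities)
        weighted_mean = sum(r * p for _, r, p in municipalities) / total_pop
        print(f"Population-weighted mean radon ~ {weighted_mean:.1f} Bq/m3")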

  13. The efficiency of systematic sampling in stereology-reconsidered

    DEFF Research Database (Denmark)

    Gundersen, Hans Jørgen Gottlieb; Jensen, Eva B. Vedel; Kieu, K

    1999-01-01

    In the present paper, we summarize and further develop recent research in the estimation of the variance of stereological estimators based on systematic sampling. In particular, it is emphasized that the relevant estimation procedure depends on the sampling density. The validity of the variance estimation is examined in a collection of data sets obtained by systematic sampling. Practical recommendations are also provided in a separate section.
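    Why systematic sampling is so efficient for smooth measurement functions can be illustrated numerically (this Monte-Carlo comparison illustrates the general phenomenon; it is not the variance estimator developed in the paper): systematic samples of a smooth sectional-area profile scatter far less than the same number of independent uniform samples.

        import math, random

        def area(x):                      # a smooth, bell-shaped "area function"
            return math.exp(-((x - 0.5) / 0.2) ** 2)

        n, reps, period = 10, 2000, 1.0 / 10
        random.seed(0)
        sys_tot, iid_tot = [], []
        for _ in range(reps):
            u = random.uniform(0.0, period)                       # random start
            sys_tot.append(period * sum(area(u + k * period) for k in range(n)))
            iid_tot.append(sum(area(random.random()) for _ in range(n)) / n)

        def sd(v):
            m = sum(v) / len(v)
            return (sum((x - m) ** 2 for x in v) / (len(v) - 1)) ** 0.5

        print(f"SD systematic: {sd(sys_tot):.5f}   SD independent: {sd(iid_tot):.5f}")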

  14. Stereological study of postnatal development in the mouse utricular macula

    DEFF Research Database (Denmark)

    Kirkegaard, Mette; Nyengaard, Jens Randel

    2005-01-01

    This study describes the morphometric changes taking place in the utricular macula of mice with ages in geometric progression from 1 to 512 days after birth. By using design-based stereological methods, the total volume and surface area of the sensory epithelium as well as the total number of the hair cells...

  15. Stereological estimation of nuclear volume in benign and atypical meningiomas

    DEFF Research Database (Denmark)

    Madsen, C; Schrøder, H D

    1993-01-01

    A stereological estimation of nuclear volume in benign and atypical meningiomas was made. The aim was to investigate whether this method could discriminate between these two meningeal neoplasms. The difference was significant and it was moreover seen that there was no overlap between the two groups. The results demonstrate that atypical meningiomas can be distinguished from benign meningiomas by an objective stereological estimation of nuclear volume.

  16. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    Science.gov (United States)

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random sampling [corrected] start point outside the structure of interest, and sampling relevant objects at [corrected] sites that are placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative to accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
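    A software-based alternative of this kind essentially needs to generate the systematic-random grid of sampling positions itself. A minimal sketch of that step (the region-of-interest size and step lengths are invented; this is not the authors' software):

        import random

        # Systematic-random sampling sites over a rectangular region of interest.
        random.seed(42)
        roi_w_um, roi_h_um = 6000.0, 4500.0     # region of interest (um), invented
        step_x_um, step_y_um = 800.0, 800.0     # predetermined sampling intervals (um)

        x0 = random.uniform(0, step_x_um)       # random start within the first interval
        y0 = random.uniform(0, step_y_um)
        sites = [(round(x0 + i * step_x_um), round(y0 + j * step_y_um))
                 for j in range(int((roi_h_um - y0) // step_y_um) + 1)
                 for i in range(int((roi_w_um - x0) // step_x_um) + 1)]
        print(f"{len(sites)} sampling sites; first three: {sites[:3]}")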

  17. Tensor-based morphometry and stereology reveal brain pathology in the complexin1 knockout mouse.

    Science.gov (United States)

    Kielar, Catherine; Sawiak, Stephen J; Navarro Negredo, Paloma; Tse, Desmond H Y; Morton, A Jennifer

    2012-01-01

    Complexins (Cplxs) are small, soluble, regulatory proteins that bind reversibly to the SNARE complex and modulate synaptic vesicle release. Cplx1 knockout mice (Cplx1(-/-)) have the earliest known onset of ataxia seen in a mouse model, although hitherto no histopathology has been described in these mice. Nevertheless, the profound neurological phenotype displayed by Cplx1(-/-) mutants suggests that significant functional abnormalities must be present in these animals. In this study, MRI was used to automatically detect regions where structural differences were not obvious when using a traditional histological approach. Tensor-based morphometry of Cplx1(-/-) mouse brains showed selective volume loss from the thalamus and cerebellum. Stereological analysis of Cplx1(-/-) and Cplx1(+/+) mice brain slices confirmed the volume loss in the thalamus as well as loss in some lobules of the cerebellum. Finally, stereology was used to show that there was loss of cerebellar granule cells in Cplx1(-/-) mice when compared to Cplx1(+/+) animals. Our study is the first to describe pathological changes in Cplx1(-/-) mouse brain. We suggest that the ataxia in Cplx1(-/-) mice is likely to be due to pathological changes in both cerebellum and thalamus. Reduced levels of Cplx proteins have been reported in brains of patients with neurodegenerative diseases. Therefore, understanding the effects of Cplx depletion in brains from Cplx1(-/-) mice may also shed light on the mechanisms underlying pathophysiology in disorders in which loss of Cplx1 occurs.

  18. Tensor-based morphometry and stereology reveal brain pathology in the complexin1 knockout mouse.

    Directory of Open Access Journals (Sweden)

    Catherine Kielar

    Full Text Available Complexins (Cplxs) are small, soluble, regulatory proteins that bind reversibly to the SNARE complex and modulate synaptic vesicle release. Cplx1 knockout mice (Cplx1(-/-)) have the earliest known onset of ataxia seen in a mouse model, although hitherto no histopathology has been described in these mice. Nevertheless, the profound neurological phenotype displayed by Cplx1(-/-) mutants suggests that significant functional abnormalities must be present in these animals. In this study, MRI was used to automatically detect regions where structural differences were not obvious when using a traditional histological approach. Tensor-based morphometry of Cplx1(-/-) mouse brains showed selective volume loss from the thalamus and cerebellum. Stereological analysis of Cplx1(-/-) and Cplx1(+/+) mice brain slices confirmed the volume loss in the thalamus as well as loss in some lobules of the cerebellum. Finally, stereology was used to show that there was loss of cerebellar granule cells in Cplx1(-/-) mice when compared to Cplx1(+/+) animals. Our study is the first to describe pathological changes in Cplx1(-/-) mouse brain. We suggest that the ataxia in Cplx1(-/-) mice is likely to be due to pathological changes in both cerebellum and thalamus. Reduced levels of Cplx proteins have been reported in brains of patients with neurodegenerative diseases. Therefore, understanding the effects of Cplx depletion in brains from Cplx1(-/-) mice may also shed light on the mechanisms underlying pathophysiology in disorders in which loss of Cplx1 occurs.

  19. Comparison of automated brain volumetry methods with stereology in children aged 2 to 3 years

    Energy Technology Data Exchange (ETDEWEB)

    Mayer, Kristina N. [University Children's Hospital of Zurich, Center for MR Research, Zurich (Switzerland); University Children's Hospital, Pediatric Cardiology, Zurich (Switzerland); Latal, Beatrice [University Children's Hospital, Child Development Center, Zurich (Switzerland); University Children's Hospital, Children's Research Center, Zurich (Switzerland); Knirsch, Walter [University Children's Hospital, Pediatric Cardiology, Zurich (Switzerland); University Children's Hospital, Children's Research Center, Zurich (Switzerland); Scheer, Ianina [University Children's Hospital, Department for Diagnostic Neuroradiology, Zurich (Switzerland); Rhein, Michael von [University Children's Hospital, Child Development Center, Zurich (Switzerland); Reich, Bettina; Bauer, Juergen; Gummel, Kerstin [Justus-Liebig University, Pediatric Heart Center, University Hospital Giessen, Giessen (Germany); Roberts, Neil [University of Edinburgh, Clinical Research and Imaging Centre (CRIC), The Queens Medical Research Institute (QMRI), Edinburgh (United Kingdom); O'Gorman Tuura, Ruth [University Children's Hospital of Zurich, Center for MR Research, Zurich (Switzerland); University Children's Hospital, Children's Research Center, Zurich (Switzerland)

    2016-09-15

    The accurate and precise measurement of brain volumes in young children is important for early identification of children with reduced brain volumes and an increased risk for neurodevelopmental impairment. Brain volumes can be measured from cerebral MRI (cMRI), but most neuroimaging tools used for cerebral segmentation and volumetry were developed for use in adults and have not been validated in infants or young children. Here, we investigate the feasibility and accuracy of three automated software methods (i.e., SPM, FSL, and FreeSurfer) for brain volumetry in young children and compare the measures with corresponding volumes obtained using the Cavalieri method of modern design stereology. Cerebral MRI data were collected from 21 children with a complex congenital heart disease (CHD) before Fontan procedure, at a median age of 27 months (range 20.9-42.4 months). Data were segmented with SPM, FSL, and FreeSurfer, and total intracranial volume (ICV) and total brain volume (TBV) were compared with corresponding measures obtained using the Cavalieri method. Agreement between the estimated brain volumes (ICV and TBV) relative to the gold standard stereological volumes was strongest for FreeSurfer (p < 0.001) and moderate for SPM segment (ICV p = 0.05; TBV p = 0.006). No significant association was evident between ICV and TBV obtained using SPM NewSegment and FSL FAST and the corresponding stereological volumes. FreeSurfer provides an accurate method for measuring brain volumes in young children, even in the presence of structural brain abnormalities. (orig.)

  20. Comparison of automated brain volumetry methods with stereology in children aged 2 to 3 years

    International Nuclear Information System (INIS)

    Mayer, Kristina N.; Latal, Beatrice; Knirsch, Walter; Scheer, Ianina; Rhein, Michael von; Reich, Bettina; Bauer, Juergen; Gummel, Kerstin; Roberts, Neil; O'Gorman Tuura, Ruth

    2016-01-01

    The accurate and precise measurement of brain volumes in young children is important for early identification of children with reduced brain volumes and an increased risk for neurodevelopmental impairment. Brain volumes can be measured from cerebral MRI (cMRI), but most neuroimaging tools used for cerebral segmentation and volumetry were developed for use in adults and have not been validated in infants or young children. Here, we investigate the feasibility and accuracy of three automated software methods (i.e., SPM, FSL, and FreeSurfer) for brain volumetry in young children and compare the measures with corresponding volumes obtained using the Cavalieri method of modern design stereology. Cerebral MRI data were collected from 21 children with a complex congenital heart disease (CHD) before Fontan procedure, at a median age of 27 months (range 20.9-42.4 months). Data were segmented with SPM, FSL, and FreeSurfer, and total intracranial volume (ICV) and total brain volume (TBV) were compared with corresponding measures obtained using the Cavalieri method. Agreement between the estimated brain volumes (ICV and TBV) relative to the gold standard stereological volumes was strongest for FreeSurfer (p < 0.001) and moderate for SPM segment (ICV p = 0.05; TBV p = 0.006). No significant association was evident between ICV and TBV obtained using SPM NewSegment and FSL FAST and the corresponding stereological volumes. FreeSurfer provides an accurate method for measuring brain volumes in young children, even in the presence of structural brain abnormalities. (orig.)

  1. EARLY HISTORY OF GEOMETRIC PROBABILITY AND STEREOLOGY

    Directory of Open Access Journals (Sweden)

    Magdalena Hykšová

    2012-03-01

    Full Text Available The paper provides an account of the history of geometric probability and stereology from the time of Newton to the early 20th century. It depicts development along two parallel lines: on the one hand, the theory of geometric probability was formed with little attention paid to applications other than those concerning spatial games of chance. On the other hand, practical rules for the estimation of area or volume fraction and other characteristics, easily deducible from geometric probability theory, were proposed without knowledge of this branch. Special attention is paid to the paper of J.-É. Barbier published in 1860, which contained the fundamental stereological formulas but remained almost unnoticed by both mathematicians and practitioners.

  2. Quantitative analysis of the renal aging in rats. Stereological study

    OpenAIRE

    Melchioretto, Eduardo Felippe; Zeni, Marcelo; Veronez, Djanira Aparecida da Luz; Martins Filho, Eduardo Lopes; Fraga, Rogério de

    2016-01-01

    ABSTRACT PURPOSE: To evaluate renal function and renal histological alterations through stereology and morphometrics in rats submitted to the natural process of aging. METHODS: Seventy-two Wistar rats were divided into six groups. Each group was sacrificed at a different age: 3, 6, 9, 12, 18 and 24 months. Right nephrectomy was performed, followed by stereological and morphometric analysis of the renal tissue (renal volume and weight, volume density (Vv[glom]) and numerical density (Nv[glo...

  3. Stereological measures of trabecular bone structure: comparison of 3D micro computed tomography with 2D histological sections in human proximal tibial bone biopsies

    DEFF Research Database (Denmark)

    Thomsen, Jesper Skovhus; Laib, A.; Koller, B.

    2005-01-01

    Stereology applied on histological sections is the 'gold standard' for obtaining quantitative information on cancellous bone structure. Recent advances in micro computed tomography (microCT) have made it possible to acquire three-dimensional (3D) data non-destructively. However, before the 3D methods can be used as a substitute for the current 'gold standard' they have to be verified against the existing standard. The aim of this study was to compare bone structural measures obtained from 3D microCT data sets with those obtained by stereology performed on conventional histological sections of bone biopsies from the human proximal tibial metaphysis. The biopsies were embedded in methylmethacrylate before microCT scanning in a Scanco microCT 40 scanner at a resolution of 20 x 20 x 20 microm3, and the 3D data sets were analysed with a computer program. After microCT scanning, 16 sections were cut from the central 2 mm of each biopsy...

  4. A handheld support system to facilitate stereological measurements and mapping of branching structures

    DEFF Research Database (Denmark)

    Gardi, J.E.; Wulfsohn, Dvora-Laiô; Nyengaard, J.R.

    2007-01-01

    'BranchSampler' is a system for computer-assisted manual stereology written for handheld devices running Windows CE. The system has been designed specifically to streamline data collection and optimize sampling of tree-like branching structures, with particular aims of reducing user errors, saving...... specifications, software and Graphical User Interface (GUI) development, functionality and application of the handheld system using four examples: (1) sampling monkey lung bronchioles for estimation of diameter and wall thickness; (2) sampling rat kidney for estimating number of arteries and arterioles......

  5. Observational attachment theory-based parenting measures predict children's attachment narratives independently from social learning theory-based measures.

    Science.gov (United States)

    Matias, Carla; O'Connor, Thomas G; Futh, Annabel; Scott, Stephen

    2014-01-01

    Conceptually and methodologically distinct models exist for assessing quality of parent-child relationships, but few studies contrast competing models or assess their overlap in predicting developmental outcomes. Using observational methodology, the current study examined the distinctiveness of attachment theory-based and social learning theory-based measures of parenting in predicting two key measures of child adjustment: security of attachment narratives and social acceptance in peer nominations. A total of 113 5-6-year-old children from ethnically diverse families participated. Parent-child relationships were rated using standard paradigms. Measures derived from attachment theory included sensitive responding and mutuality; measures derived from social learning theory included positive attending, directives, and criticism. Child outcomes were independently-rated attachment narrative representations and peer nominations. Results indicated that Attachment theory-based and Social Learning theory-based measures were modestly correlated; nonetheless, parent-child mutuality predicted secure child attachment narratives independently of social learning theory-based measures; in contrast, criticism predicted peer-nominated fighting independently of attachment theory-based measures. In young children, there is some evidence that attachment theory-based measures may be particularly predictive of attachment narratives; however, no single model of measuring parent-child relationships is likely to best predict multiple developmental outcomes. Assessment in research and applied settings may benefit from integration of different theoretical and methodological paradigms.

  6. STEREOLOGICAL ANALYSIS OF SHAPE

    Directory of Open Access Journals (Sweden)

    Asger Hobolth

    2011-05-01

    Full Text Available This paper concerns the problem of making stereological inference about the shape variability in a population of spatial particles. Under rotational invariance the shape variability can be estimated from central planar sections through the particles. A simple, but flexible, parametric model for rotation invariant spatial particles is suggested. It is shown how the parameters of the model can be estimated from observations on central sections. The corresponding model for planar particles is also discussed in some detail.

  7. Digital immunohistochemistry wizard: image analysis-assisted stereology tool to produce reference data set for calibration and quality control.

    Science.gov (United States)

    Plancoulaine, Benoît; Laurinaviciene, Aida; Meskauskas, Raimundas; Baltrusaityte, Indra; Besusparis, Justinas; Herlin, Paulette; Laurinavicius, Arvydas

    2014-01-01

    Digital image analysis (DIA) enables better reproducibility of immunohistochemistry (IHC) studies. Nevertheless, accuracy of the DIA methods needs to be ensured, demanding production of reference data sets. We have reported on methodology to calibrate DIA for Ki67 IHC in breast cancer tissue based on reference data obtained by stereology grid count. To produce the reference data more efficiently, we propose a digital IHC wizard generating initial cell marks to be verified by experts. Digital images of proliferation marker Ki67 IHC from 158 patients (one tissue microarray spot per patient) with an invasive ductal carcinoma of the breast were used. Manual data (mD) were obtained by marking Ki67-positive and negative tumour cells, using a stereological method for 2D object enumeration. DIA was used as an initial step in stereology grid count to generate the digital data (dD) marks by Aperio Genie and Nuclear algorithms. The dD were collected into XML files from the DIA markup images and overlaid on the original spots along with the stereology grid. The expert correction of the dD marks resulted in corrected data (cD). The percentages of Ki67-positive tumour cells per spot in the mD, dD, and cD sets were compared by single linear regression analysis. Efficiency of cD production was estimated based on manual editing effort. The percentage of Ki67-positive tumour cells was in very good agreement in the mD, dD, and cD sets: regression of cD from dD (R2=0.92) reflects the impact of the expert editing the dD as well as accuracy of the DIA used; regression of the cD from the mD (R2=0.94) represents the consistency of the DIA-assisted ground truth (cD) with the manual procedure. Nevertheless, the accuracy of detection of individual tumour cells was much lower: on average, 18 and 219 marks per spot were edited due to the Genie and Nuclear algorithm errors, respectively. The DIA-assisted cD production in our experiment saved approximately 2/3 of manual marking. Digital IHC wizard
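    The agreement analysis described above is a single linear regression of per-spot percentages; a hedged sketch of that comparison is given below (the arrays are invented placeholders, not the study's data):

        # Compare manual (mD) and expert-corrected (cD) Ki67 percentages per spot
        # by simple linear regression; values are hypothetical.
        from scipy.stats import linregress

        mD = [12.0, 35.5, 48.0, 5.2, 60.1, 22.3]   # manual % Ki67-positive per spot
        cD = [13.1, 34.0, 50.2, 6.0, 58.7, 21.8]   # corrected DIA % per spot

        fit = linregress(mD, cD)
        print(f"slope={fit.slope:.2f}  intercept={fit.intercept:.2f}  R^2={fit.rvalue**2:.2f}")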

  8. Current automated 3D cell detection methods are not a suitable replacement for manual stereologic cell counting

    Directory of Open Access Journals (Sweden)

    Christoph Schmitz

    2014-05-01

    Full Text Available Stereologic cell counting has had a major impact on the field of neuroscience. A major bottleneck in stereologic cell counting is that the user must manually decide whether or not each cell is counted according to three-dimensional (3D) stereologic counting rules by visual inspection within hundreds of microscopic fields-of-view per investigated brain or brain region. Reliance on visual inspection forces stereologic cell counting to be very labor-intensive and time-consuming, and is the main reason why biased, non-stereologic two-dimensional (2D) cell counting approaches have remained in widespread use. We present an evaluation of the performance of modern automated cell detection and segmentation algorithms as a potential alternative to the manual approach in stereologic cell counting. The image data used in this study were 3D microscopic images of thick brain tissue sections prepared with a variety of commonly used nuclear and cytoplasmic stains. The evaluation compared the numbers and locations of cells identified unambiguously and counted exhaustively by an expert observer with those found by three automated 3D cell detection algorithms: nuclei segmentation from the FARSIGHT toolkit, nuclei segmentation by 3D multiple level set methods, and the 3D object counter plug-in for ImageJ. Of these methods, FARSIGHT performed best, with true-positive detection rates between 38–99% and false-positive rates from 3.6–82%. The results demonstrate that the current automated methods suffer from lower detection rates and higher false-positive rates than are acceptable for obtaining valid estimates of cell numbers. Thus, at present, stereologic cell counting with manual decision for object inclusion according to unbiased stereologic counting rules remains the only adequate method for unbiased cell quantification in histologic tissue sections.

  9. Stereological quantification of mast cells in human synovium

    DEFF Research Database (Denmark)

    Damsgaard, T E; Sørensen, Flemming Brandt; Herlin, T

    1999-01-01

    Mast cells participate in both the acute allergic reaction as well as in chronic inflammatory diseases. Earlier studies have revealed divergent results regarding the quantification of mast cells in the human synovium. The aim of the present study was therefore to quantify these cells in the human...... synovium, using stereological techniques. Different methods of staining and quantification have previously been used for mast cell quantification in human synovium. Stereological techniques provide precise and unbiased information on the number of cell profiles in two-dimensional tissue sections of......, in this case, human synovium. In 10 patients suffering from osteoarthritis a median of 3.6 mast cells/mm2 synovial membrane was found. The total number of cells (synoviocytes, fibroblasts, lymphocytes, leukocytes) present was 395.9 cells/mm2 (median). The mast cells constituted 0.8% of all the cell profiles...

  10. Stereological analysis of nuclear volume in recurrent meningiomas

    DEFF Research Database (Denmark)

    Madsen, C; Schrøder, H D

    1994-01-01

    A stereological estimation of nuclear volume in recurrent and non-recurrent meningiomas was made. The aim was to investigate whether this method could discriminate between these two groups. We found that the mean nuclear volumes in recurrent meningiomas were all larger at debut than in any...... of the control tumors. The mean nuclear volume of the individual recurrent tumors appeared to change with time, showing a tendency to diminish. A relationship between large nuclear volume at presentation and number of or time interval between recurrences was not found. We conclude that measurement of mean...... nuclear volume in meningiomas might help identify a group at risk of recurrence....

  11. Stereology of human myometrium in pregnancy: influence of maternal body mass index and age.

    LENUS (Irish Health Repository)

    Sweeney, Eva M

    2013-04-01

    Knowledge of the stereology of human myometrium in pregnancy is limited. Uterine contractile performance may be altered in association with maternal obesity and advanced maternal age. The aim of this study was to investigate the stereology of human myometrium in pregnancy, and to evaluate a potential influence of maternal body mass index (BMI) and age.

  12. Design-based stereological estimation of the total number of cardiac myocytes in histological sections

    DEFF Research Database (Denmark)

    Brüel, Annemarie; Nyengaard, Jens Randel

    2005-01-01

    BACKGROUND: Counting the total number of cardiac myocytes has not previously been possible in ordinary histological sections using light microscopy (LM) due to difficulties in defining the myocyte borders properly. AIM: To describe a method by which the total number of cardiac myocytes is estimated in LM sections using design-based stereology. MATERIALS AND METHODS: From formalin-fixed left rat ventricles (LV) isotropic uniformly random sections were cut. The total number of myocyte nuclei per LV was estimated using the optical disector. Two-microm-thick serial paraffin sections were stained with antibodies against cadherin and type IV collagen to visualise the intercalated discs and the myocyte membranes, respectively. Using the physical disector in "local vertical windows" of the serial sections, the average number of nuclei per myocyte was estimated. RESULTS: The total number of myocyte nuclei...
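    In formula form, the design described above amounts to the following (our hedged notation, not a formula quoted from the paper):

        N(\text{myocytes}) \;=\; \frac{N(\text{myocyte nuclei})}{\bar{n}(\text{nuclei per myocyte})}

    where the total number of myocyte nuclei is obtained with the optical disector and the mean number of nuclei per myocyte with the physical disector in the serial sections.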

  13. Stereological estimation of nuclear mean volume in invasive meningiomas

    DEFF Research Database (Denmark)

    Madsen, C; Schrøder, H D

    1996-01-01

    A stereological estimation of nuclear mean volume in bone and brain invasive meningiomas was made. For comparison the nuclear mean volume of benign meningiomas was estimated. The aim was to investigate whether this method could discriminate between these groups. We found that the nuclear mean...... volume in the bone and brain invasive meningiomas was larger than in the benign tumors. The difference was significant and moreover it was seen that there was no overlap between the two groups. In the bone invasive meningiomas the nuclear mean volume appeared to be larger inside than outside the bone....... No significant difference in nuclear mean volume was found between brain and bone invasive meningiomas. The results demonstrate that invasive meningiomas differ from benign meningiomas by an objective stereological estimation of nuclear mean volume (p

  14. Quantitative analysis of the renal aging in rats. Stereological study.

    Science.gov (United States)

    Melchioretto, Eduardo Felippe; Zeni, Marcelo; Veronez, Djanira Aparecida da Luz; Martins, Eduardo Lopes; Fraga, Rogério de

    2016-05-01

    To evaluate renal function and renal histological alterations through stereology and morphometrics in rats submitted to the natural process of aging. Seventy-two Wistar rats were divided into six groups. Each group was sacrificed at a different age: 3, 6, 9, 12, 18 and 24 months. Right nephrectomy was performed, followed by stereological and morphometric analysis of the renal tissue (renal volume and weight, volume density (Vv[glom]) and numerical density (Nv[glom]) of the renal glomeruli, and average glomerular volume (Vol[glom])); renal function was evaluated by measuring serum creatinine and urea. There was a significant decrease in renal function in the oldest rats. Renal volume increased gradually during development, with the largest values recorded in the animals at 12 months of age, followed by a significant progressive decrease in older animals. Vv[glom] showed a statistically significant gradual reduction between the groups, and Nv[glom] also decreased significantly. Renal function proved to be inferior in senile rats compared with young rats. The morphometric and stereological analysis showed renal atrophy and a gradual reduction of the volume density and numerical density of the renal glomeruli associated with the aging process.
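    For orientation, the reported quantities are linked by the standard stereological identity (our notation, stated as background rather than quoted from the paper):

        \text{Vol[glom]} \;=\; \frac{V_V[\text{glom}]}{N_V[\text{glom}]}

    i.e. the mean glomerular volume equals the glomerular volume density divided by the glomerular numerical density.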

  15. Light scattering in porous materials: Geometrical optics and stereological approach

    International Nuclear Information System (INIS)

    Malinka, Aleksey V.

    2014-01-01

    Porous material has been considered from the point of view of stereology (geometrical statistics), as a two-phase random mixture of solid material and air. The materials considered have a refractive index whose real part differs notably from unity and whose imaginary part is much smaller than unity. Light scattering in such materials has been described using geometrical optics. These two elements – the laws of geometrical optics and the stereological approach – allow one to obtain the inherent optical properties of such a porous material, which are basic to radiative transfer theory: the photon survival probability, the scattering phase function, and the polarization properties (Mueller matrix). In this work these characteristics are expressed through the refractive index of the material and the random chord length distribution. The results are compared with the traditional approach of modeling the porous material as a pack of particles of different shapes. - Highlights: • Porous material has been considered from the point of view of stereology. • Properties of a two-phase random mixture of solid material and air are considered. • Light scattering in such materials has been described using geometrical optics. • The inherent optical properties of such a porous material have been obtained

  16. Application of stereological methods to estimate post-mortem brain surface area using 3T MRI

    DEFF Research Database (Denmark)

    Furlong, Carolyn; García-Fiñana, Marta; Puddephat, Michael

    2013-01-01

    The Cavalieri and Vertical Sections methods of design based stereology were applied in combination with 3 tesla (i.e. 3T) Magnetic Resonance Imaging (MRI) to estimate cortical and subcortical volume, area of the pial surface, area of the grey-white matter boundary, and thickness of the cerebral...

  17. Improving efficiency in stereology

    DEFF Research Database (Denmark)

    Keller, Kresten Krarup; Andersen, Ina Trolle; Andersen, Johnnie Bremholm

    2013-01-01

    of the study was to investigate the time efficiency of the proportionator and the autodisector on virtual slides compared with traditional methods in a practical application, namely the estimation of osteoclast numbers in paws from mice with experimental arthritis and control mice. Tissue slides were scanned......, a proportionator sampling and a systematic, uniform random sampling were simulated. We found that the proportionator was 50% to 90% more time efficient than systematic, uniform random sampling. The time efficiency of the autodisector on virtual slides was 60% to 100% better than the disector on tissue slides. We...... conclude that both the proportionator and the autodisector on virtual slides may improve efficiency of cell counting in stereology....

  18. An unbiased stereological method for efficiently quantifying the innervation of the heart and other organs based on total length estimations

    DEFF Research Database (Denmark)

    Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela

    2010-01-01

    Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given...... reference volume, illustrated on the left ventricle of the mouse heart. The method is based on the following steps: 1) estimation of the reference volume; 2) randomization of location and orientation using appropriate sampling techniques; 3) counting of nerve fiber profiles hit by a defined test area within...
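    The classical relation behind such length estimation, stated here as background in our own notation, connects profile counts on isotropic uniform random planes to length density:

        L_V = 2\,Q_A, \qquad L = L_V \cdot V(\text{ref})

    where Q_A is the number of axon profiles hit per unit test area and V(ref) is the reference volume estimated in step 1.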

  19. Estimation of fetal volume by magnetic resonance imaging and stereology.

    Science.gov (United States)

    Roberts, N; Garden, A S; Cruz-Orive, L M; Whitehouse, G H; Edwards, R H

    1994-11-01

    The current methods to monitor fetal growth in utero are based on ultrasound image measurements which, lacking a proper sampling methodology, may be biased to unknown degrees. The Cavalieri method of stereology guarantees the accurate estimation of the volume of an arbitrary object from a few systematic sections. Non-invasive scanning methods, and magnetic resonance imaging (MRI) in particular, are valuable tools to provide the necessary sections, and therefore offer interesting possibilities for unbiased quantification. This paper describes how to estimate fetal volume in utero with a coefficient of error of less than 5% in less than 5 min, from three or four properly sampled MRI scans. MRI was chosen because it does not use ionizing radiations on the one hand, and it offers a good image quality on the other. The impact of potential sources of bias such as fetal motion, chemical shift and partial voluming artefacts is discussed. The methods are illustrated on four subjects monitored between weeks 28 and 40 of gestation.
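    In practice the Cavalieri estimate is obtained by point counting on the sampled MRI sections; a hedged statement of the standard estimator (our notation):

        \hat{V} = T \cdot \frac{a}{p} \cdot \sum_{i=1}^{n} P_i

    where T is the distance between sections, a/p the area associated with each test point, and P_i the number of points hitting the fetus on section i.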

  20. Stereological Cell Morphometry In Right Atrium Myocardium Of Primates

    Science.gov (United States)

    Mandarim-De-Lacerda, Carlos A.; Hureau, Jacques

    1986-07-01

    The mechanism by which the cardiac impulse is propagated in normal hearts from its origin in the sinus node to the atrio-ventricular node has not been agreed on fully. We studied the "internodal posterior tract" through the crista terminalis by light microscopy and stereological morphometry. The hearts of 12 Papio cynocephalus were perfused, after sacrifice, with phosphate-buffered formol saline. The regions of the crista terminalis (CT), interatrial septum (IAS), atrioventricular bundle (AVB) and interventricular septum (IVS) were cut out, embedded in paraplast and sectioned (10 µm). The multipurpose test system M 42 was superimposed over the photomicrographs (1,890-point test, ESR = 2%) for the stereological computations. The quantitative results show that the cells from the CT were more closely related to IAS cells than to the other cells (IVS and AVB cells). These results do not constitute morphological evidence to establish the specificity of the "internodal posterior tract". The cellular arrangement and anatomical variation in the CT myocardium are very important.

  1. THE APPLICATION OF STEREOLOGY METHOD FOR ESTIMATING THE NUMBER OF 3D BaTiO3 – CERAMIC GRAINS CONTACT SURFACES

    Directory of Open Access Journals (Sweden)

    Vojislav V Mitić

    2011-05-01

    Full Text Available Methods of stereological study are of great importance for structural research of electronic ceramic materials, including BaTiO3-ceramic materials. The broad application of barium-titanate-based ceramics in advanced electronics demands constant research of their structure, which, through the structure-properties correlation (fundamental to the materials prognosis triad technology-structure-properties), leads to further prognosis and design of the properties of these ceramics. Microstructure properties of BaTiO3-ceramic material, expressed in grain boundary contacts, are of basic importance for the electrical properties of this material, particularly the capacitance. In this paper, a significant step towards establishing control over the capacitive properties of BaTiO3-ceramics is taken by estimating the number of grain contact surfaces. In defining an efficient stereology method for estimating the number of BaTiO3-ceramic grain contact surfaces, we started from a mathematical model of the mutual grain distribution in a prescribed volume of a BaTiO3-ceramic sample. Since the real microstructure morphology of BaTiO3-ceramics is somewhat disordered, the spherically shaped grains are approximated, using computer-modelling methods, by polyhedra with a great number of small convex polygons. By dividing the volume of the BaTiO3-ceramic sample with a definite number of parallel planes at a given spacing, a certain number of grain contact surfaces are identified in the intersection planes. From the quantitative estimation of 2D stereological parameters the modelled 3D internal microstructure is obtained. Experiments were made using scanning electron microscopy (SEM) with ceramic samples prepared under pressing pressures up to 150 MPa and sintering temperatures up to 1370°C, and the obtained microphotographs were used as a basis for confirming the validity of the presented stereology method. This paper, by applying

  2. Stereological estimation of nuclear volume in benign and atypical meningiomas

    DEFF Research Database (Denmark)

    Madsen, C; Schrøder, H D

    1993-01-01

    A stereological estimation of nuclear volume in benign and atypical meningiomas was made. The aim was to investigate whether this method could discriminate between these two meningeal neoplasms. The difference was significant and it was moreover seen that there was no overlap between the two groups...

  3. Histological versus stereological methods applied at spermatogonia during normal human development

    DEFF Research Database (Denmark)

    Cortes, D

    1990-01-01

    The number of spermatogonia per tubular transverse section (S/T), and the percentage of seminiferous tubulus containing spermatogonia (the fertility index (FI] were measured in 40 pairs of normal autopsy testes aged 28 weeks of gestation-40 years. S/T and FI showed similar changes during the whole...... period, and were minimal between 1 and 4 years. The number of spermatogonia per testis (S/testis) and the number of spermatogonia per cm3 testis tissue (S/cm3) were estimated by stereological methods in the same testes. S/T and FI respectively were significantly correlated both to S/testis and S/cm3. So...

  4. Autogenous bone graft and ePTFE membrane in the treatment of peri-implantitis. II. Stereologic and histologic observations in cynomolgus monkeys

    DEFF Research Database (Denmark)

    Schou, Søren; Holmstrup, Palle; Skovgaard, Lene Theil

    2003-01-01

    autogenous bone graft; guided bone regeneration; histology; membrane; non-human primates; oral implants; osseointegration; pathology; peri-implantitis; stereology; treatment

  5. Stereological estimates of nuclear volume in the prognostic evaluation of primary flat carcinoma in situ of the urinary bladder

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Jacobsen, F

    1991-01-01

    Primary, flat carcinoma in situ of the urinary bladder is rare and its behaviour is unpredictable. The aim of this retrospective study was to obtain base-line data and investigate the prognostic value of unbiased, stereological estimates of the volume-weighted mean nuclear volume, nuclear vv, in ...

  6. Design-based stereological analysis of the lung parenchymal architecture and alveolar type II cells in surfactant protein A and D double deficient mice

    DEFF Research Database (Denmark)

    Jung, A; Allen, L; Nyengaard, Jens Randel

    2005-01-01

    Alveolar epithelial type II cells synthesize and secrete surfactant. The surfactant-associated proteins A and D (SP-A and SP-D), members of the collectin protein family, participate in pulmonary immune defense, modulation of inflammation, and surfactant metabolism. Both proteins are known to have......, but the mean volume of a single lamellar body remains constant. These results demonstrate that chronic deficiency of SP-A and SP-D in mice leads to parenchymal remodeling, type II cell hyperplasia and hypertrophy, and disturbed intracellular surfactant metabolism. The design-based stereological approach...

  7. Stereological estimates of nuclear volume in squamous cell carcinoma of the uterine cervix and its precursors

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Bichel, P; Jakobsen, A

    1991-01-01

    Using modern stereology, this study was carried out to obtain base-line data concerning three-dimensional, mean nuclear size in precancerous and invasive lesions of the uterine cervix. Unbiased estimates of the volume-weighted mean nuclear volume (nuclear vv) were obtained by point-sampling of nuclear intercepts in 51 pre-treatment biopsies from patients with invasive squamous cell carcinomas (SCC). Vertical sections from 27 specimens with cervical intraepithelial neoplasia (CIN) grades I through III were also investigated, along with 10 CIN III associated with microinvasion (CIN III + M). On average, nuclear vv was larger in SCC than in CIN III and CIN III + M together (2P = 8.9 x 10(-5)). A conspicuous overlap of nuclear vv existed between all investigated lesional groups. The reproducibility of estimates of nuclear vv in biopsies with SCC was acceptable (r = 0.85 and r = 0.84 in intra...

  8. The proportionator: unbiased stereological estimation using biased automatic image analysis and non-uniform probability proportional to size sampling

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    ......, the desired number of fields are sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections examined, which in turn leads to any of the known stereological estimates, including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which is in practice always a biased relation from a stereological (integral geometric) point of view. The efficiency of the proportionator depends, however, directly on this relation to be positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noises in possibly realistic ranges. In all cases examined, the proportionator
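    A minimal sketch of the sampling-and-estimation idea, using a Hansen-Hurwitz-type weighting as a stand-in for the actual proportionator implementation (weights and counts are invented):

        import random

        weights = [3.0, 0.5, 7.2, 1.1, 4.4, 0.2, 2.6, 5.9]  # automatic image-analysis weights per field
        counts  = [6, 1, 15, 2, 9, 0, 5, 12]                # expert counts (known only for sampled fields)
        total_w = sum(weights)
        n = 3                                               # fields actually presented to the observer

        # Sample fields with probability proportional to weight (with replacement),
        # then weight each count by the inverse of its selection probability.
        sampled = random.choices(range(len(weights)), weights=weights, k=n)
        estimate = sum(counts[i] * total_w / weights[i] for i in sampled) / n
        print(estimate)  # unbiased for the total count as long as every weight is positive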

  9. Cerebral atrophy in AIDS: a stereological study

    DEFF Research Database (Denmark)

    Oster, S; Christoffersen, P; Gundersen, H J

    1993-01-01

    Stereological estimates of mean volumes, surface areas, and cortical thicknesses were obtained on formalin-fixed brains from 19 men with AIDS and 19 controls. Volumes of neocortex, white matter, central brain nuclei, ventricles and archicortex were estimated using point counting and Cavalieri......'s unbiased principle for volume estimation. In AIDS, the mean volume of neocortex was reduced by 11%, and that of the central brain nuclei by 18%. Mean ventricular volume was increased by 55%. Mean neocortical thickness was reduced by 12%. The mean volume of white matter was reduced by 13%. The findings in 6...

  10. Stereology as a tool to assess reproduction strategy and fecundity of teleost fishes

    DEFF Research Database (Denmark)

    Bucholtz, Rikke Hagstrøm

    methods to assess fecundity and reproductive strategies. The strength of the stereological method being that, in combination with conventional histological analysis, quantification of all oocyte categories is possible, as well as registration of qualitative characteristics relating to spawning history...... of oocyte dynamics in fish and were successfully implemented in herring ovaries for quantification of both oocyte numbers and sizes as well as total volume fraction of atretic oocytes, introducing a negligible error to the total variance of estimates. The histological nature of the stereological methods...... facilitated a ready validation of maturity data, distinguishing first time spawners from repeat spawners, as well as a ready recognition of ongoing oocyte recruitment in early maturity stages, early stage atresia, POFs and residual eggs. Analyzing a sample of females all collected during a short time frame...

  11. Unbiased stereological methods used for the quantitative evaluation of guided bone regeneration

    DEFF Research Database (Denmark)

    Aaboe, Else Merete; Pinholt, E M; Schou, S

    1998-01-01

    The present study describes the use of unbiased stereological methods for the quantitative evaluation of the amount of regenerated bone. Using the principle of guided bone regeneration the amount of regenerated bone after placement of degradable or non-degradable membranes covering defects...

  12. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
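    As a loose illustration only: the core of such a pipeline is an SVM regressor whose hyperparameters are tuned by a global optimizer. Below, a plain grid search stands in for the NAPSO step described in the paper, and the error signal is synthetic:

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 400).reshape(-1, 1)                      # time stamps
        err = 0.3 * np.sin(2.0 * t).ravel() + 0.05 * rng.normal(size=400)   # synthetic dynamic error

        X_tr, X_te, y_tr, y_te = train_test_split(t, err, test_size=0.25, random_state=0)
        search = GridSearchCV(SVR(kernel="rbf"),
                              {"C": [1, 10, 100], "gamma": [0.1, 1.0, 10.0], "epsilon": [0.01, 0.05]},
                              cv=5)
        search.fit(X_tr, y_tr)
        pred = search.predict(X_te)
        print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)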

  13. Estimation of absolute microglial cell numbers in mouse fascia dentata using unbiased and efficient stereological cell counting principles

    DEFF Research Database (Denmark)

    Wirenfeldt, Martin; Dalmau, Ishar; Finsen, Bente

    2003-01-01

    Stereology offers a set of unbiased principles to obtain precise estimates of total cell numbers in a defined region. In terms of microglia, which in the traumatized and diseased CNS is an extremely dynamic cell population, the strength of stereology is that the resultant estimate is unaffected...... of microglia, although with this thickness, the intensity of the staining is too high to distinguish single cells. Lectin histochemistry does not visualize microglia throughout the section and, accordingly, is not suited for the optical fractionator. The mean total number of Mac-1+ microglial cells...... in the unilateral dentate gyrus of the normal young adult male C57BL/6 mouse was estimated to be 12,300 (coefficient of variation (CV)=0.13) with a mean coefficient of error (CE) of 0.06. The perspective of estimating microglial cell numbers using stereology is to establish a solid basis for studying the dynamics...
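    The optical fractionator estimate referred to above has the standard form (our notation, given as background):

        \hat{N} = \sum Q^{-} \cdot \frac{1}{ssf} \cdot \frac{1}{asf} \cdot \frac{1}{hsf}

    where Q^- is the number of cells counted in the optical disectors and ssf, asf and hsf are the section, area and height sampling fractions.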

  14. Stereological observations of platelet-reinforced mullite- and zirconia-matrix composites

    International Nuclear Information System (INIS)

    Cherian, I.K.; Kriven, W.M.; Lehigh, M.D.; Nettleship, I.

    1996-01-01

    Recently, the effect of solid inclusions on the sintering of ceramic powders has been explained in terms of a back-stress that opposes densification. Several analyses have been proposed to describe this problem. However, little quantitative information exists concerning the effect of reinforcement on microstructural evolution. This study compares the microstructural development of zirconia and mullite matrices in the presence of alumina platelets. The effect of platelet loading on density is similar for both composites. Quantitative stereological examinations reveal that the average grain size and pore size are finer for the zirconia-matrix composite. The platelet loading does not have any noticeable effect on the average grain size of the matrix in either composite. However, the average pore size increases as the volume fraction of platelets increases for both materials. Contiguity measurements have detected some aggregation of platelets in the zirconia-matrix composite

  15. A Stereological Method for the Quantitative Evaluation of Cartilage Repair Tissue

    Science.gov (United States)

    Nyengaard, Jens Randel; Lind, Martin; Spector, Myron

    2015-01-01

    Objective To implement stereological principles to develop an easy applicable algorithm for unbiased and quantitative evaluation of cartilage repair. Design Design-unbiased sampling was performed by systematically sectioning the defect perpendicular to the joint surface in parallel planes providing 7 to 10 hematoxylin–eosin stained histological sections. Counting windows were systematically selected and converted into image files (40-50 per defect). The quantification was performed by two-step point counting: (1) calculation of defect volume and (2) quantitative analysis of tissue composition. Step 2 was performed by assigning each point to one of the following categories based on validated and easy distinguishable morphological characteristics: (1) hyaline cartilage (rounded cells in lacunae in hyaline matrix), (2) fibrocartilage (rounded cells in lacunae in fibrous matrix), (3) fibrous tissue (elongated cells in fibrous tissue), (4) bone, (5) scaffold material, and (6) others. The ability to discriminate between the tissue types was determined using conventional or polarized light microscopy, and the interobserver variability was evaluated. Results We describe the application of the stereological method. In the example, we assessed the defect repair tissue volume to be 4.4 mm3 (CE = 0.01). The tissue fractions were subsequently evaluated. Polarized light illumination of the slides improved discrimination between hyaline cartilage and fibrocartilage and increased the interobserver agreement compared with conventional transmitted light. Conclusion We have applied a design-unbiased method for quantitative evaluation of cartilage repair, and we propose this algorithm as a natural supplement to existing descriptive semiquantitative scoring systems. We also propose that polarized light is effective for discrimination between hyaline cartilage and fibrocartilage. PMID:26069715
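    A hedged sketch of the two-step point counting described above; category labels follow the list in the abstract, but all counts, the area per point and the section spacing are invented:

        # Step 1: defect volume by Cavalieri point counting over the sections.
        # Step 2: tissue composition as point fractions within the defect.
        def cavalieri_volume(points_per_section, area_per_point_mm2, spacing_mm):
            return spacing_mm * area_per_point_mm2 * sum(points_per_section)

        def tissue_fractions(points_per_category):
            total = sum(points_per_category.values())
            return {name: hits / total for name, hits in points_per_category.items()}

        sections = [14, 22, 31, 28, 17, 9]            # points hitting the defect per section
        print(cavalieri_volume(sections, 0.01, 0.3))  # defect volume in mm^3

        counts = {"hyaline cartilage": 120, "fibrocartilage": 240, "fibrous tissue": 90,
                  "bone": 30, "scaffold": 10, "other": 10}
        print(tissue_fractions(counts))               # volume fraction per tissue type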

  16. PLASTICITY OF SKELETAL MUSCLE STUDIED BY STEREOLOGY

    Directory of Open Access Journals (Sweden)

    Ida Eržen

    2011-05-01

    Full Text Available The present contribution provides an overview of stereological methods applied in skeletal muscle research at the Institute of Anatomy of the Medical Faculty in Ljubljana. Interested in skeletal muscle plasticity, we studied three different topics: (i) expression of myosin heavy chain isoforms in slow and fast muscles under experimental conditions, (ii) frequency of satellite cells in young and old human and rat muscles and (iii) capillary supply of rat fast and slow muscles. We analysed the expression of myosin heavy chain isoforms within the slow rat soleus and fast extensor digitorum longus muscles after (i) homotopic and heterotopic transplantation of both muscles, (ii) low-frequency electrical stimulation of the fast muscle and (iii) transposition of the fast nerve to the slow muscle. The models applied were able to turn the fast muscle into a completely slow muscle, but not vice versa. One of the indicators of the regenerative potential of a skeletal muscle is its satellite cell pool. The estimated parameters, number of satellite cells per unit fibre length, corrected to the reference sarcomere length (Nsc/Lfib), and number of satellite cells per number of nuclei (myonuclei and satellite cell nuclei) (Nsc/Nnucl), indicated that the frequency of M-cadherin-stained satellite cells declines in healthy old human and rat muscles compared to young muscles. To assess differences in capillary densities among slow and fast muscles and slow and fast muscle fibres, we introduced the Slicer and Fakir methods and tested them on predominantly slow and fast rat muscles. Discussing three different topics that require different approaches, the present paper reflects three decades of the development of stereological methods: 2D analysis by simple point counting in the 70's, the disector in the 80's and virtual spatial probes in the 90's. In all methods the interactive computer-assisted approach was utilised.

  17. A note on stereological estimation of the volume-weighted second moment of particle volume

    DEFF Research Database (Denmark)

    Jensen, E B; Sørensen, Flemming Brandt

    1991-01-01

    It is shown that for a variety of biological particle shapes, the volume-weighted second moment of particle volume can be estimated stereologically using only the areas of particle transects, which can be estimated manually by point-counting....

  18. Stereological quantification of immune-competent cells in baseline biopsy specimens from achilles tendons

    DEFF Research Database (Denmark)

    Kragsnaes, Maja Skov; Fredberg, Ulrich; Stribolt, Katrine

    2014-01-01

    BACKGROUND: Limited data exist on the presence and function of immune-competent cells in chronic tendinopathic tendons and their potential role in inflammation and tissue healing as well as in predicting long-term outcome. PURPOSE: To quantify subtypes of immune-competent cells in biopsy specimens...... immunohistochemically by quantifying the presence of macrophages (CD68-PGM1(+), CD68-KP1(+)), hemosiderophages (Perls blue), T lymphocytes (CD2(+), CD3(+), CD4(+), CD7(+), CD8(+)), B lymphocytes (CD20(+)), natural killer cells (CD56(+)), mast cells (NaSDCl(+)), Schwann cells (S100(+)), and endothelial cells (CD34......(+)) using a stereological technique. A follow-up examination was conducted more than 4 years (range, 4-9 years) after the biopsy procedure to evaluate the long-term presence of Achilles tendon symptoms. RESULTS: Macrophages, T lymphocytes, mast cells, and natural killer cells were observed in the majority...

  19. Stereological analysis of neuron, glial and endothelial cell numbers in the human amygdaloid complex.

    Directory of Open Access Journals (Sweden)

    María García-Amado

    Full Text Available Cell number alterations in the amygdaloid complex (AC) might coincide with neurological and psychiatric pathologies with anxiety imbalances as well as with changes in brain functionality during aging. This stereological study focused on estimating, in samples from 7 control individuals aged 20 to 75 years old, the number and density of neurons, glia and endothelial cells in the entire AC and in its 5 nuclear groups (including the basolateral (BL), corticomedial and central groups), 5 nuclei and 13 nuclear subdivisions. The volume and total cell number in these territories were determined on Nissl-stained sections with the Cavalieri principle and the optical fractionator. The AC mean volume was 956 mm(3) and mean cell numbers (x10(6)) were: 15.3 neurons, 60 glial cells and 16.8 endothelial cells. The numbers of endothelial cells and neurons were similar in each AC region and were one fourth the number of glial cells. Analysis of the influence of the individuals' age at death on volume, cell number and density in each of these 24 AC regions suggested that aging does not affect regional size or the amount of glial cells, but that neuron and endothelial cell numbers respectively tended to decrease and increase in territories such as AC or BL. These accurate stereological measures of volume and total cell numbers and densities in the AC of control individuals could serve as appropriate reference values to evaluate subtle alterations in this structure in pathological conditions.

  20. Stereological analysis of neuron, glial and endothelial cell numbers in the human amygdaloid complex.

    Science.gov (United States)

    García-Amado, María; Prensa, Lucía

    2012-01-01

    Cell number alterations in the amygdaloid complex (AC) might coincide with neurological and psychiatric pathologies with anxiety imbalances as well as with changes in brain functionality during aging. This stereological study focused on estimating, in samples from 7 control individuals aged 20 to 75 years old, the number and density of neurons, glia and endothelial cells in the entire AC and in its 5 nuclear groups (including the basolateral (BL), corticomedial and central groups), 5 nuclei and 13 nuclear subdivisions. The volume and total cell number in these territories were determined on Nissl-stained sections with the Cavalieri principle and the optical fractionator. The AC mean volume was 956 mm(3) and mean cell numbers (x10(6)) were: 15.3 neurons, 60 glial cells and 16.8 endothelial cells. The numbers of endothelial cells and neurons were similar in each AC region and were one fourth the number of glial cells. Analysis of the influence of the individuals' age at death on volume, cell number and density in each of these 24 AC regions suggested that aging does not affect regional size or the amount of glial cells, but that neuron and endothelial cell numbers respectively tended to decrease and increase in territories such as AC or BL. These accurate stereological measures of volume and total cell numbers and densities in the AC of control individuals could serve as appropriate reference values to evaluate subtle alterations in this structure in pathological conditions.

  1. The changes of stage distribution of seminiferous epithelium cycle and its correlations with Leydig cell stereological parameters in aging men.

    Science.gov (United States)

    Huang, Rui; Zhu, Wei-Jie; Li, Jing; Gu, Yi-Qun

    2014-12-01

    To evaluate the changes of stage distribution of seminiferous epithelium cycle and its correlations with Leydig cell stereological parameters in aging men. Point counting method was used to analyze the stereological parameters of Leydig cells. The stage number of seminiferous epithelium cycle was calculated in the same testicular tissue samples which were used for Leydig cell stereological analysis. The aging group had shown more severe pathological changes as well as higher pathologic scores than the young group. Compared with the control group, the volume density (VV) and surface density (NA) of Leydig cells in the aging group were increased significantly. The stage number of seminiferous epithelium cycle in the aging group was decreased coincidently compared to the young group. Leydig cell Vv in the young group has a positive relationship with stages I, II, III, V and VI of seminiferous epithelium cycle, and Leydig cell NA and numerical density (NV) were positively related to stage IV. However, only the correlation between NV and stage II was found in the aging group. The stage number of seminiferous epithelium cycle was decreased in aging testes. Changes in the stage distribution in aging testes were related to the Leydig cell stereological parameters which presented as a sign of morphological changes. Copyright © 2014 Elsevier GmbH. All rights reserved.

  2. DNA-index and stereological estimation of nuclear volume in primary and metastatic malignant melanomas

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Kristensen, I B; Grymer, F

    1990-01-01

    The aim of this study was to investigate the relationship between physical nuclear volume and ploidy level in malignant melanomas, and to analyse the heterogeneity of these two parameters among primary and corresponding secondary tumours. Unbiased stereological estimates of nuclear volume can...

  3. A characteristic ventricular shape in myelomeningocele-associated hydrocephalus? A CT stereology study

    International Nuclear Information System (INIS)

    Roost, D. van; Solymosi, L.; Funke, K.

    1995-01-01

    We measured the volume of the supratentorial ventricles in 39 consecutive children with myelomeningocele (MMC) and associated hydrocephalus, using a stereological method based on the Cavalieri theorem of systematic sampling. We distinguished the following groups: newborns before and after cerebrospinal fluid shunting (14), a somewhat larger group of newborns with an untreated MMC-associated hydrocephalus (25) and a group of shunted children at a mean age of 1.5 years (28). We paid special attention to the shape of the lateral ventricles, looking separately at the anterior and posterior halves. The measurements were compared with a healthy control group (10) and with children with hydrocephalus unrelated to MMC (15). The average volume ratio of the posterior to the anterior half of the lateral ventricles was 1.05 ± 0.39 in non-hydrocephalic children, 1.11 ± 0.55 in untreated hydrocephalic children without MMC, and 2.15 ± 0.65 in MMC-associated hydrocephalus prior to shunting. These ratios did not change significantly after shunting. This confirms our impression that MMC-associated hydrocephalus shows a characteristic shape, with a disproportionate enlargement of the posterior part of the lateral ventricles, in clear contrast to the normal-width frontal horns. This shape is reminiscent of the fetal ventricular shape. It reveals disturbance of brain development in children with MMC, which goes beyond the classic description of the Chiari malformation. (orig.)

  4. Novel efficient methods for measuring mesophyll anatomical characteristics from fresh thick sections using stereology and confocal microscopy: application on acid rain-treated Norway spruce needles

    Czech Academy of Sciences Publication Activity Database

    Albrechtová, Jana; Janáček, Jiří; Lhotáková, Zuzana; Radochová, Barbora; Kubínová, Lucie

    2007-01-01

    Vol. 58, No. 6 (2007), p. 1451-1461 ISSN 0022-0957 R&D Projects: GA AV ČR IAA5011810; GA AV ČR(CZ) IAA600110507; GA MŠk(CZ) LC06063 Institutional research plan: CEZ:AV0Z50110509; CEZ:AV0Z60050516 Keywords: mesophyll * stereology * confocal microscopy Subject RIV: EA - Cell Biology Impact factor: 3.917, year: 2007

  5. Evaluation of FITC-induced atopic dermatitis-like disease in NC/Nga mice and BALB/c mice using computer-assisted stereological toolbox, a computer-aided morphometric system

    DEFF Research Database (Denmark)

    Hvid, Malene; Jensen, Helene Kofoed; Deleuran, Bent

    2009-01-01

    Stereological Toolbox as a stereological method, the mice were sensitized to FITC and the histological efficiency of disease induction with regard to inflammation and CD4+ and CD8+ lymphocytes, in addition to mast cells, was evaluated. The method was validated by comparison to a conventional semiquantitative...

  6. Unbiased Stereologic Estimation of the Spatial Distribution of Paget’s Disease in the Human Temporal Bone

    DEFF Research Database (Denmark)

    Bloch, Sune Land; Sørensen, Mads Sølvsten

    2014-01-01

    remodeling around the inner ear space and to compare it with that of otosclerosis in a contemporary context of temporal bone dynamics. MATERIALS AND METHODS: From the temporal bone collection of Massachusetts Eye and Ear Infirmary, 15 of 29 temporal bones with Paget's disease were selected to obtain...... an independent sample. All volume distributions were obtained along the normal axis of capsular bone remodeling activity by the use of vector-based stereology. RESULTS: Pagetic bone remodeling was distributed centrifugally around the inner ear space at the individual and the general level. This pattern...

  7. Quantifying Golgi structure using EM: combining volume-SEM and stereology for higher throughput.

    Science.gov (United States)

    Ferguson, Sophie; Steyer, Anna M; Mayhew, Terry M; Schwab, Yannick; Lucocq, John Milton

    2017-06-01

    Investigating organelles such as the Golgi complex depends increasingly on high-throughput quantitative morphological analyses from multiple experimental or genetic conditions. Light microscopy (LM) has been an effective tool for screening but fails to reveal fine details of Golgi structures such as vesicles, tubules and cisternae. Electron microscopy (EM) has sufficient resolution but traditional transmission EM (TEM) methods are slow and inefficient. Newer volume scanning EM (volume-SEM) methods now have the potential to speed up 3D analysis by automated sectioning and imaging. However, they produce large arrays of sections and/or images, which require labour-intensive 3D reconstruction for quantitation on limited cell numbers. Here, we show that the information storage, digital waste and workload involved in using volume-SEM can be reduced substantially using sampling-based stereology. Using the Golgi as an example, we describe how Golgi populations can be sensed quantitatively using single random slices and how accurate quantitative structural data on Golgi organelles of individual cells can be obtained using only 5-10 sections/images taken from a volume-SEM series (thereby sensing population parameters and cell-cell variability). The approach will be useful in techniques such as correlative LM and EM (CLEM) where small samples of cells are treated and where there may be variable responses. For Golgi study, we outline a series of stereological estimators that are suited to these analyses and suggest workflows, which have the potential to enhance the speed and relevance of data acquisition in volume-SEM.
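    The "5-10 sections/images taken from a volume-SEM series" would typically be chosen by systematic uniform random sampling; a minimal sketch of that selection step (our illustration, not the authors' code):

        import random

        def surs_indices(n_images, n_wanted):
            """Systematic uniform random sample of image indices from a stack."""
            period = max(1, n_images // n_wanted)
            start = random.randrange(period)
            return list(range(start, n_images, period))

        print(surs_indices(400, 8))  # e.g. roughly every 50th image of a 400-image series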

  8. Unbiased stereological estimation of d-dimensional volume in Rn from an isotropic random slice through a fixed point

    DEFF Research Database (Denmark)

    Jensen, Eva B. Vedel; Kiêu, K

    1994-01-01

    Unbiased stereological estimators of d-dimensional volume in R(n) are derived, based on information from an isotropic random r-slice through a specified point. The content of the slice can be subsampled by means of a spatial grid. The estimators depend only on spatial distances. As a fundamental ...... lemma, an explicit formula for the probability that an isotropic random r-slice in R(n) through 0 hits a fixed point in R(n) is given....

  9. Blood capillary length estimation from three-dimensional microscopic data by image analysis and stereology.

    Science.gov (United States)

    Kubínová, Lucie; Mao, Xiao Wen; Janáček, Jiří

    2013-08-01

    Studies of the capillary bed characterized by its length or length density are relevant in many biomedical studies. A reliable assessment of capillary length from two-dimensional (2D), thin histological sections is a rather difficult task as it requires physical cutting of such sections in randomized directions. This is often technically demanding, inefficient, or outright impossible. However, if 3D image data of the microscopic structure under investigation are available, methods of length estimation that do not require randomized physical cutting of sections may be applied. Two different rat brain regions were optically sliced by confocal microscopy and resulting 3D images processed by three types of capillary length estimation methods: (1) stereological methods based on a computer generation of isotropic uniform random virtual test probes in 3D, either in the form of spatial grids of virtual "slicer" planes or spherical probes; (2) automatic method employing a digital version of the Crofton relations using the Euler characteristic of planar sections of the binary image; and (3) interactive "tracer" method for length measurement based on a manual delineation in 3D of the axes of capillary segments. The presented methods were compared in terms of their practical applicability, efficiency, and precision.

  10. ISOL yield predictions from holdup-time measurements

    International Nuclear Information System (INIS)

    Spejewski, Eugene H.; Carter, H Kennon; Mervin, Brenden T.; Prettyman, Emily S.; Kronenberg, Andreas; Stracener, Daniel W

    2008-01-01

    A formalism based on a simple model is derived to predict ISOL yields for all isotopes of a given element based on a holdup-time measurement of a single isotope of that element. Model predictions, based on parameters obtained from holdup-time measurements, are compared to independently-measured experimental values

  11. Are performance-based measures predictive of work participation in patients with musculoskeletal disorders? A systematic review.

    Science.gov (United States)

    Kuijer, P P F M; Gouttebarge, V; Brouwer, S; Reneman, M F; Frings-Dresen, M H W

    2012-02-01

    Assessments of whether patients with musculoskeletal disorders (MSDs) can participate in work mainly consist of case history, physical examinations, and self-reports. Performance-based measures might add value in these assessments. This study answers the question: how well do performance-based measures predict work participation in patients with MSDs? A systematic literature search was performed to obtain longitudinal studies that used reliable performance-based measures to predict work participation in patients with MSDs. The following five sources of information were used to retrieve relevant studies: PubMed, Embase, AMA Guide to the Evaluation of Functional Ability, references of the included papers, and the expertise and personal file of the authors. A quality assessment specific for prognostic studies and an evidence synthesis were performed. Of the 1,230 retrieved studies, eighteen fulfilled the inclusion criteria. The studies included 4,113 patients, and the median follow-up period was 12 months. Twelve studies took possible confounders into account. Five studies were of good quality and thirteen of moderate quality. Two good-quality and all thirteen moderate-quality studies (83%) reported that performance-based measures were predictive of work participation. Two good-quality studies (11%) reported both an association and no association between performance-based measures and work participation. One good-quality study (6%) found no effect. A performance-based lifting test was used in fourteen studies and appeared to be predictive of work participation in thirteen studies. Strong evidence exists that a number of performance-based measures are predictive of work participation in patients with MSDs, especially lifting tests. Overall, the explained variance was modest.

  12. Using biased image analysis for improving unbiased stereological number estimation - a pilot simulation study of the smooth fractionator

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2006-01-01

    The smooth fractionator was introduced in 2002. The combination of a smoothing protocol with a computer-aided stereology tool provides better precision and a lighter workload. This study uses simulation to compare fractionator sampling based on the smooth design, the commonly used systematic uniformly random sampling design and the ordinary simple random sampling design. The smooth protocol is performed using biased information from crude (but fully automatic) image analysis of the fields of view. The different design paradigms are compared using simulation in three different cell distributions...

  13. Prediction method for cavitation erosion based on measurement of bubble collapse impact loads

    International Nuclear Information System (INIS)

    Hattori, S; Hirose, T; Sugiyama, K

    2009-01-01

    The prediction of cavitation erosion rates is important in order to evaluate the exact life of components. The measurement of impact loads in bubble collapses helps to predict the life under cavitation erosion. In this study, we carried out erosion tests and the measurements of impact loads in bubble collapses with a vibratory apparatus. We evaluated the incubation period based on a cumulative damage rule by measuring the impact loads of cavitation acting on the specimen surface and by using the 'constant impact load - number of impact loads curve', similar to the modified Miner's rule which is employed for fatigue life prediction. We found that the parameter Σ(F_i^α × n_i) (F_i: impact load, n_i: number of impacts, α: constant) is suitable for the evaluation of the erosion life. Moreover, we propose a new method that can predict the incubation period under various cavitation conditions.
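    A minimal sketch of the proposed erosion-life parameter, assuming the measured impact loads have been binned into classes F_i with counts n_i; the load classes, counts and exponent below are illustrative only, not data from the paper.

```python
# Hedged sketch of the cumulative-damage parameter sum(F_i**alpha * n_i) described
# above. F_i: impact-load class, n_i: number of impacts in that class, alpha: constant.

def damage_parameter(loads, counts, alpha):
    """Cumulative damage parameter: sum over load classes of F_i**alpha * n_i."""
    return sum((f ** alpha) * n for f, n in zip(loads, counts))

loads  = [2.0, 5.0, 10.0, 20.0]     # impact load classes, N (assumed)
counts = [9000, 2500, 400, 30]      # impacts counted per class during a test (assumed)
print(damage_parameter(loads, counts, alpha=2.0))
```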

  14. Some new, simple and efficient stereological methods and their use in pathological research and diagnosis

    DEFF Research Database (Denmark)

    Gundersen, H J; Bendtsen, T F; Korbo, L

    1988-01-01

    Stereology is a set of simple and efficient methods for quantitation of three-dimensional microscopic structures which is specifically tuned to provide reliable data from sections. Within the last few years, a number of new methods has been developed which are of special interest to pathologists...... are invariably simple and easy....

  15. Three-dimensional stereology as a tool for evaluating bladder outlet obstruction

    DEFF Research Database (Denmark)

    Wijk, J. Van der; Wijk, J. Van der; Horn, T.

    2008-01-01

    Ten males (mean age 69.7 years; range 58-84 years) with lower urinary tract symptoms (LUTS) suggestive of BOO and five controls (mean age 48.6 years; range 43-53 years) without LUTS were studied. All participants underwent a full examination, including determination of the International Prostate Symptom Score, laboratory analysis and a urodynamic evaluation. A cold-cup biopsy, taken during cystoscopy, was stereologically evaluated to determine the smooth muscle cell volume and the fractions of collagen and smooth muscle using light and electron microscopy. Results. The collagen fraction was higher in patients than in controls (probably because the patients were older)...

  16. Introducing Stereology as a Tool to Assess the Severity of Psoriasis

    DEFF Research Database (Denmark)

    Kamp, Søren; Stenderup, Karin; Rosada, Cecilia

    2008-01-01

    The purpose of this study was to introduce stereology as a novel tool in assessing the severity of psoriasis. Psoriasis is a well described chronic inflammatory skin disease affecting approximately 2% of the Caucasian population. The severity of psoriasis has been assessed by a multitude... ...to histological specimens in order to obtain three-dimensional properties from two-dimensional tissue samples. The psoriasis xenograft model used in this trial is accepted as a leading animal model for psoriasis. Psoriatic skin from psoriatic patients was grafted onto severe combined immunodeficient (SCID) mice...

  17. STEREOLOGICAL ANALYSIS OF THE COCHLEAR NUCLEI OF MONKEY (MACACA FASCICULARIS) AFTER DEAFFERENTATION

    Directory of Open Access Journals (Sweden)

    Ana M Insausti

    2011-05-01

    Full Text Available The cochlear nuclei (CN) in the brainstem receive the input signals from the inner ear through the cochlear nerve, and transmit these signals to higher auditory centres. A variety of lesions of the cochlear nerve cause deafness. As reported in the literature, artificial removal of auditory input, or 'deafferentation', induces structural alterations in the CN. The purpose of this study was to estimate a number of relevant stereological parameters of the CN in control and deafferented Macaca fascicularis monkeys.

  18. Systematic versus random sampling in stereological studies.

    Science.gov (United States)

    West, Mark J

    2012-12-01

    The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card within a set number of cards and others at equal intervals within the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
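    The card-deck analogy can be made concrete with a small sketch (not from the article) contrasting the two schemes; the deck size and sampling period are arbitrary.

```python
# Independent (simple) random sampling versus systematic uniform random sampling
# with a random start, using the card-deck analogy from the abstract above.
import random

def simple_random_sample(items, n):
    """Each draw is made without reference to the position of the other draws."""
    return random.sample(items, n)

def systematic_random_sample(items, period):
    """Pick a random start within the first 'period' items, then every period-th item."""
    start = random.randrange(period)
    return items[start::period]

deck = list(range(52))
print(simple_random_sample(deck, 4))       # e.g. [17, 3, 44, 29]
print(systematic_random_sample(deck, 13))  # e.g. [5, 18, 31, 44]
```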

  19. Stereology, an unbiased methodological approach to study plant anatomy and cytology: Past, present and future

    Czech Academy of Sciences Publication Activity Database

    Kubínová, Lucie; Radochová, Barbora; Lhotáková, Z.; Kubínová, Z.; Albrechtová, J.

    2017-01-01

    Roč. 36, č. 3 (2017), s. 187-205 ISSN 1580-3139 R&D Projects: GA MŠk(CZ) LM2015062 Institutional support: RVO:67985823 Keywords : chloroplast * confocal microscopy * leaf anatomy * mesophyll * stereological methods * systematic uniform random sampling Subject RIV: FS - Medical Facilities ; Equipment OBOR OECD: Medical laboratory technology (including laboratory samples analysis) Impact factor: 1.135, year: 2016

  20. Prediction of betavoltaic battery output parameters based on SEM measurements and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Yakimov, Eugene B.

    2016-01-01

    An approach for the prediction of ⁶³Ni-based betavoltaic battery output parameters is described. It consists of multilayer Monte Carlo simulation to obtain the depth dependence of the excess carrier generation rate inside the semiconductor converter, a determination of the collection probability based on electron beam induced current measurements, a calculation of the current induced in the semiconductor converter by beta-radiation, and SEM measurements of output parameters using the calculated induced current value. Such an approach makes it possible to predict the betavoltaic battery parameters and optimize the converter design for any real semiconductor structure and any thickness and specific activity of the beta-radiation source. - Highlights: • New procedure for betavoltaic battery output parameter prediction is described. • A depth dependence of beta particle energy deposition for Si and SiC is calculated. • Electron trajectories are assumed isotropic and uniformly started under simulation.
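    The final step implied above can be sketched, under assumptions, as a numerical integration of a depth-dependent generation rate G(z) (from Monte Carlo) weighted by a collection probability CP(z) (from electron beam induced current measurements): J ≈ q·∫G(z)·CP(z) dz. The profiles below are invented and not from the paper.

```python
# Hedged sketch: combine an assumed generation-rate depth profile with an assumed
# collection-probability profile to estimate the beta-induced current density.
import numpy as np

Q_E = 1.602e-19                          # elementary charge, C
z = np.linspace(0.0, 10e-6, 1000)        # depth into the converter, m

G  = 1.0e21 * np.exp(-z / 2e-6)          # e-h pair generation rate, pairs/(m^3 s) (assumed)
CP = np.clip(1.0 - z / 8e-6, 0.0, 1.0)   # collection probability vs. depth (assumed)

J = Q_E * np.sum(G * CP) * (z[1] - z[0]) # J ~ q * integral(G * CP dz), A/m^2
print(f"J ~ {J:.3e} A/m^2")
```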

  1. Predictive Validity of Curriculum-Based Measures for English Learners at Varying English Proficiency Levels

    Science.gov (United States)

    Kim, Jennifer Sun; Vanderwood, Michael L.; Lee, Catherine Y.

    2016-01-01

    This study examined the predictive validity of curriculum-based measures in reading for Spanish-speaking English learners (ELs) at various levels of English proficiency. Third-grade Spanish-speaking EL students were screened during the fall using DIBELS Oral Reading Fluency (DORF) and Daze. Predictive validity was examined in relation to spring…

  2. Blood Capillary Length Estimation from Three-Dimensional Microscopic Data by Image Analysis and Stereology

    Czech Academy of Sciences Publication Activity Database

    Kubínová, Lucie; Mao, X. W.; Janáček, Jiří

    2013-01-01

    Roč. 19, č. 4 (2013), s. 898-906 ISSN 1431-9276 R&D Projects: GA MŠk(CZ) ME09010; GA MŠk(CZ) LH13028; GA ČR(CZ) GAP108/11/0794 Institutional research plan: CEZ:AV0Z5011922 Institutional support: RVO:67985823 Keywords : capillaries * confocal microscopy * image analysis * length * rat brain * stereology Subject RIV: EA - Cell Biology Impact factor: 1.757, year: 2013

  3. A comparison of porosity analysis using 2D stereology estimates and 3D serial sectioning for additively manufactured Ti-6Al-2Sn-4Zr-2Mo alloy

    International Nuclear Information System (INIS)

    Ganti, Satya R.; Velez, Michael A.; Geier, Brian A.; Hayes, Brian J.; Turner, Bryan J.; Jenkins, Elizabeth J.

    2017-01-01

    Porosity is a typical defect in additively manufactured (AM) parts. Such defects limit the properties and performance of AM parts, and therefore need to be characterized accurately. Current methods for characterization of defects and microstructure rely on classical stereological methods that extrapolate information from two-dimensional images. The automation of serial sectioning provides an opportunity to precisely and accurately quantify porosity in three dimensions in materials. In this work, we analyzed the porosity of an additively manufactured Ti-6Al-2Sn-4Zr-2Mo sample using Robo-Met.3D®, an automated serial sectioning system. Image processing for three-dimensional reconstruction of the serial-sectioned two-dimensional images was performed using open source image analysis software (Fiji/ImageJ, Dream.3D, Paraview). The results from this 3D serial sectioning analysis were then compared to classical 2D stereological methods (Saltykov stereological theory). We found that for this dataset, the classical 2D methods underestimated the porosity size and distributions of the larger pores; a critical attribute for the fatigue behavior of the AM part. The results suggest that acquiring experimental data with equipment such as Robo-Met.3D® to measure the number and size of particles such as pores in a volume, irrespective of knowing their shape, is the better choice.

  4. Changes in total cell numbers of the basal ganglia in patients with multiple system atrophy - A stereological study

    DEFF Research Database (Denmark)

    Salvesen, Lisette; Ullerup, Birgitte H; Sunay, Fatma B

    2014-01-01

    Total numbers of neurons, oligodendrocytes, astrocytes, and microglia in the basal ganglia and red nucleus were estimated in brains from 11 patients with multiple system atrophy (MSA) and 11 age- and gender-matched control subjects with unbiased stereological methods. Compared to the control...

  5. First and second order stereology of hyaline cartilage: Application on mice femoral cartilage.

    Science.gov (United States)

    Noorafshan, Ali; Niazi, Behnam; Mohamadpour, Masoomeh; Hoseini, Leila; Hoseini, Najmeh; Owji, Ali Akbar; Rafati, Ali; Sadeghi, Yasaman; Karbalay-Doust, Saied

    2016-11-01

    Stereological techniques could be considered in research on cartilage to obtain quantitative data. The present study aimed to explain the application of first- and second-order stereological methods to the articular cartilage of mice, and the methods were applied to mice exposed to cadmium (Cd). The distal femoral articular cartilage of BALB/c mice (control and Cd-treated) was removed. Then, volume and surface area of the cartilage and number of chondrocytes were estimated using Cavalieri and optical disector techniques on isotropic uniform random sections. The pair-correlation function [g(r)] and cross-correlation function were calculated to express the spatial arrangement of chondrocytes-chondrocytes and chondrocytes-matrix (chondrocyte clustering/dispersing), respectively. The mean±standard deviation of the cartilage volume, surface area, and thickness were 1.4±0.1 mm³, 26.2±5.4 mm², and 52.8±6.7 μm, respectively. Besides, the mean number of chondrocytes was 680±200 (×10³). The cartilage volume, cartilage surface area, and number of chondrocytes were respectively reduced by 25%, 27%, and 27% in the Cd-treated mice in comparison to the control animals (p ...). ...cartilage components carried potential advantages for investigating the cartilage in different joint conditions. Chondrocyte clustering/dispersing and cellularity can be evaluated in cartilage assessment in normal or abnormal situations. Copyright © 2016 Elsevier GmbH. All rights reserved.
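    What the pair-correlation function expresses can be illustrated with a naive sketch for a 3D point pattern of cell centres; real second-order stereological estimators add edge corrections that are omitted here, and the coordinates below are synthetic.

```python
# Naive pair-correlation function g(r) for a 3D point pattern (e.g. chondrocyte
# centres). g(r) > 1 suggests clustering at distance r, g(r) < 1 dispersion.
# Edge corrections used in practice are deliberately omitted in this sketch.
import numpy as np

def pair_correlation(points, box, r_edges):
    n = len(points)
    density = n / np.prod(box)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                     # unique pairs only
    counts, _ = np.histogram(d, bins=r_edges)
    shell = 4.0 / 3.0 * np.pi * (r_edges[1:] ** 3 - r_edges[:-1] ** 3)
    expected = 0.5 * n * density * shell               # pairs expected for a Poisson pattern
    return counts / expected

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(500, 3))           # synthetic coordinates, um
r_edges = np.linspace(1.0, 25.0, 13)
print(np.round(pair_correlation(pts, (100.0, 100.0, 100.0), r_edges), 2))
```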

  6. Deterministic Predictions of Vessel Responses Based on Past Measurements

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Jensen, Jørgen Juncher

    2017-01-01

    The paper deals with a prediction procedure from which global wave-induced responses can be deterministically predicted a short time, 10-50 s, ahead of current time. The procedure relies on the autocorrelation function and takes into account prior measurements only; i.e. knowledge about wave...
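    A generic sketch of the underlying idea, prediction from past samples via the autocorrelation function, is given below; it fits autoregressive coefficients with the Yule-Walker equations and iterates them forward on assumed data. It illustrates the concept only and is not the authors' procedure.

```python
# Generic autocorrelation-based predictor: estimate AR coefficients (Yule-Walker)
# from past measurements and iterate the model a short horizon ahead.
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_coefficients(x, order):
    # assumes a (near) zero-mean signal
    acf = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    return solve_toeplitz(acf[:order], acf[1:order + 1])   # solves R a = r

def predict_ahead(x, coeffs, steps):
    hist = list(x)
    for _ in range(steps):
        hist.append(np.dot(coeffs, hist[-1:-len(coeffs) - 1:-1]))
    return np.array(hist[-steps:])

t = np.arange(2000) * 0.5                                   # 0.5 s sampling (assumed)
x = np.sin(2 * np.pi * 0.1 * t) + 0.1 * np.random.default_rng(5).normal(size=len(t))
a = ar_coefficients(x, order=30)
print(predict_ahead(x, a, steps=20))                        # ~10 s ahead of current time
```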

  7. Intralesional and metastatic heterogeneity in malignant melanomas demonstrated by stereologic estimates of nuclear volume

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Erlandsen, M

    1990-01-01

    Regional variability of nuclear 3-dimensional size can be estimated objectively using point-sampled intercepts obtained from different, defined zones within individual neoplasms. In the present study, stereologic estimates of the volume-weighted mean nuclear volume, nuclear vv, within peripheral...... melanomas showed large interindividual variation. This finding emphasizes that unbiased estimates of nuclear vv are robust to regional heterogeneity of nuclear volume and thus suitable for purposes of objective, quantitative malignancy grading of melanomas....

  8. FIRST USE OF STEREOLOGY TO QUANTIFY THE SURVIVAL OF FAT AUTOGRAFTS

    Directory of Open Access Journals (Sweden)

    Eduardo Serna Cuéllar

    2011-05-01

    Full Text Available It is not usual to perform quantitative analyses on surgical materials. Rather, they are evaluated clinically, through qualitative methods, and if quantitation is done, it is on a 2-dimensional basis. In this study, the long-term survival of fat autografts (FAG) in 40 subjects with facial soft tissue defects is quantified. An adipose tissue preparation from the abdomen obtained through liposuction and centrifugation is injected subcutaneously. Approximately 14 months later, the treated area is biopsied. Extensive computer-based histological analyses were performed using the stereological method in order to directly obtain three parameters: volume fraction of adipocytes in the fat tissue (VV), density (number per volume) of adipocytes in the fat tissue (NV), and the mean cell volume of adipocytes (VA) in each tissue sample. A set of equations based on these three quantitative parameters is produced for evaluation of the volumetric survival fraction (VSF) of FAG. The presented data evidenced a 66% survival fraction at the 14-month follow-up. In routine practice, it would be sufficient to perform this volumetric analysis on the injected and biopsied fat samples to know what fraction of the FAG has survived. This is an objective method for quantifying FAG survival and will allow a standardized comparison between different research series and authors.

  9. Machine learning based analytics of micro-MRI trabecular bone microarchitecture and texture in type 1 Gaucher disease.

    Science.gov (United States)

    Sharma, Gulshan B; Robertson, Douglas D; Laney, Dawn A; Gambello, Michael J; Terk, Michael

    2016-06-14

    Type 1 Gaucher disease (GD) is an autosomal recessive lysosomal storage disease, affecting bone metabolism, structure and strength. Current bone assessment methods are not ideal. Semi-quantitative MRI scoring is unreliable, not standardized, and only evaluates bone marrow. DXA BMD is also used but is a limited predictor of bone fragility/fracture risk. Our purpose was to measure trabecular bone microarchitecture, as a biomarker of bone disease severity, in type 1 GD individuals with different GD genotypes and to apply machine learning based analytics to discriminate between GD patients and healthy individuals. Micro-MR imaging of the distal radius was performed on 20 type 1 GD patients and 10 healthy controls (HC). Fifteen stereological and textural measures (STM) were calculated from the MR images. General linear models demonstrated significant differences between GD and HC, and GD genotypes. Stereological measures, main contributors to the first two principal components (PCs), explained ~50% of data variation and were significantly different between males and females. Subsequent PCs textural measures were significantly different between GD patients and HC individuals. Textural measures also significantly differed between GD genotypes, and distinguished between GD patients with normal and pathologic DXA scores. PCA and SVM predictive analyses discriminated between GD and HC with maximum accuracy of 73% and area under ROC curve of 0.79. Trabecular STM differences can be quantified between GD patients and HC, and GD sub-types using micro-MRI and machine learning based analytics. Work is underway to expand this approach to evaluate GD disease burden and treatment efficacy. Copyright © 2016 Elsevier Ltd. All rights reserved.
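    A hedged sketch of the analysis pattern described (principal components of stereological and textural measures feeding an SVM classifier under cross-validation) is shown below; the data are synthetic and the pipeline settings are assumptions, not the study's exact configuration.

```python
# PCA + SVM discrimination sketch on synthetic "stereological/textural measures".
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 15))          # 30 subjects x 15 measures (synthetic)
y = np.array([1] * 20 + [0] * 10)      # 1 = GD patient, 0 = healthy control (assumed split)
X[y == 1] += 0.8                       # inject a group difference so the demo separates

model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="linear"))
print(cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())
```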

  10. A novel approach to non-biased systematic random sampling: a stereologic estimate of Purkinje cells in the human cerebellum.

    Science.gov (United States)

    Agashiwala, Rajiv M; Louis, Elan D; Hof, Patrick R; Perl, Daniel P

    2008-10-21

    Non-biased systematic sampling using the principles of stereology provides accurate quantitative estimates of objects within neuroanatomic structures. However, the basic principles of stereology are not optimally suited for counting objects that selectively exist within a limited but complex and convoluted portion of the sample, such as occurs when counting cerebellar Purkinje cells. In an effort to quantify Purkinje cells in association with certain neurodegenerative disorders, we developed a new method for stereologic sampling of the cerebellar cortex, involving calculating the volume of the cerebellar tissues, identifying and isolating the Purkinje cell layer and using this information to extrapolate non-biased systematic sampling data to estimate the total number of Purkinje cells in the tissues. Using this approach, we counted Purkinje cells in the right cerebella of four human male control specimens, aged 41, 67, 70 and 84 years, and estimated the total Purkinje cell number for the four entire cerebella to be 27.03, 19.74, 20.44 and 22.03 million cells, respectively. The precision of the method is seen when comparing the density of the cells within the tissue: 266,274, 173,166, 167,603 and 183,575 cells/cm³, respectively. Prior literature documents Purkinje cell counts ranging from 14.8 to 30.5 million cells. These data demonstrate the accuracy of our approach. Our novel approach, which offers an improvement over previous methodologies, is of value for quantitative work of this nature. This approach could be applied to morphometric studies of other similarly complex tissues as well.

  11. Effect of praziquantel administration on hepatic stereology of mice infected with Schistosoma mansoni and fed a low-protein diet

    Directory of Open Access Journals (Sweden)

    L.A. Barros

    2009-09-01

    Full Text Available A study was undertaken to investigate the effect of administering praziquantel (PZQ), focusing on the liver stereological findings of malnourished mice infected with Schistosoma mansoni. Thirty female Swiss Webster mice (age: 21 days; weight: 8-14 g) were fed either a low-protein diet (8% protein) or standard chow (22% protein) for 15 days. Five mice in each group were infected with 50 cercariae each of the BH strain (Brazil). PZQ therapy (80 mg/kg body weight per day) was started on the 50th day of infection and consisted of daily administration for 5 days. Volume density (hepatocytes, sinusoids and hepatic fibrosis) was determined by stereology using a light microscope. Body weight gain and total serum albumin levels were always lower in undernourished mice. Our stereological study demonstrated that treatment increased both the volume density of hepatocytes [standard chow: 47.56% (treated) vs. 12.06% (control); low-protein chow: 30.98% (treated) vs. 21.44% (control)] and of hepatic sinusoids [standard chow: 12.52% (treated) vs. 9.06% (control); low-protein chow: 14.42% (treated) vs. 8.46% (control)], while hepatic fibrosis was reduced [standard chow: 39.92% (treated) vs. 78.88% (control); low-protein chow: 54.60% (treated) vs. 70.10% (control)]. On the other hand, mice fed low-protein chow showed decreased volume density of hepatocytes and hepatic fibrosis. In conclusion, our findings indicate that treatment with PZQ ameliorates hepatic schistosomiasis pathology even in mice fed a low-protein diet.

  12. Stereologic, histopathologic, flow cytometric, and clinical parameters in the prognostic evaluation of 74 patients with intraoral squamous cell carcinomas

    DEFF Research Database (Denmark)

    Bundgaard, T; Sørensen, Flemming Brandt; Gaihede, M

    1992-01-01

    BACKGROUND AND METHODS: A consecutive series of all 78 incident cases of intraoral squamous cell carcinoma occurring during a 2-year period in a population of 1.4 million inhabitants were evaluated by histologic score (the modified classification of Jacobsson et al.), flow cytometry, stereology, ...

  13. Effect of Sodium Cyclamate on the Rat Fetal Exocrine Pancreas: a Karyometric and Stereological Study

    OpenAIRE

    MARTINS, Alex Tadeu; SANTOS, Fabiano de Sant`Ana dos; SCANNAVINO, Fabio Luiz Ferreira; PIRES, Juliana Rico; ZUZA, Elizangela Partata; PADOVANI JUNIOR, Joao Armando; AZOUBEL, Reinaldo; MATEO, Miguel Angel Sala Di; LOPES, Ruberval Armando

    2010-01-01

    Cyclamate, a sweetening substance derived from N-cyclo-hexyl-sulfamic acid, is largely utilized as a non-caloric artificial sweetener in foods and beverages as well as in the pharmaceutical industry. The objective of this study was to evaluate karyometric and stereological alterations in the rat fetal pancreas resulting from the intraperitoneal administration of sodium cyclamate. The exocrine pancreas of ten rat fetuses was evaluated, five treated and five controls chosen at random, i...

  14. Morphometric changes in the spinal cord during prenatal life: a stereological study in sheep.

    Science.gov (United States)

    Sadeghinezhad, Javad; Zadsar, Narges; Hasanzadeh, Beal

    2018-03-01

    This study describes the volumetric changes in the spinal cord during prenatal life in sheep using quantitative stereological methods. Twenty healthy sheep fetuses were included in the present study, divided into four groups representing 9-11, 12-14, 15-17, and 18-20 weeks of gestation. In each group, the spinal cord was dissected out and sampled according to the unbiased systematic random sampling method then used for stereological estimations. The total volume of spinal cord, volume of gray matter (GM), volume of white matter (WM), ratio of GM volume to WM volume, and volume of central canal (CC) were estimated in the whole spinal cord and its various regions using Cavalieri's principle. The total volume of the spinal cord increased 8 times from week 9 to week 20. The cervical region showed the greatest (9.7 times) and the sacral region the least (6.3 times) volumetric change. The CC volume of the whole spinal cord increased 5.8 times from week 9 to week 20. The cervical region developed faster (8.2 times) and the thoracic region slower (4.4 times) than the total spinal cord. During development, the volume ratio of GM to WM decreased from lower toward upper regions. The greatest volume changes occurred mostly in weeks 9-11 and 12-14. The cervical region showed the greatest volume changes in comparison with other regions of the spinal cord.
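    The Cavalieri estimate used above can be written in one line: V = t · a_p · ΣP, with t the distance between sampled sections, a_p the area associated with each test point and ΣP the total number of points hitting the structure. The sketch below uses invented counts, not data from the study.

```python
# Minimal Cavalieri volume estimator: V = section_spacing * area_per_point * sum(points).

def cavalieri_volume(point_counts, section_spacing_mm, area_per_point_mm2):
    return section_spacing_mm * area_per_point_mm2 * sum(point_counts)

counts = [12, 18, 25, 31, 28, 22, 14, 6]   # points hitting the region per section (assumed)
print(cavalieri_volume(counts, section_spacing_mm=2.0, area_per_point_mm2=0.5), "mm^3")
```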

  15. No postnatal doubling of number of neurons in human Broca's areas (Brodmann areas 44 and 45)? A stereological study.

    Science.gov (United States)

    Uylings, H B M; Malofeeva, L I; Bogolepova, I N; Jacobsen, A M; Amunts, K; Zilles, K

    2005-01-01

    In this study we explored whether a postnatal doubling of the total number of neurons occurs in the human Brodmann areas 44 and 45 (Broca's area). We describe the most recent error prediction formulae and their application for the modern stereological estimators for volume and number of neurons. We estimated the number of neurons in 3D optical disector probes systematically random sampled throughout the entire Brodmann areas (BA) 44 and 45 in developing and young adult cases. In the relatively small number of male and female cases studied no substantial postnatal increase in total number of neurons occurred in areas 44 and 45; the volume of these areas reached adult values around 7 years. In addition, we did find indications that a shift from a right-over-left to a left-over-right asymmetry may occur in the volume of BA 45 during postnatal development. No major asymmetry in total number of neurons in BA 44 and 45 was detected.
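    The arithmetic behind systematically sampled 3D optical disector probes is the optical fractionator: N ≈ ΣQ⁻ / (ssf · asf · hsf), i.e. the raw count divided by the section, area and height sampling fractions. The fractions and count below are invented for illustration and are not the study's values.

```python
# Hedged optical-fractionator sketch: total number = raw disector count divided by
# the product of the sampling fractions.

def optical_fractionator(q_total, ssf, asf, hsf):
    return q_total / (ssf * asf * hsf)

N = optical_fractionator(
    q_total=850,    # neurons counted in all disectors (assumed)
    ssf=1 / 10,     # every 10th section sampled (assumed)
    asf=0.02,       # fraction of section area covered by counting frames (assumed)
    hsf=0.5,        # disector height / mean section thickness (assumed)
)
print(f"Estimated total neurons ~ {N:.2e}")
```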

  16. Stereological estimation of nuclear volume in benign and malignant melanocytic lesions of the skin. Inter- and intraobserver variability of malignancy grading

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Ottosen, P D

    1991-01-01

    The volume-weighted, mean nuclear volume (nuclear vv) may be estimated without any assumptions regarding nuclear shape using modern stereological techniques. As a part of an investigation concerning the prospects of nuclear vv for classification and malignancy grading of cutaneous melanocytic tum...

  17. Stereological brain volume changes in post-weaned socially isolated rats

    DEFF Research Database (Denmark)

    Fabricius, Katrine; Helboe, Lone; Steiniger-Brach, Björn

    2010-01-01

    ...have evaluated the neuroanatomical changes in this animal model in comparison to changes seen in schizophrenia. In this study, we applied stereological volume estimates to evaluate the total brain, the ventricular system, and the pyramidal and granular cell layers of the hippocampus in male and female Lister Hooded rats isolated from postnatal day 25 for 15 weeks. We observed the expected gender differences in total brain volume, with males having larger brains than females. Further, we found that isolated males had significantly smaller brains than group-housed controls and larger lateral ventricles than controls. However, this was not seen in female rats. Isolated males had a significantly smaller hippocampus, dentate gyrus and CA2/3, whereas isolated females had a significantly smaller CA1 compared to controls. Thus, our results indicate that long-term isolation of male rats leads to neuroanatomical...

  18. DNA level and stereologic estimates of nuclear volume in squamous cell carcinomas of the uterine cervix. A comparative study with analysis of prognostic impact

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Bichel, P; Jakobsen, A

    1992-01-01

    Grading of malignancy in squamous cell carcinomas of the uterine cervix is based on qualitative, morphologic examination and suffers from poor reproducibility. Using modern stereology, unbiased estimates of the three-dimensional, volume-weighted mean nuclear volume (nuclear vv), were obtained...... in pretreatment biopsies from 51 patients treated for cervical cancer in clinical Stages I through III (mean age of 56 years, follow-up period greater than 5 years). In addition, conventional, two-dimensional morphometric estimates of nuclear and mitotic features were obtained. DNA indices (DI) were estimated...

  19. Settlement Prediction of Road Soft Foundation Using a Support Vector Machine (SVM) Based on Measured Data

    Directory of Open Access Journals (Sweden)

    Yu Huiling

    2016-01-01

    Full Text Available The support vector machine (SVM) is a relatively new artificial intelligence technique which is increasingly being applied to geotechnical problems and is yielding encouraging results. SVM is a machine learning method based on statistical learning theory. A case study based on a road foundation engineering project shows that the forecast results are in good agreement with the measured data. The SVM model is also compared with a BP artificial neural network model and the traditional hyperbola method. The prediction results indicate that the SVM model has a better prediction ability than the BP neural network model and the hyperbola method. Therefore, settlement prediction based on the SVM model can reflect the actual settlement process more correctly. The results indicate that it is effective and feasible to use this method and that the nonlinear mapping relation between foundation settlement and its influencing factors can be expressed well. It will provide a new method to predict foundation settlement.
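    A minimal sketch of the idea, fitting a support vector regressor to a settlement-time record and querying it a short distance ahead, is shown below; the data, kernel and parameters are invented, and kernel models extrapolate poorly far beyond the observations, so this is illustrative only.

```python
# SVM-based settlement prediction sketch on a synthetic hyperbola-like record.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

days = np.arange(0, 200, 10, dtype=float).reshape(-1, 1)        # observation times (assumed)
settlement = 80.0 * days.ravel() / (60.0 + days.ravel())         # settlement, mm (synthetic)
settlement += np.random.default_rng(2).normal(0.0, 1.0, len(days))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
model.fit(days, settlement)
print(model.predict(np.array([[220.0], [250.0]])))               # predicted settlement, mm
```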

  20. Experimental validation of alternate integral-formulation method for predicting acoustic radiation based on particle velocity measurements.

    Science.gov (United States)

    Ni, Zhi; Wu, Sean F

    2010-09-01

    This paper presents experimental validation of an alternate integral-formulation method (AIM) for predicting acoustic radiation from an arbitrary structure based on the particle velocities specified on a hypothetical surface enclosing the target source. Both the normal and tangential components of the particle velocity on this hypothetical surface are measured and taken as the input to AIM codes to predict the acoustic pressures in both exterior and interior regions. The results obtained are compared with the benchmark values measured by microphones at the same locations. To gain some insight into practical applications of AIM, laser Doppler anemometer (LDA) and double hotwire sensor (DHS) are used as measurement devices to collect the particle velocities in the air. Measurement limitations of using LDA and DHS are discussed.

  1. A measurement-based method for predicting margins and uncertainties for unprotected accidents in the Integral Fast Reactor concept

    International Nuclear Information System (INIS)

    Vilim, R.B.

    1990-01-01

    A measurement-based method for predicting the response of an LMR core to unprotected accidents has been developed. The method processes plant measurements taken at normal operation to generate a stochastic model for the core dynamics. This model can be used to predict three sigma confidence intervals for the core temperature and power response. Preliminary numerical simulations performed for EBR-2 appear promising. 6 refs., 2 figs

  2. Stereological estimation of surface area and barrier thickness of fish gills in vertical sections.

    Science.gov (United States)

    Da Costa, Oscar T F; Pedretti, Ana Carolina E; Schmitz, Anke; Perry, Steven F; Fernandes, Marisa N

    2007-01-01

    Previous morphometric methods for estimation of the volume of components, surface area and thickness of the diffusion barrier in fish gills have taken advantage of the highly ordered structure of these organs for sampling and surface area estimations, whereas the thickness of the diffusion barrier has been measured orthogonally on perpendicularly sectioned material at subjectively selected sites. Although intuitively logical, these procedures do not have a demonstrated mathematical basis, do not involve random sampling and measurement techniques, and are not applicable to the gills of all fish. The present stereological methods apply the principles of surface area estimation in vertical uniform random sections to the gills of the Brazilian teleost Arapaima gigas. The tissue was taken from the entire gill apparatus of the right-hand or left-hand side (selected at random) of the fish by systematic random sampling and embedded in glycol methacrylate for light microscopy. Arches from the other side were embedded in Epoxy resin. Reference volume was estimated by the Cavalieri method in the same vertical sections that were used for surface density and volume density measurements. The harmonic mean barrier thickness of the water-blood diffusion barrier was calculated from measurements taken along randomly selected orientation lines that were sine-weighted relative to the vertical axis. The values thus obtained for the anatomical diffusion factor (surface area divided by barrier thickness) compare favourably with those obtained for other sluggish fish using existing methods.
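    The two summary quantities above can be sketched under assumptions: surface density from intersection counts with cycloid test lines on vertical sections (S_V = 2·I/L), and a harmonic mean of measured intercept lengths standing in for the barrier thickness (the geometric correction factor applied in full morphometric practice is omitted). All measurements below are invented.

```python
# Hedged sketch: surface density from cycloid test-line intersections, harmonic mean
# of barrier intercepts, and the anatomical diffusion factor ADF = S / tau_h.
import numpy as np

def surface_density(intersections, test_line_length_um):
    return 2.0 * intersections / test_line_length_um      # S_V = 2 * I / L, um^2 per um^3

def harmonic_mean(lengths_um):
    lengths_um = np.asarray(lengths_um, dtype=float)
    return len(lengths_um) / np.sum(1.0 / lengths_um)

S_V   = surface_density(intersections=240, test_line_length_um=12000.0)
S     = S_V * 5.0e9                                        # reference gill volume, um^3 (assumed)
tau_h = harmonic_mean([3.1, 2.4, 5.0, 1.9, 2.7, 4.2])      # barrier intercept lengths, um (assumed)
print(f"ADF ~ {S / tau_h:.2e} um")
```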

  3. DNA level and stereologic estimates of nuclear volume in squamous cell carcinomas of the uterine cervix. A comparative study with analysis of prognostic impact

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Bichel, P; Jakobsen, A

    1992-01-01

    Grading of malignancy in squamous cell carcinomas of the uterine cervix is based on qualitative, morphologic examination and suffers from poor reproducibility. Using modern stereology, unbiased estimates of the three-dimensional, volume-weighted mean nuclear volume (nuclear vv), were obtained...... in pretreatment biopsies from 51 patients treated for cervical cancer in clinical Stages I through III (mean age of 56 years, follow-up period greater than 5 years). In addition, conventional, two-dimensional morphometric estimates of nuclear and mitotic features were obtained. DNA indices (DI) were estimated...... carcinoma of the uterine cervix....

  4. The prediction of the LWR plant accident based on the measured plant data

    International Nuclear Information System (INIS)

    Miettinen, J.; Schmuck, P.

    2005-01-01

    In the case of an accident affecting a nuclear reactor, it is essential to anticipate the possible development of the situation in order to succeed in emergency response actions, i.e. to be warned early and to obtain sufficient information on the plant as far as possible. The ASTRID (Assessment of Source Term for Emergency Response based on Installation Data) project consists in developing a methodology of expertise to structure the work of technical teams and to facilitate cross-competence communication among EP players, and a qualified computer tool that could be commonly used by the European countries to reliably predict the source term in case of an accident in a light water reactor, using the information available on the plant. In many accident conditions the team of analysts may be located far away from the plant experiencing the accident, and their decision making is based on the on-line plant data transmitted to the crisis centre at intervals of 30-600 seconds. The plant condition has to be diagnosed based on this information. In the ASTRID project the plant status diagnostics has been studied for the European reactor types, including BWR, PWR and VVER plants. The directly measured plant data may be used for estimation of the break size from the primary system and its location. The break size prediction may be based on the pressurizer level, reactor vessel level, primary pressure and, in the case of a steam generator tube rupture, the steam generator level. The break prediction concept developed in the ASTRID project and its validity for the different plant types are presented in the paper, with the plant data created by plant-specific thermohydraulic simulation models. The tracking simulator attempts to follow the plant behavior on-line based on the measured plant data for the main process parameters and the most important boundary conditions. When the plant state tracking fails, the plant may be experiencing an accident, and the tracking...

  5. Short-arc measurement and fitting based on the bidirectional prediction of observed data

    Science.gov (United States)

    Fei, Zhigen; Xu, Xiaojie; Georgiadis, Anthimos

    2016-02-01

    To measure a short arc is a notoriously difficult problem. In this study, a bidirectional prediction method based on the Radial Basis Function Neural Network (RBFNN), applied to the observed data distributed along a short arc, is proposed to increase the corresponding arc length and thus improve its fitting accuracy. Firstly, the rationality of regarding the observed data as a time series is discussed in accordance with the definition of a time series. Secondly, the RBFNN is constructed to predict the observed data, where an interpolation method is used to enlarge the size of the training examples in order to improve the learning accuracy of the RBFNN's parameters. Finally, in the numerical simulation section, we focus on simulating how the size of the training sample and the noise level influence the learning error and prediction error of the built RBFNN. Typically, observed data coming from a 5° short arc are used to evaluate the performance of the Hyper method, known as the 'unbiased fitting method of circle', with different noise levels before and after prediction. A number of simulation experiments reveal that the fitting stability and accuracy of the Hyper method after prediction are far superior to those before prediction.

  6. Prediction of fermentation index of cocoa beans (Theobroma cacao L.) based on color measurement and artificial neural networks.

    Science.gov (United States)

    León-Roque, Noemí; Abderrahim, Mohamed; Nuñez-Alejos, Luis; Arribas, Silvia M; Condezo-Hoyos, Luis

    2016-12-01

    Several procedures are currently used to assess fermentation index (FI) of cocoa beans (Theobroma cacao L.) for quality control. However, all of them present several drawbacks. The aim of the present work was to develop and validate a simple image based quantitative procedure, using color measurement and artificial neural network (ANNs). ANN models based on color measurements were tested to predict fermentation index (FI) of fermented cocoa beans. The RGB values were measured from surface and center region of fermented beans in images obtained by camera and desktop scanner. The FI was defined as the ratio of total free amino acids in fermented versus non-fermented samples. The ANN model that included RGB color measurement of fermented cocoa surface and R/G ratio in cocoa bean of alkaline extracts was able to predict FI with no statistical difference compared with the experimental values. Performance of the ANN model was evaluated by the coefficient of determination, Bland-Altman plot and Passing-Bablok regression analyses. Moreover, in fermented beans, total sugar content and titratable acidity showed a similar pattern to the total free amino acid predicted through the color based ANN model. The results of the present work demonstrate that the proposed ANN model can be adopted as a low-cost and in situ procedure to predict FI in fermented cocoa beans through apps developed for mobile device. Copyright © 2016 Elsevier B.V. All rights reserved.
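    A hedged sketch of the colour-to-FI mapping is given below using a small feed-forward network on synthetic RGB features; the architecture, features and data are assumptions, and the paper's ANN, inputs (including the alkaline-extract R/G ratio) and training set may differ.

```python
# Small neural-network regression from RGB-derived colour features to a synthetic
# fermentation index (FI). Everything here is illustrative, not the paper's model.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
rgb = rng.uniform(40.0, 220.0, size=(120, 3))        # mean R, G, B of bean surface (synthetic)
X = np.hstack([rgb, rgb[:, :1] / rgb[:, 1:2]])       # add an R/G ratio feature
fi = 0.4 + 0.005 * rgb[:, 0] - 0.003 * rgb[:, 2] + rng.normal(0.0, 0.02, 120)  # synthetic FI

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, fi)
print(model.predict(X[:5]))
```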

  7. Microcomputer-based tests for repeated-measures: Metric properties and predictive validities

    Science.gov (United States)

    Kennedy, Robert S.; Baltzley, Dennis R.; Dunlap, William P.; Wilkes, Robert L.; Kuntz, Lois-Ann

    1989-01-01

    A menu of psychomotor and mental acuity tests was refined. Field applications of such a battery are, for example, a study of the effects of toxic agents or exotic environments on performance readiness, or the determination of fitness for duty. The key requirement of these tasks is that they be suitable for repeated-measures applications, and so questions of stability and reliability are a continuing, central focus of this work. After the initial (practice) session, seven replications of 14 microcomputer-based performance tests (32 measures) were completed by 37 subjects. Each test in the battery had previously been shown to stabilize in less than five 90-second administrations and to possess retest reliabilities greater than r = 0.707 for three minutes of testing. However, all the tests had never been administered together as a battery and they had never been self-administered. In order to provide predictive validity for intelligence measurement, the Wechsler Adult Intelligence Scale-Revised and the Wonderlic Personnel Test were obtained on the same subjects.

  8. STEREOLOGICAL STUDIES ON FETAL VASCULAR DEVELOPMENT IN HUMAN PLACENTAL VILLI

    Directory of Open Access Journals (Sweden)

    Terry M Mayhew

    2011-05-01

    Full Text Available In human pregnancy, fetal well-being depends on the development of placental villi and the creation and maintenance of fetal microvessels within them. The aim of this study was to define stereological measures of the growth, capillarization and maturation of villi and of fetoplacental angiogenesis and capillary remodelling. Placentas were collected at 12-41 weeks of gestation and assigned to six age groups spanning equal age ranges. Tissue samples were randomised for position and orientation. Overall growth of peripheral (intermediate and terminal) villi and their capillaries was evaluated using total volumes, surface areas and lengths. Measures of villous capillarization comprised capillary volume, surface and length densities and capillary:villus surface and length ratios. Size and shape remodelling of villi and capillaries was assessed using mean cross-sectional areas, perimeters and shape coefficients (perimeter²/area). Group comparisons were drawn by analysis of variance. Villous and capillary volumes, surfaces and lengths increased significantly throughout gestation. Villous maturation involved phasic (capillary:villus surface and length ratios) or progressive (volume, surface and length densities) increases in indices of villous capillarization. It also involved isomorphic thinning (cross-sectional areas and perimeters declined but shape coefficients did not alter). In contrast, growth of capillaries did not involve changes in luminal areas or perimeters. The results show that villous growth and fetal angiogenesis involve increases in overall length rather than calibre and that villous differentiation involves increased capillarization. Although they do not distinguish between increases in the lengths versus numbers of capillary segments, other studies have shown that capillaries switch from branching to non-branching angiogenesis during gestation. Combined with maintenance of capillary calibres, these processes will contribute to the reduced...

  9. Simple area-based measurement for multidetector computed tomography to predict left ventricular size

    International Nuclear Information System (INIS)

    Schlett, Christopher L.; Kwait, Dylan C.; Mahabadi, Amir A.; Hoffmann, Udo; Bamberg, Fabian; O'Donnell, Christopher J.; Fox, Caroline S.

    2010-01-01

    Measures of left ventricular (LV) mass and dimensions are independent predictors of morbidity and mortality. We determined whether an axial area-based method by computed tomography (CT) provides an accurate estimate of LV mass and volume. A total of 45 subjects (49% female, 56.0 ± 12 years) with a wide range of LV geometry underwent contrast-enhanced 64-slice CT. LV mass and volume were derived from 3D data. 2D images were analysed to determine LV area, the direct transverse cardiac diameter (dTCD) and the cardiothoracic ratio (CTR). Furthermore, feasibility was confirmed in 100 Framingham Offspring Cohort subjects. 2D measures of LV area, dTCD and CTR were 47.3 ± 8 cm², 14.7 ± 1.5 cm and 0.54 ± 0.05, respectively. 3D-derived LV volume (end-diastolic) and mass were 148.9 ± 45 cm³ and 124.2 ± 34 g, respectively. Excellent inter- and intra-observer agreement were shown for 2D LV area measurements (both intraclass correlation coefficients (ICC) = 0.99, p 0.27). Compared with traditionally used CTR, LV size can be accurately predicted based on a simple and highly reproducible axial LV area-based measurement. (orig.)

  10. Three-dimensional stereology as a tool for evaluating bladder outlet obstruction

    DEFF Research Database (Denmark)

    Van Der Wijk, Jasper; Van Der Wijk, Jan; Horn, Thomas

    2008-01-01

    Objective. In a pilot study we evaluated whether implementation of a novel 3D stereologic technique can prove that bladder outlet obstruction (BOO) is associated with morphologic changes in the bladder wall. Material and methods. Ten males (mean age 69.7 years; range 58-84 years) with lower urinary tract symptoms (LUTS) suggestive of BOO and five controls (mean age 48.6 years; range 43-53 years) without LUTS were studied. All participants underwent a full examination, including determination of the International Prostate Symptom Score, laboratory analysis and a urodynamic evaluation. A cold... Conclusions. This pilot study shows that, even with the implementation of subtle morphometric techniques, there seems to be no relationship between the severity of BOO and bladder wall morphology. It is possible that interstitial collagen in the bladder wall increases with age. It seems that bladder wall...

  11. Stereological estimates of nuclear volume in normal germ cells and carcinoma in situ of the human testis

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Müller, J

    1990-01-01

    Carcinoma in situ of the testis may appear many years prior to the development of an invasive tumour. Using point-sampled intercepts, base-line data concerning unbiased stereological estimates of the volume-weighted mean nuclear volume (nuclear vV) were obtained in 50 retrospective serial testicular biopsies from 10 patients with carcinoma in situ. All but two patients eventually developed an invasive growth. Testicular biopsies from 10 normal adult individuals and five prepubertal boys were included as controls. Nuclear vV in testicular carcinoma in situ was significantly larger than that of morphologically normal spermatogonia (2P = 1.0 × 10⁻¹⁹), with only minor overlap. Normal spermatogonia from controls had, on average, smaller nuclear vV than morphologically normal spermatogonia in biopsies with ipsi- or contra-lateral carcinoma in situ (2P = 5.2 × 10⁻³). No difference in nuclear vV was found...

  12. Quantification of rat retinal growth and vascular population changes after single and split doses of proton irradiation: translational study using stereology methods

    Science.gov (United States)

    Mao, Xiao W.; Archambeau, John O.; Kubinova, Lucie; Boyle, Soames; Petersen, Georgia; Grove, Roger; Nelson, G. A. (Principal Investigator)

    2003-01-01

    This study quantified architectural and population changes in the rat retinal vasculature after proton irradiation using stereology. A 100 MeV conformal proton beam delivered 8, 14, 20 and 28 Gy as single and split doses to the whole eye. The vascular networks were prepared from retinal digests. Stereological methods were used to obtain the area of the retina and unbiased estimates of microvessel/artery/vein endothelial, pericyte and smooth muscle population, and vessel length. The retinal area increased progressively in the unirradiated, age-matched controls and in the retinas irradiated with 8 and 14 Gy, indicating uniform progressive retinal growth. No growth occurred after 20 and 28 Gy. Regression analysis of total endothelial cell number in all vessels (arteries, veins and capillaries) after irradiation documented a progressive time- and dose-dependent cell loss occurring over 15 to 24 months. The difference from controls was significant (P ...) ...populations after split doses. At 10 Gy, the rate of endothelial cell loss, a dose parameter used to characterize the time- and dose-dependent loss of the endothelial population, was doubled.

  13. Stereological quantification of lymphocytes in skin biopsies from atopic dermatitis patients

    DEFF Research Database (Denmark)

    Ellingsen, A R; Sørensen, F B; Larsen, Jytte Overgaard

    2001-01-01

    Atopic dermatitis (AD) is histologically characterized by lymphocytic infiltration of the skin and quantitative assessment is required. This study introduces stereological techniques to quantify the number of lymphocytes in skin biopsies. Four-millimetre punch biopsies were taken from skin with active eczema in 8 adults with AD and from clinically normal skin from 4 of the patients. Five persons without allergy or skin disease served as controls. The mean number of lymphocytes in 4-mm skin biopsies was 469,000 and 124,000 in active eczema and in clinically normal skin, respectively. Compared with controls, the number of lymphocytes in biopsies increased by a factor of 6.8 in active eczema and a factor of 1.8 in clinically normal skin. If 20% of skin is affected by eczema the total number of lymphocytes located in the affected skin can be estimated to 1.27 × 10¹⁰. A patient with clinically...

  14. Stereology of the thyroid gland in Indo-Pacific bottlenose dolphin (Tursiops aduncus) in comparison with human (Homo sapiens): quantitative and functional implications.

    Directory of Open Access Journals (Sweden)

    Brian Chin Wing Kot

    Full Text Available The mammalian thyroid gland maintains basal metabolism in tissues for optimal function. Determining thyroid volume is important in assessing growth and involution. Volume estimation is also important in stereological studies. Direct measurements of colloid volume and the nuclear-to-cytoplasmic ratio of the follicular cells may provide important information about thyroid gland function, such as hormone storage and secretion, which helps understand the changes at morphological and functional levels. The present study determined the colloid volume using a simple stereological principle and the nuclear-to-cytoplasmic ratio of 4 Indo-Pacific bottlenose dolphin and 2 human thyroid glands. In both dolphin and human thyroid glands, the size of the follicles tended to be quite variable. The distribution of large and small follicles within the thyroid gland was also found to be random in both the dolphin and human thyroid gland; however, the size of follicles appeared to decrease as a function of increasing age in the dolphin thyroid gland. The mean colloid volume of the dolphin thyroid gland and human thyroid gland was 1.22×10⁵ µm³ and 7.02×10⁵ µm³, respectively. The dolphin and human subjects had a significant difference in the mean colloid volume. The mean N/C ratio of the dolphin thyroid follicular epithelia and human follicular epithelia was 0.50 and 0.64, respectively. The dolphin and human subjects had a significant difference in the mean N/C ratio. This information contributes to understanding dolphin thyroid physiology and its structural adaptations to meet the physical demands of the aquatic environment, and aids with ultrasonography and corrective therapy in live subjects.

  15. Stereological estimation of nuclear volume and other quantitative histopathological parameters in the prognostic evaluation of supraglottic laryngeal squamous cell carcinoma

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Bennedbaek, O; Pilgaard, J

    1989-01-01

    The aim of this study was to investigate various approaches to the grading of malignancy in pre-treatment biopsies from patients with supraglottic laryngeal squamous cell carcinoma. The prospects of objective malignancy grading based on stereological estimation of the volume-weighted mean nuclear volume, nuclear Vv, and nuclear volume fraction, Vv(nuc/tis), along with morphometrical 2-dimensional estimation of nuclear density index, NI, and mitotic activity index, MI, were investigated and compared with the current morphological, multifactorial grading system. The reproducibility among two observers of the latter was poor in the material, which consisted of 35 biopsy specimens. Unbiased estimates of nuclear Vv were on average 385 µm³ (CV = 0.44), with more than 90% of the associated variance attributable to differences in nuclear Vv among individual lesions. Nuclear Vv was positively...

  16. Stereological estimation of ovarian volume and number of follicles in low dose of Vitex agnus castus treated mice

    OpenAIRE

    HAMIDIAN, Gholamreza; YAHYAVI, Fariba

    2014-01-01

    Vitex agnus castus (VAC) has been proven to have a wide range of biological activities. It is commonly used in the treatment of menstrual disorders resulting from corpus luteum deficiency, including premenstrual symptoms and spasmodic dysmenorrhea, for certain menopausal conditions, and for insufficient lactation. The aim of this study was to investigate the effects of a low dose of VAC essential oil on ovarian volume and oocyte number in mice by a stereological technique. In this study 10 young...

  17. Stereological estimation of the mean and variance of nuclear volume from vertical sections

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1991-01-01

    The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear vv, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform probability in a physical disector and Cavalieri's direct estimator of volume, the unbiased, number-weighted mean nuclear volume, nuclear vN, of the same benign and malignant nuclear populations is also estimated. Having obtained estimates of nuclear volume in both the volume- and number distribution of volume, a detailed investigation of nuclear size variability is possible. Benign and malignant nuclear populations show approximately the same relative variability with regard to nuclear volume, and the presented data are compatible with a simple size transformation from the smaller benign nuclei...

  18. DNA methylation-based measures of biological age: meta-analysis predicting time to death

    Science.gov (United States)

    Chen, Brian H.; Marioni, Riccardo E.; Colicino, Elena; Peters, Marjolein J.; Ward-Caviness, Cavin K.; Tsai, Pei-Chien; Roetker, Nicholas S.; Just, Allan C.; Demerath, Ellen W.; Guan, Weihua; Bressler, Jan; Fornage, Myriam; Studenski, Stephanie; Vandiver, Amy R.; Moore, Ann Zenobia; Tanaka, Toshiko; Kiel, Douglas P.; Liang, Liming; Vokonas, Pantel; Schwartz, Joel; Lunetta, Kathryn L.; Murabito, Joanne M.; Bandinelli, Stefania; Hernandez, Dena G.; Melzer, David; Nalls, Michael; Pilling, Luke C.; Price, Timothy R.; Singleton, Andrew B.; Gieger, Christian; Holle, Rolf; Kretschmer, Anja; Kronenberg, Florian; Kunze, Sonja; Linseisen, Jakob; Meisinger, Christine; Rathmann, Wolfgang; Waldenberger, Melanie; Visscher, Peter M.; Shah, Sonia; Wray, Naomi R.; McRae, Allan F.; Franco, Oscar H.; Hofman, Albert; Uitterlinden, André G.; Absher, Devin; Assimes, Themistocles; Levine, Morgan E.; Lu, Ake T.; Tsao, Philip S.; Hou, Lifang; Manson, JoAnn E.; Carty, Cara L.; LaCroix, Andrea Z.; Reiner, Alexander P.; Spector, Tim D.; Feinberg, Andrew P.; Levy, Daniel; Baccarelli, Andrea; van Meurs, Joyce; Bell, Jordana T.; Peters, Annette; Deary, Ian J.; Pankow, James S.; Ferrucci, Luigi; Horvath, Steve

    2016-01-01

    Estimates of biological age based on DNA methylation patterns, often referred to as “epigenetic age” or “DNAm age”, have been shown to be robust biomarkers of age in humans. We previously demonstrated that, independent of chronological age, epigenetic age assessed in blood predicted all-cause mortality in four human cohorts. Here, we expanded our original observation to 13 different cohorts for a total sample size of 13,089 individuals, including three racial/ethnic groups. In addition, we examined whether incorporating information on blood cell composition into the epigenetic age metrics improves their predictive power for mortality. All considered measures of epigenetic age acceleration were predictive of mortality (p ≤ 8.2×10⁻⁹), independent of chronological age, even after adjusting for additional risk factors (p < 5.4×10⁻⁴), and within the racial/ethnic groups that we examined (non-Hispanic whites, Hispanics, African Americans). Epigenetic age estimates that incorporated information on blood cell composition led to the smallest p-values for time to death (p = 7.5×10⁻⁴³). Overall, this study a) strengthens the evidence that epigenetic age predicts all-cause mortality above and beyond chronological age and traditional risk factors, and b) demonstrates that epigenetic age estimates that incorporate information on blood cell counts lead to highly significant associations with all-cause mortality. PMID:27690265

  19. Confocal stereology and image analysis: methods for estimating geometrical characteristics of cells and tissues from three-dimensional confocal images

    Czech Academy of Sciences Publication Activity Database

    Kubínová, Lucie; Janáček, Jiří; Karen, Petr; Radochová, Barbora; Difato, Francesco; Krekule, Ivan

    2004-01-01

    Vol. 53, Suppl. 1 (2004), pp. S47-S55. ISSN 0862-8408. R&D Projects: GA ČR GA304/01/0257; GA ČR GA310/02/1470; GA AV ČR KJB6011309; GA AV ČR KJB5039302. Grant - others: SI - CZ(CZ) KONTAKT 001/2001. Institutional research plan: CEZ:AV0Z5011922. Keywords: confocal microscopy * image analysis * stereology. Subject RIV: EA - Cell Biology. Impact factor: 1.140, year: 2004

  20. Calorimeter prediction based on multiple exponentials

    International Nuclear Information System (INIS)

    Smith, M.K.; Bracken, D.S.

    2002-01-01

    Calorimetry allows very precise measurements of nuclear material to be carried out, but it also requires relatively long measurement times to do so. The ability to accurately predict the equilibrium response of a calorimeter would significantly reduce the amount of time required for calorimetric assays. An algorithm has been developed that is effective at predicting the equilibrium response. This multi-exponential prediction algorithm is based on an iterative technique using commercial fitting routines that fit a constant plus a variable number of exponential terms to calorimeter data. Details of the implementation and the results of trials on a large number of calorimeter data sets will be presented
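
    As a rough illustration of the idea (not the authors' algorithm or their commercial fitting routines), the sketch below fits a constant plus an increasing number of exponential terms to a synthetic calorimeter trace and reads the predicted equilibrium response off the constant term.

      # Fit y(t) = c + sum_i a_i * exp(-t/tau_i) to early calorimeter data and
      # predict the equilibrium response as the constant c. Synthetic data only.
      import numpy as np
      from scipy.optimize import curve_fit

      def multi_exp(t, c, *params):
          """c + sum of exponentials; params = (a1, tau1, a2, tau2, ...)."""
          y = np.full_like(t, c, dtype=float)
          for a, tau in zip(params[0::2], params[1::2]):
              y += a * np.exp(-t / tau)
          return y

      t = np.linspace(0, 120, 240)                       # minutes (synthetic)
      y = 5.0 - 3.0 * np.exp(-t / 30.0) + np.random.normal(0, 0.01, t.size)

      best = None
      for n_exp in (1, 2, 3):                            # try more terms until the fit stops improving
          p0 = [y[-1]]
          for k in range(n_exp):
              p0 += [-1.0, 20.0 * (k + 1)]               # rough starting amplitude and time constant
          try:
              popt, _ = curve_fit(multi_exp, t, y, p0=p0, maxfev=20000)
          except RuntimeError:
              continue
          rss = np.sum((y - multi_exp(t, *popt)) ** 2)
          if best is None or rss < best[0]:
              best = (rss, popt)

      print("predicted equilibrium response:", best[1][0])   # the constant term c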

  1. Intralesional and metastatic heterogeneity in malignant melanomas demonstrated by stereologic estimates of nuclear volume

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Erlandsen, M

    1990-01-01

    Regional variability of nuclear 3-dimensional size can be estimated objectively using point-sampled intercepts obtained from different, defined zones within individual neoplasms. In the present study, stereologic estimates of the volume-weighted mean nuclear volume, nuclear vv, within peripheral...... on average larger in the peripheral zones of primary melanomas than nuclear vv in central zones (2p = 6.7 × 10⁻⁴), whereas no zonal differences were demonstrated in metastatic lesions (2p = 0.21). A marked intraindividual variation was demonstrated between primary and corresponding secondary melanomas (2p...... melanomas showed large interindividual variation. This finding emphasizes that unbiased estimates of nuclear vv are robust to regional heterogeneity of nuclear volume and thus suitable for purposes of objective, quantitative malignancy grading of melanomas....

  2. Assessment of MRI-Based Automated Fetal Cerebral Cortical Folding Measures in Prediction of Gestational Age in the Third Trimester.

    Science.gov (United States)

    Wu, J; Awate, S P; Licht, D J; Clouchoux, C; du Plessis, A J; Avants, B B; Vossough, A; Gee, J C; Limperopoulos, C

    2015-07-01

    Traditional methods of dating a pregnancy based on history or sonographic assessment have a large variation in the third trimester. We aimed to assess the ability of various quantitative measures of brain cortical folding on MR imaging in determining fetal gestational age in the third trimester. We evaluated 8 different quantitative cortical folding measures to predict gestational age in 33 healthy fetuses by using T2-weighted fetal MR imaging. We compared the accuracy of the prediction of gestational age by these cortical folding measures with the accuracy of prediction by brain volume measurement and by a previously reported semiquantitative visual scale of brain maturity. Regression models were constructed, and measurement biases and variances were determined via a cross-validation procedure. The cortical folding measures are accurate in the estimation and prediction of gestational age (mean of the absolute error, 0.43 ± 0.45 weeks) and perform better than (P = .024) brain volume (mean of the absolute error, 0.72 ± 0.61 weeks) or sonography measures (SDs approximately 1.5 weeks, as reported in literature). Prediction accuracy is comparable with that of the semiquantitative visual assessment score (mean, 0.57 ± 0.41 weeks). Quantitative cortical folding measures such as global average curvedness can be an accurate and reliable estimator of gestational age and brain maturity for healthy fetuses in the third trimester and have the potential to be an indicator of brain-growth delays for at-risk fetuses and preterm neonates. © 2015 by American Journal of Neuroradiology.
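
    The evaluation strategy — regress gestational age on a cortical folding measure and report a cross-validated mean absolute error — can be sketched as follows; the feature and target arrays are synthetic stand-ins, not the study's data.

      # Cross-validated regression of gestational age on one folding measure,
      # reporting mean absolute error in weeks (synthetic data, 33 "fetuses").
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(0)
      ga_weeks = rng.uniform(28, 38, 33)                     # true gestational ages
      curvedness = 0.1 * ga_weeks + rng.normal(0, 0.05, 33)  # e.g. global average curvedness
      X = curvedness.reshape(-1, 1)

      pred = cross_val_predict(LinearRegression(), X, ga_weeks, cv=LeaveOneOut())
      print("cross-validated MAE (weeks):", mean_absolute_error(ga_weeks, pred))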

  3. Copula based prediction models: an application to an aortic regurgitation study

    Directory of Open Access Journals (Sweden)

    Shoukri Mohamed M

    2007-06-01

    Full Text Available Abstract Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fractions measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to normal distribution. Therefore predictions made from the correlation-based model corresponding to the pre-operative ejection fraction measurements in the lower range may not be accurate. Further it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots are gamma distributions. The copula based prediction model is estimated as: Post-operative ejection fraction = -0.0933 + 0
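
    A minimal sketch of the copula idea, under the assumptions that the marginals are gamma (as the q-q plots suggested) and that a Clayton Archimedean copula, with its parameter derived from Kendall's tau, is an acceptable dependence model; the ejection-fraction values and variable names are invented, not the study's data.

      # Fit gamma marginals, couple them with a Clayton copula, and simulate replicates.
      import numpy as np
      from scipy import stats

      pre = np.array([0.55, 0.60, 0.48, 0.35, 0.62, 0.50, 0.58, 0.40, 0.65, 0.45])  # synthetic EFs
      post = pre * 0.84 - 0.07 + np.random.normal(0, 0.03, pre.size)

      # 1) marginals (gamma, as suggested by the exploratory analysis)
      g_pre = stats.gamma.fit(pre)
      g_post = stats.gamma.fit(post)

      # 2) dependence: Clayton parameter theta from Kendall's tau (theta = 2*tau/(1-tau), tau > 0 assumed)
      tau, _ = stats.kendalltau(pre, post)
      theta = 2 * tau / (1 - tau)

      # 3) simulate from the Clayton copula by conditional inversion, then map back
      #    through the fitted gamma quantile functions
      def simulate(n, theta, rng=np.random.default_rng(1)):
          u1 = rng.uniform(size=n)
          w = rng.uniform(size=n)
          u2 = ((w ** (-theta / (1.0 + theta)) - 1.0) * u1 ** (-theta) + 1.0) ** (-1.0 / theta)
          return stats.gamma.ppf(u1, *g_pre), stats.gamma.ppf(u2, *g_post)

      sim_pre, sim_post = simulate(1000, theta)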

  4. Stereological Study on the Positive Effect of Running Exercise on the Capillaries in the Hippocampus in a Depression Model

    Directory of Open Access Journals (Sweden)

    Linmu Chen

    2017-11-01

    Full Text Available Running exercise is an effective method to improve depressive symptoms when combined with drugs. However, the underlying mechanisms are not fully clear. Cerebral blood flow perfusion in depressed patients is significantly lower in the hippocampus. Physical activity can achieve cerebrovascular benefits. The purpose of this study was to evaluate the impacts of running exercise on capillaries in the hippocampal CA1 and dentate gyrus (DG) regions. The chronic unpredictable stress (CUS) depression model was used in this study. CUS rats were given 4 weeks of running exercise from the fifth week to the eighth week (20 min every day from Monday to Friday each week). The sucrose consumption test was used to measure anhedonia. Furthermore, stereological methods were used to investigate the capillary changes among the control group, CUS/Standard group and CUS/Running group. Sucrose consumption significantly increased in the CUS/Running group. Running exercise has positive effects on the capillary parameters in the hippocampal CA1 and DG regions, such as the total volume, total length and total surface area. These results demonstrated that the protection of capillaries by running exercise in the hippocampal CA1 and DG might be one of the structural bases for the exercise-induced treatment of depression-like behavior. These results suggest that drugs and behavior influence capillaries and may be considered as a new means for depression treatment in the future.

  5. Stereological estimation of nuclear volume and other quantitative histopathological parameters in the prognostic evaluation of supraglottic laryngeal squamous cell carcinoma

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Bennedbaek, O; Pilgaard, J

    1989-01-01

    The aim of this study was to investigate various approaches to the grading of malignancy in pre-treatment biopsies from patients with supraglottic laryngeal squamous cell carcinoma. The prospects of objective malignancy grading based on stereological estimation of the volume-weighted mean nuclear...... observers of the latter was poor in the material which consisted of 35 biopsy specimens. Unbiased estimates of nuclear Vv were on average 385 µm³ (CV = 0.44), with more than 90% of the associated variance attributable to differences in nuclear Vv among individual lesions. Nuclear Vv was positively....... None of the investigated categorical and quantitative parameters (cutoff points = means) reached the level of significance with respect to prognostic value. However, nuclear Vv showed the best information concerning survival (2p = 0.08), and this estimator offers optimal features for objective...

  6. Effects of bisphenol A treatment during pregnancy on kidney development in mice: a stereological and histopathological study.

    Science.gov (United States)

    Nuñez, P; Fernandez, T; García-Arévalo, M; Alonso-Magdalena, P; Nadal, A; Perillan, C; Arguelles, J

    2018-04-01

    Bisphenol A (BPA) is a chemical found in plastics that resembles oestrogen in organisms. Developmental exposure to endocrine-disrupting chemicals, such as BPA, increases the susceptibility to type 2 diabetes (T2DM) and cardiovascular diseases. Animal studies have reported a nephron deficit in offspring exposed to maternal diabetes. The aim of this study was to investigate the prenatal BPA exposure effects on nephrogenesis in a mouse model that was predisposed to T2DM. This study quantitatively evaluated the renal structural changes using stereology and histomorphometry methods. The OF1 pregnant mice were treated with a vehicle or BPA (10 or 100 μg/kg/day) during days 9-16 of gestation (early nephrogenesis). The 30-day-old offspring were sacrificed, and tissue samples were collected and prepared for histopathological and stereology studies. Glomerular abnormalities and reduced glomerular formation were observed in the BPA offspring. The kidneys of the BPA10 and BPA100 female offspring had a significantly lower glomerular number and density than those of the CONTROL female offspring. The glomerular histomorphometry revealed a significant difference between the female and male CONTROL offspring for the analysed glomerular parameters that disappeared in the BPA10 and BPA100 offspring. In addition, the kidney histopathological examination showed typical male cuboidal epithelial cells of the Bowman capsule in the female BPA offspring. Exposure to environmentally relevant doses of BPA during embryonic development altered nephrogenesis. These structural changes could be associated with an increased risk of developing cardiometabolic diseases later in life.

  7. Stereological estimation of the mean and variance of nuclear volume from vertical sections

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1991-01-01

    The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear vv, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform...... probability in a physical disector and Cavalieri's direct estimator of volume, the unbiased, number-weighted mean nuclear volume, nuclear vN, of the same benign and malignant nuclear populations is also estimated. Having obtained estimates of nuclear volume in both the volume- and number distribution...... to the larger malignant nuclei. Finally, the variance in the volume distribution of nuclear volume is estimated by shape-independent estimates of the volume-weighted second moment of the nuclear volume, vv2, using both a manual and a computer-assisted approach. The working procedure for the description of 3-D...

  8. Ultrastructural and stereological analysis of trypanosomatids of the genus Endotrypanum

    Directory of Open Access Journals (Sweden)

    Maurílio J. Soares

    1991-06-01

    Full Text Available Culture forms of four strains of Endotrypanum (E. schaudinni and E. monterogeii) were processed for transmission electron microscopy and analyzed at the ultrastructural level. Quantitative data about some cytoplasmic organelles were obtained by stereology. All culture forms were promastigotes. In their cytoplasm four different organelles could be found: lipid inclusions (0.2-0.4 µm in diameter), membrane-bounded vacuoles (0.10-0.28 µm in diameter), glycosomes (0.2-0.3 µm in diameter), and the mitochondrion. The kinetoplast appears as a thin band, except for the strain IM201, which possesses a broader structure and possibly is not a member of this genus. Clusters of virus-like particles were seen in the cytoplasm of the strain LV88. The data obtained show that all strains have the typical morphological features of the trypanosomatids. Only strain IM201 could be differentiated from the others, due to its larger kinetoplast-DNA network and its large mitochondrial and glycosomal relative volumes. The morphometrical data did not allow differentiation between E. schaudinni (strains IM217 and M6226) and E. monterogeii (strain LV88).

  9. Field Measurement-Based System Identification and Dynamic Response Prediction of a Unique MIT Building.

    Science.gov (United States)

    Cha, Young-Jin; Trocha, Peter; Büyüköztürk, Oral

    2016-07-01

    Tall buildings are ubiquitous in major cities and house the homes and workplaces of many individuals. However, relatively few studies have been carried out to study the dynamic characteristics of tall buildings based on field measurements. In this paper, the dynamic behavior of the Green Building, a unique 21-story tall structure located on the campus of the Massachusetts Institute of Technology (MIT, Cambridge, MA, USA), was characterized and modeled as a simplified lumped-mass beam model (SLMM), using data from a network of accelerometers. The accelerometer network was used to record structural responses due to ambient vibrations, blast loading, and the October 16th 2012 earthquake near Hollis Center (ME, USA). Spectral and signal coherence analysis of the collected data was used to identify natural frequencies, modes, foundation rocking behavior, and structural asymmetries. A relation between foundation rocking and structural natural frequencies was also found. Natural frequencies and structural acceleration from the field measurements were compared with those predicted by the SLMM which was updated by inverse solving based on advanced multiobjective optimization methods using the measured structural responses and found to have good agreement.

  10. Field Measurement-Based System Identification and Dynamic Response Prediction of a Unique MIT Building

    Directory of Open Access Journals (Sweden)

    Young-Jin Cha

    2016-07-01

    Full Text Available Tall buildings are ubiquitous in major cities and house the homes and workplaces of many individuals. However, relatively few studies have been carried out to study the dynamic characteristics of tall buildings based on field measurements. In this paper, the dynamic behavior of the Green Building, a unique 21-story tall structure located on the campus of the Massachusetts Institute of Technology (MIT, Cambridge, MA, USA), was characterized and modeled as a simplified lumped-mass beam model (SLMM), using data from a network of accelerometers. The accelerometer network was used to record structural responses due to ambient vibrations, blast loading, and the October 16th 2012 earthquake near Hollis Center (ME, USA). Spectral and signal coherence analysis of the collected data was used to identify natural frequencies, modes, foundation rocking behavior, and structural asymmetries. A relation between foundation rocking and structural natural frequencies was also found. Natural frequencies and structural acceleration from the field measurements were compared with those predicted by the SLMM which was updated by inverse solving based on advanced multiobjective optimization methods using the measured structural responses and found to have good agreement.

  11. Field Measurement-Based System Identification and Dynamic Response Prediction of a Unique MIT Building

    Science.gov (United States)

    Cha, Young-Jin; Trocha, Peter; Büyüköztürk, Oral

    2016-01-01

    Tall buildings are ubiquitous in major cities and house the homes and workplaces of many individuals. However, relatively few studies have been carried out to study the dynamic characteristics of tall buildings based on field measurements. In this paper, the dynamic behavior of the Green Building, a unique 21-story tall structure located on the campus of the Massachusetts Institute of Technology (MIT, Cambridge, MA, USA), was characterized and modeled as a simplified lumped-mass beam model (SLMM), using data from a network of accelerometers. The accelerometer network was used to record structural responses due to ambient vibrations, blast loading, and the October 16th 2012 earthquake near Hollis Center (ME, USA). Spectral and signal coherence analysis of the collected data was used to identify natural frequencies, modes, foundation rocking behavior, and structural asymmetries. A relation between foundation rocking and structural natural frequencies was also found. Natural frequencies and structural acceleration from the field measurements were compared with those predicted by the SLMM which was updated by inverse solving based on advanced multiobjective optimization methods using the measured structural responses and found to have good agreement. PMID:27376303

  12. STEREOLOGICAL ESTIMATION OF ITO CELLS FROM RAT LIVER USING THE OPTICAL FRACTIONATOR - A PRELIMINARY REPORT

    Directory of Open Access Journals (Sweden)

    Ricardo Marcos

    2011-05-01

    Full Text Available In the last two decades, much light has been shed on hepatic fibrosis, and the activation/proliferation of Ito cells (IC) emerged to play a central role. Therefore, it is essential to have solid quantitative data in nonpathological conditions; yet, these data are scarce and confined to "number per area" or semiquantitative information. Moreover, the supposed heterogeneous distribution of IC in the hepatic lobule was never analysed with design-based (unbiased) stereology. In the present study, the total number (N) of IC in rat liver was estimated for the first time, by combining immunocytochemistry with the optical fractionator. Quantification was extended to the hepatocytes, to disclose the IC index, an often-used ratio in hepatology. Systematic uniform random liver sections were obtained from male Wistar rats (n = 3), and immunostained against glial fibrillary acidic protein (GFAP), a known specific marker for hepatic IC. For the first time, these were marked against GFAP in thick (30 μm) paraffin sections. The estimated N of IC was 224 × 10⁶, with a coefficient of error of 0.04 or 0.06, depending on the particular equation used (based on the so-called "quadratic approximation"). The IC index was 91 IC/1000 hepatocytes. Concerning the lobular heterogeneity, it was shown that the liver harbours a larger total number of periportal IC and hepatocytes.
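
    For orientation, the optical fractionator estimate itself is simple arithmetic: the cells counted in the disectors are scaled by the inverse sampling fractions. The sketch below uses invented sampling fractions and counts, not the study's values.

      # Optical fractionator: N = sum(Q-) * (1/ssf) * (1/asf) * (1/tsf), where ssf, asf
      # and tsf are the section, area and thickness sampling fractions (illustrative numbers).
      counted_cells = 450          # sum of Q- (cells counted in all disectors)
      ssf = 1 / 10                 # every 10th section sampled
      asf = 0.02                   # disector frame area / x-y step area
      tsf = 10 / 30                # disector height / measured section thickness

      N = counted_cells * (1 / ssf) * (1 / asf) * (1 / tsf)
      print(f"estimated total number of Ito cells: {N:.3e}")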

  13. PREDICTION OF THE EXTREMAL SHAPE FACTOR OF SPHEROIDAL PARTICLES

    Directory of Open Access Journals (Sweden)

    Daniel Hlubinka

    2011-05-01

    Full Text Available In the stereological unfolding problem for spheroidal particles the extremal shape factor is predicted. The theory of extreme values has been used to show that extremes of the planar shape factor of particle sections tend to the same limit distribution as extremes of the original shape factor for both the conditional and marginal distribution. Attention is then paid to the extreme shape factor conditioned by the particle size. Normalizing constants are evaluated for a parametric model and the numerical procedure is tested on real data from metallography.

  14. A Geometrical-based Vertical Gain Correction for Signal Strength Prediction of Downtilted Base Station Antennas in Urban Areas

    DEFF Research Database (Denmark)

    Rodriguez, Ignacio; Nguyen, Huan Cong; Sørensen, Troels Bundgaard

    2012-01-01

    -based extension to standard empirical path loss prediction models can give quite reasonable accuracy in predicting the signal strength from tilted base station antennas in small urban macro-cells. Our evaluation is based on measurements on several sectors in a 2.6 GHz Long Term Evolution (LTE) cellular network......, with electrical antenna downtilt in the range from 0 to 10 degrees, as well as predictions based on ray-tracing and 3D building databases covering the measurement area. Although the calibrated ray-tracing predictions are highly accurate compared with the measured data, the combined LOS/NLOS COST-WI model...

  15. Comparison of continuous versus categorical tumor measurement-based metrics to predict overall survival in cancer treatment trials

    Science.gov (United States)

    An, Ming-Wen; Mandrekar, Sumithra J.; Branda, Megan E.; Hillman, Shauna L.; Adjei, Alex A.; Pitot, Henry; Goldberg, Richard M.; Sargent, Daniel J.

    2011-01-01

    Purpose The categorical definition of response assessed via the Response Evaluation Criteria in Solid Tumors has documented limitations. We sought to identify alternative metrics for tumor response that improve prediction of overall survival. Experimental Design Individual patient data from three North Central Cancer Treatment Group trials (N0026, n=117; N9741, n=1109; N9841, n=332) were used. Continuous metrics of tumor size based on longitudinal tumor measurements were considered in addition to a trichotomized response (TriTR: Response vs. Stable vs. Progression). Cox proportional hazards models, adjusted for treatment arm and baseline tumor burden, were used to assess the impact of the metrics on subsequent overall survival, using a landmark analysis approach at 12-, 16- and 24-weeks post baseline. Model discrimination was evaluated using the concordance (c) index. Results The overall best response rates for the three trials were 26%, 45%, and 25% respectively. While nearly all metrics were statistically significantly associated with overall survival at the different landmark time points, the c-indices for the traditional response metrics ranged from 0.59-0.65; for the continuous metrics from 0.60-0.66 and for the TriTR metrics from 0.64-0.69. The c-indices for TriTR at 12-weeks were comparable to those at 16- and 24-weeks. Conclusions Continuous tumor-measurement-based metrics provided no predictive improvement over traditional response based metrics or TriTR; TriTR had better predictive ability than best TriTR or confirmed response. If confirmed, TriTR represents a promising endpoint for future Phase II trials. PMID:21880789
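
    A sketch of a landmark analysis with a concordance index, in the spirit of the design described above; the column names, covariate set and use of the lifelines package are assumptions for illustration, not the trials' analysis code.

      # Landmark analysis: keep patients still alive at the landmark, fit a Cox model on a
      # tumor-response metric measured by then, and report Harrell's c-index.
      from lifelines import CoxPHFitter
      from lifelines.utils import concordance_index

      def landmark_cindex(df, landmark_weeks, metric_col):
          # restrict to patients at risk at the landmark; measure survival from the landmark
          lm = df[df["os_weeks"] > landmark_weeks].copy()
          lm["os_from_landmark"] = lm["os_weeks"] - landmark_weeks
          cph = CoxPHFitter()
          cph.fit(lm[[metric_col, "baseline_burden", "arm", "os_from_landmark", "death"]],
                  duration_col="os_from_landmark", event_col="death")
          risk = cph.predict_partial_hazard(lm)
          # higher hazard should mean shorter survival, hence the minus sign
          return concordance_index(lm["os_from_landmark"], -risk, lm["death"])

      # usage (df holds one row per patient with the assumed columns):
      # for t in (12, 16, 24):
      #     print(t, landmark_cindex(df, t, "tritr"))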

  16. Stereologic, histopathologic, flow cytometric, and clinical parameters in the prognostic evaluation of 74 patients with intraoral squamous cell carcinomas

    DEFF Research Database (Denmark)

    Bundgaard, T; Sørensen, Flemming Brandt; Gaihede, M

    1992-01-01

    , tumor size, and the TNM classification. RESULTS: The investigation showed a significant difference between the volume-weighted mean nuclear volume (nuclear vv) of oral leukoplakia (n = 29) and oral squamous cell carcinomas (P = 0.001). The value of the parameters as prognostic indicators of survival......BACKGROUND AND METHODS: A consecutive series of all 78 incident cases of intraoral squamous cell carcinoma occurring during a 2-year period in a population of 1.4 million inhabitants were evaluated by histologic score (the modified classification of Jacobsson et al.), flow cytometry, stereology...

  17. Scaling-based prediction of magnetic anisotropy in grain-oriented steels

    Directory of Open Access Journals (Sweden)

    Najgebauer Mariusz

    2017-06-01

    Full Text Available The paper presents the scaling-based approach to analysis and prediction of magnetic anisotropy in grain-oriented steels. Results of the anisotropy scaling indicate the existence of two universality classes. The hybrid approach to prediction of magnetic anisotropy, combining the scaling analysis with the ODFs method, is proposed. This approach is examined in prediction of angular dependencies of magnetic induction as well as magnetization curves for the 111-35S5 steel. It is shown that it is possible to predict anisotropy of magnetic properties based on measurements in three arbitrary directions for φ = 0°, 60° and 90°. The relatively small errors between predicted and measured values of magnetic induction are obtained.

  18. Assessment of in vivo MR imaging compared to physical sections in vitro-A quantitative study of brain volumes using stereology

    DEFF Research Database (Denmark)

    Jelsing, Jacob; Rostrup, Egill; Markenroth, Karin

    2005-01-01

    The object of the present study was to compare stereological estimates of brain volumes obtained in vivo by magnetic resonance imaging (MRI) to corresponding volumes from physical sections in vitro. Brains of ten domestic pigs were imaged using a 3-T scanner. The volumes of different brain...... However, although the intraobserver difference of MRI estimates was acceptable, the interobserver difference was not. A statistically highly significant difference of 11-41% was observed between observers for volume estimates of all compartments considered. The study demonstrates that quantitative MRI...

  19. The total number of Leydig and Sertoli cells in the testes of men across various age groups - a stereological study

    DEFF Research Database (Denmark)

    Petersen, Peter M; Seierøe, Karina; Pakkenberg, Bente

    2015-01-01

    is particularly sensitive to methodological problems. Therefore, using the optical fractionator technique and a sampling design specifically optimized for human testes, we estimated the total number of Sertoli and Leydig cells in the testes from 26 post mortem male subjects ranging in age from 16 to 80 years...... of Sertoli cells with age; no such decline was found for Leydig cells. Quantitative stereological analysis of post mortem tissue may help understand the influence of age or disease on the number of human testicular cells....

  20. EPOS1 - a multiparameter measuring system to earthquake prediction research

    Energy Technology Data Exchange (ETDEWEB)

    Streil, T.; Oeser, V. [SARAD GmbH, Dresden (Germany); Heinicke, J.; Koch, U.; Wiegand, J.

    1998-12-31

    The approach to earthquake prediction by geophysical, geochemical and hydrological measurements is a long and winding road. Nevertheless, the results show a progress in that field (e.g. Kobe). This progress is also a result of a new generation of measuring equipment. SARAD has developed a versatile measuring system (EPOS1) based on experiences and recent results from different research groups. It is able to record selected parameters suitable to earthquake prediction research. A micro-computer system handles data exchange, data management and control. It is connected to a modular sensor system. Sensor modules can be selected according to the actual needs at the measuring site. (author)

  1. STEREOLOGICAL QUANTITATION OF LEYDIG AND SERTOLI CELLS IN THE TESTIS FROM YOUNG AND OLD MEN

    Directory of Open Access Journals (Sweden)

    Peter M Petersen

    2011-05-01

    Full Text Available One of the newer stereological methods, the optical fractionator, was applied to the study of the effects of ageing on the human testis. The estimated total numbers of Sertoli and Leydig cells per testis in men younger than 30 years were 430×10⁶ (CV = SD/mean = 0.35) and 117×10⁶ (CV = 0.53), respectively, while in men older than 50 years the estimated total Sertoli cell number was 266×10⁶ (CV = 0.46) and the mean Leydig cell number 83×10⁶ (CV = 0.53). The difference between the number of Sertoli cells in men younger than 30 years compared with men older than 50 years was close to statistical significance (p = 0.052), while no difference was found in total Leydig cell number (p = 0.22).

  2. Quantification of microstructural features in α/β titanium alloys

    International Nuclear Information System (INIS)

    Tiley, J.; Searles, T.; Lee, E.; Kar, S.; Banerjee, R.; Russ, J.C.; Fraser, H.L.

    2004-01-01

    Mechanical properties of α/β Ti alloys are closely related to their microstructure. The complexity of the microstructural features involved makes it rather difficult to develop models for predicting properties of these alloys. Developing predictive rules-based models for α/β Ti alloys requires a huge database consisting of quantified microstructural data. This in turn requires the development of rigorous stereological procedures capable of quantifying the various microstructural features of interest imaged using optical and scanning electron microscopy (SEM) micrographs. In the present paper, rigorous stereological procedures have been developed for quantifying four important microstructural features in these alloys: thickness of Widmanstaetten α laths, colony scale factor, prior β grain size, and volume fraction of Widmanstaetten α laths

  3. Predicting and measuring fluid responsiveness with echocardiography

    Directory of Open Access Journals (Sweden)

    Ashley Miller

    2016-06-01

    Full Text Available Echocardiography is ideally suited to guide fluid resuscitation in critically ill patients. It can be used to assess fluid responsiveness by looking at the left ventricle, aortic outflow, inferior vena cava and right ventricle. Static measurements and dynamic variables based on heart–lung interactions all combine to predict and measure fluid responsiveness and assess response to intravenous fluid resuscitation. Thorough knowledge of these variables, the physiology behind them and the pitfalls in their use allows the echocardiographer to confidently assess these patients and, in combination with clinical judgement, manage them appropriately.

  4. Solar energy prediction and verification using operational model forecasts and ground-based solar measurements

    International Nuclear Information System (INIS)

    Kosmopoulos, P.G.; Kazadzis, S.; Lagouvardos, K.; Kotroni, V.; Bais, A.

    2015-01-01

    The present study focuses on predictions of solar energy and their verification, using ground-based solar measurements from the Hellenic Network for Solar Energy and the National Observatory of Athens network, as well as solar radiation operational forecasts provided by the MM5 mesoscale model. The evaluation was carried out independently for the different networks, for two forecast horizons (1 and 2 days ahead), for the seasons of the year, for varying solar elevation, for the indicative energy potential of the area, and for four classes of cloud cover based on the calculated clearness index (k_t): CS (clear sky), SC (scattered clouds), BC (broken clouds) and OC (overcast). The seasonal dependence presented relative root mean square error (rRMSE) values ranging from 15% (summer) to 60% (winter), while the solar elevation dependence revealed a high effectiveness and reliability near local noon (rRMSE ∼30%). An increase of the errors with cloudiness was also observed. For CS with mean GHI (global horizontal irradiance) ∼ 650 W/m² the errors are 8%, for SC 20%, and for BC and OC the errors were greater (>40%) but correspond to much lower radiation levels (<120 W/m²) and consequently lower energy potential impact. The total energy potential for each ground station ranges from 1.5 to 1.9 MWh/m², while the mean monthly forecast error was found to be consistently below 10%. - Highlights: • Long term measurements at different atmospheric cases are needed for energy forecasting model evaluations. • The total energy potential at the Greek sites presented ranges from 1.5 to 1.9 MWh/m². • Mean monthly energy forecast errors are within 10% for all cases analyzed. • Cloud presence results in an additional forecast error that varies with the cloud cover.
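
    The verification metric can be sketched directly: relative RMSE of forecast versus measured GHI, grouped into sky classes by the clearness index k_t. The thresholds and column names below are illustrative assumptions, not the study's definitions.

      # Relative RMSE of forecast vs. measured GHI, grouped by clearness-index class.
      import numpy as np
      import pandas as pd

      def rrmse(measured, forecast):
          return 100.0 * np.sqrt(np.mean((forecast - measured) ** 2)) / np.mean(measured)

      def sky_class(kt):
          if kt >= 0.7:  return "CS"   # clear sky
          if kt >= 0.5:  return "SC"   # scattered clouds
          if kt >= 0.3:  return "BC"   # broken clouds
          return "OC"                  # overcast

      # df columns (assumed): ghi_meas, ghi_mm5, kt  (one row per hourly value)
      def verify(df):
          df = df.assign(cls=df["kt"].apply(sky_class))
          return df.groupby("cls").apply(lambda g: rrmse(g["ghi_meas"], g["ghi_mm5"]))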

  5. DNA level and stereologic estimates of nuclear volume in squamous cell carcinomas of the uterine cervix. A comparative study with analysis of prognostic impact

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Bichel, P; Jakobsen, A

    1992-01-01

    Grading of malignancy in squamous cell carcinomas of the uterine cervix is based on qualitative, morphologic examination and suffers from poor reproducibility. Using modern stereology, unbiased estimates of the three-dimensional, volume-weighted mean nuclear volume (nuclear vv), were obtained...... in pretreatment biopsies from 51 patients treated for cervical cancer in clinical Stages I through III (mean age of 56 years, follow-up period greater than 5 years). In addition, conventional, two-dimensional morphometric estimates of nuclear and mitotic features were obtained. DNA indices (DI) were estimated...... of nuclear vv were only of marginal prognostic significance (2P = 0.07). However, Cox multivariate regression analysis showed independent prognostic value of patient age and nuclear vv along with clinical stage and DI. All other investigated variables were rejected from the model. A prognostic index...

  6. A comparison of predictions and measurements for the Stripa simulated drift experiment

    International Nuclear Information System (INIS)

    Hodgkinson, D.

    1991-02-01

    This paper presents a comparison of measurements and predictions for the simulated drift experiment based on groundwater flow to the D-holes at the SCV site. The comparison was carried out on behalf of the Stripa task force on fracture flow modelling, as a learning exercise for the validation exercise to be based on flow to the validation drift. The paper summarises the characterisation data and their preliminary interpretation, and reviews the fracture flow modelling predictions made by teams from AEA Harwell, Golder Associates and Lawrence Berkeley Laboratory. The predictions are compared with each other and with the D-hole inflow measurements, and this experience is used to provide detailed feedback to future experimental and modelling work. (35 refs.)

  7. A comparison of porosity analysis using 2D stereology estimates and 3D serial sectioning for additively manufactured Ti 6Al 2Sn 4Zr 2Mo alloy; Vergleich der Porositaetsanalyse einer Ti 6Al 2Sn 4Zr 2Mo-Legierung aus additiver Fertigung mittels stereologischer Schaetzungen (2D) und mit Serienschnitten (3D)

    Energy Technology Data Exchange (ETDEWEB)

    Ganti, Satya R.; Velez, Michael A.; Geier, Brian A.; Hayes, Brian J.; Turner, Bryan J.; Jenkins, Elizabeth J. [UES Inc., Dayton, OH (United States)

    2017-02-15

    Porosity is a typical defect in additively manufactured (AM) parts. Such defects limit the properties and performance of AM parts, and therefore need to be characterized accurately. Current methods for characterization of defects and microstructure rely on classical stereological methods that extrapolate information from two dimensional images. The automation of serial sectioning provides an opportunity to precisely and accurately quantify porosity in three dimensions in materials. In this work, we analyzed the porosity of an additively manufactured Ti 6Al 2Sn 4Zr 2Mo sample using Robo-Met.3D®, an automated serial sectioning system. Image processing for three dimensional reconstruction of the serial-sectioned two dimensional images was performed using open source image analysis software (Fiji/ImageJ, Dream.3D, Paraview). The results from this 3D serial sectioning analysis were then compared to classical 2D stereological methods (Saltykov stereological theory). We found that for this dataset, the classical 2D methods underestimated the porosity size and distributions of the larger pores; a critical attribute for the fatigue behavior of the AM part. The results suggest that acquiring experimental data with equipment such as Robo-Met.3D® to measure the number and size of features such as pores in a volume, irrespective of their shape, is the better choice.
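
    The volume-fraction part of the comparison can be illustrated with a toy example: the classical 2D route approximates the volume fraction by the mean area fraction on sections (V_V ≈ A_A), while the 3D route counts voxels directly. This does not reproduce the Saltykov size-distribution analysis, and the binary volume below is synthetic, standing in for a serial-sectioning reconstruction.

      # Toy comparison of 2D stereological porosity (area fraction) vs. 3D voxel counting.
      import numpy as np

      rng = np.random.default_rng(3)
      volume = rng.random((200, 200, 200)) < 0.02      # True = pore voxel (synthetic)

      # 3D: direct voxel counting on the reconstructed stack
      porosity_3d = volume.mean()

      # 2D: average area fraction over a subsample of sections (Delesse principle, V_V ≈ A_A)
      sections = volume[::20]                          # every 20th slice
      porosity_2d = sections.mean(axis=(1, 2)).mean()

      print(f"3D voxel porosity:    {porosity_3d:.4f}")
      print(f"2D stereological A_A: {porosity_2d:.4f}")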

  8. The Dimensions of the Orbital Cavity Based on High-Resolution Computed Tomography of Human Cadavers

    DEFF Research Database (Denmark)

    Felding, Ulrik Ascanius; Bloch, Sune Land; Buchwald, Christian von

    2016-01-01

    for surface area. To the authors' knowledge, this study is the first to have measured the entire surface area of the orbital cavity. The volume and surface area of the orbital cavity were estimated in computed tomography scans of 11 human cadavers using unbiased stereological sampling techniques. The mean (± SD......) total volume and total surface area of the orbital cavities were 24.27 ± 3.88 cm³ and 32.47 ± 2.96 cm², respectively. There was no significant difference in volume (P = 0.315) or surface area (P = 0.566) between the 2 orbital cavities. The stereological technique proved to be a robust and unbiased method...... that may be used as a gold standard for comparison with automated computer software. Future imaging studies in blow-out fracture patients may be based on individual and relative calculation involving both herniated volume and fractured surface area in relation to the total volume and surface area...
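
    The Cavalieri volume estimate used in studies like this reduces to V = T · (a/p) · ΣP. A minimal sketch with invented slice spacing, grid constant and point counts (not the study's numbers):

      # Cavalieri estimator: T = distance between sampled CT slices, a/p = area per grid
      # point, P_i = points hitting the cavity on slice i.
      slice_spacing_cm = 0.2           # T
      area_per_point_cm2 = 0.05        # a/p of the counting grid
      points_per_slice = [8, 14, 21, 26, 24, 18, 10, 4]   # P_i counts (illustrative)

      volume_cm3 = slice_spacing_cm * area_per_point_cm2 * sum(points_per_slice)
      print(f"Cavalieri volume estimate: {volume_cm3:.2f} cm^3")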

  9. Spent-fuel composition: a comparison of predicted and measured data

    International Nuclear Information System (INIS)

    Thomas, C.C. Jr.; Cobb, D.D.; Ostenak, C.A.

    1981-03-01

    The uncertainty in predictions of the nuclear materials content of spent light-water reactor fuel was investigated to obtain guidelines for nondestructive spent-fuel verification and assay. Values predicted by the reactor operator were compared with measured values from fuel reprocessors for six reactors (three PWR and three BWR). The study indicates that total uranium, total plutonium, fissile uranium, fissile plutonium, and total fissile content can be predicted with biases ranging from 1 to 6% and variabilities (1-sigma) ranging from 2 to 7%. The higher values generally are associated with BWRs. Based on the results of this study, nondestructive assay measurements that are accurate and precise to 5 to 10% (1-sigma) or better should be useful for quantitative analyses of typical spent fuel

  10. Data-Based Predictive Control with Multirate Prediction Step

    Science.gov (United States)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is computational requirements increasing with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with multirate prediction step. One result is a reduced influence of prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.

  11. Molecular Dynamics Simulations and Kinetic Measurements to Estimate and Predict Protein-Ligand Residence Times.

    Science.gov (United States)

    Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea

    2016-08-11

    Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.

  12. Proliferation of Sertoli cells during development of the human testis assessed by stereological methods

    DEFF Research Database (Denmark)

    Cortes, D; Müller, J; Skakkebaek, N E

    1987-01-01

    Sertoli cells were studied using stereological methods in testes obtained from five children who were stillborn, and 31 individuals between 3 months and 40 years of age, who had suffered from sudden, unexpected death. The mean nuclear volume of the Sertoli cells, the numerical density of Sertoli...... cells, and the total number of Sertoli cells per individual were determined by point- and profile-counting of 0.5 µm sections. The nuclear volume of Sertoli cells increased from a median of 120 µm³ (range 53-130) during the period of 3 months to 10 years to 210 µm³ (170-260) in adults...... (greater than 25 years). The numerical density of Sertoli cells decreased from a median of 1200 × 10⁶/cm³ (870-1400) during childhood (3 months to 10 years) to 140 × 10⁶/cm³ (110-260) in adults (greater than 25 years). The total number of Sertoli cells per individual increased significantly from...

  13. Assessing Therapist Competence: Development of a Performance-Based Measure and Its Comparison With a Web-Based Measure.

    Science.gov (United States)

    Cooper, Zafra; Doll, Helen; Bailey-Straebler, Suzanne; Bohn, Kristin; de Vries, Dian; Murphy, Rebecca; O'Connor, Marianne E; Fairburn, Christopher G

    2017-10-31

    Recent research interest in how best to train therapists to deliver psychological treatments has highlighted the need for rigorous, but scalable, means of measuring therapist competence. There are at least two components involved in assessing therapist competence: the assessment of their knowledge of the treatment concerned, including how and when to use its strategies and procedures, and an evaluation of their ability to apply such knowledge skillfully in practice. While the assessment of therapists' knowledge has the potential to be completed efficiently on the Web, the assessment of skill has generally involved a labor-intensive process carried out by clinicians, and as such, may not be suitable for assessing training outcome in certain circumstances. The aims of this study were to develop and evaluate a role-play-based measure of skill suitable for assessing training outcome and to compare its performance with a highly scalable Web-based measure of applied knowledge. Using enhanced cognitive behavioral therapy (CBT-E) for eating disorders as an exemplar, clinical scenarios for role-play assessment were developed and piloted together with a rating scheme for assessing trainee therapists' performance. These scenarios were evaluated by examining the performance of 93 therapists from different professional backgrounds and at different levels of training in implementing CBT-E. These therapists also completed a previously developed Web-based measure of applied knowledge, and the ability of the Web-based measure to efficiently predict competence on the role-play measure was investigated. The role-play measure assessed performance at implementing a range of CBT-E procedures. The majority of the therapists rated their performance as moderately or closely resembling their usual clinical performance. Trained raters were able to achieve good-to-excellent reliability for averaged competence, with intraclass correlation coefficients ranging from .653 to .909. The measure was

  14. RLV-TD Flight Measured Aeroacoustic Levels and its Comparison with Predictions

    Science.gov (United States)

    Manokaran, K.; Prasath, M.; Venkata Subrahmanyam, B.; Ganesan, V. R.; Ravindran, Archana; Babu, C.

    2017-12-01

    The Reusable Launch Vehicle-Technology Demonstrator (RLV-TD) is a wing body configuration successfully flight tested. One of the important flight measurements is the acoustic levels. There were five external microphones, mounted on the fuselage-forebody, wing, vertical tail, inter-stage (ITS) and core base shroud to measure the acoustic levels from lift-off to splash down. In the ascent phase, core base shroud recorded the overall maximum at both lift-off and transonic conditions. In-flight noise levels measured on the wing is second highest, followed by fuselage and vertical tail. Predictions for flight trajectory compare well at all locations except for vertical tail (4.5 dB). In the descent phase, maximum measured OASPL occurs at transonic condition for the wing, followed by vertical tail and fuselage. Predictions for flight trajectory compare well at all locations except for wing (- 6.0 dB). Spectrum comparison is good in the ascent phase compared to descent phase. Roll Reaction control system (RCS) thruster firing signature is seen in the acoustic measurements on the wing and vertical tail during lift-off.

  15. Predictive Software Measures based on Z Specifications - A Case Study

    Directory of Open Access Journals (Sweden)

    Andreas Bollin

    2012-07-01

    Full Text Available Estimating the effort and quality of a system is a critical step at the beginning of every software project. It is necessary to have reliable ways of calculating these measures, and it is even better when the calculation can be done as early as possible in the development life-cycle. Having this in mind, metrics for formal specifications are examined with a view to correlations with complexity- and quality-based code measures. A case study, based on a Z specification and its implementation in Ada, analyzes the practicability of these metrics as predictors.

  16. Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.

    Science.gov (United States)

    Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O

    2017-08-01

    To investigate whether a more frequent monitoring of the absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management, compared to the limited clinical monitoring typically applied today. Daily ANC in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with reduced amount of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be well forecasted 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk for severe neutropenia and predicting when the next cycle could be initiated.

  17. Reliability of twin-dependent triple junction distributions measured from a section plane

    International Nuclear Information System (INIS)

    Hardy, Graden B.; Field, David P.

    2016-01-01

    Numerous studies indicate that polycrystalline triple junctions are independent microstructural features with properties distinct from those of their constituent grain boundaries. Despite the influence of triple junctions on material properties, it is impractical to characterize triple junctions on a large scale using current three-dimensional methods. This work demonstrates the ability to characterize twin-dependent triple junction distributions from a section plane by adopting a grain boundary plane stereology. The technique is validated through simulated distributions and simulated electron back-scatter diffraction (EBSD) data. Measures of validation and convergence are adopted to demonstrate the quantitative reliability of the technique as well as the convergence behavior of twin-dependent triple junction distributions. This technique expands the characterization power of EBSD and prepares the way for characterizing general triple junction distributions from a section plane. - Graphical abstract: stereographic projections comparing the ideal/measured distribution of planes forming a triple junction with a given twin boundary against the distribution obtained from the stereological method presented here.

  18. Effect of length of measurement period on accuracy of predicted annual heating energy consumption of buildings

    International Nuclear Information System (INIS)

    Cho, Sung-Hwan; Kim, Won-Tae; Tae, Choon-Soeb; Zaheeruddin, M.

    2004-01-01

    This study examined temperature-dependent regression models of energy consumption as a function of the length of the measurement period. The methodology was to construct linear regression models of daily energy consumption from data sets spanning 1 day to 3 months and to compare the annual heating energy consumption predicted by these models with the actual annual heating energy consumption. A commercial building in Daejon was selected, and its energy consumption was measured over a heating season. The results show that predicting annual energy consumption from a regression model built on 1 day of measurements could lead to errors of 100% or more. The prediction error decreased to 30% when 1 week of data was used to build the regression model. Likewise, the regression model based on 3 months of measured data predicted the annual energy consumption within 6% of the measured energy consumption. These analyses show that the length of the measurement period has a significant impact on the accuracy of the predicted annual energy consumption of buildings
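
    The study design can be sketched as follows: fit a daily energy-versus-outdoor-temperature line on the first k days of the season, predict the whole season, and track the error as k grows. The data below are synthetic; the real study used measured consumption from the building.

      # Seasonal prediction error as a function of the length of the measurement period.
      import numpy as np

      rng = np.random.default_rng(7)
      temp = rng.uniform(-10, 15, 120)                         # daily mean outdoor temperature, 120 heating days
      energy = 900 - 25 * temp + rng.normal(0, 60, temp.size)  # daily heating energy (kWh, synthetic)
      actual_total = energy.sum()

      # note: a single day cannot determine a temperature-dependent line, so start at 2 days
      for k in (2, 7, 30, 90):
          slope, intercept = np.polyfit(temp[:k], energy[:k], 1)
          predicted_total = (intercept + slope * temp).sum()
          err = 100 * abs(predicted_total - actual_total) / actual_total
          print(f"{k:>3} days of data -> seasonal error {err:5.1f}%")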

  19. Ground-truthing predicted indoor radon concentrations by using soil-gas radon measurements

    International Nuclear Information System (INIS)

    Reimer, G.M.

    2001-01-01

    Predicting indoor radon potential has gained in importance even as the national radon programs began to wane. A cooperative study to produce radon potential maps was conducted by the Environmental Protection Agency (EPA), U.S. Geological Survey (USGS), Department of Energy (DOE), and Lawrence Berkeley Laboratory (LBL), with the latter taking the lead role. A county-wide predictive model was developed, based primarily on the National Uranium Resource Evaluation (NURE) aerorad data and secondarily on geology, both small-scale databases. However, that model breaks down in counties of complex geology and does not provide a means to evaluate the potential of an individual home or building site. Soil-gas radon measurements on a large scale are shown here to provide information for estimating radon potential at individual sites and to sort out the complex geology so that the small-scale prediction index can be validated. An example from Frederick County, Maryland indicates a positive correlation between indoor measurements and soil-gas data. The method does not rely on a single measurement, but on a series that incorporates seasonal and meteorological considerations. (author)

  20. Prediction of Landing Gear Noise Reduction and Comparison to Measurements

    Science.gov (United States)

    Lopes, Leonard V.

    2010-01-01

    Noise continues to be an ongoing problem for existing aircraft in flight and is projected to be a concern for next generation designs. During landing, when the engines are operating at reduced power, the noise from the airframe, of which landing gear noise is an important part, is equal to the engine noise. There are several methods of predicting landing gear noise, but none have been applied to predict the change in noise due to a change in landing gear design. The current effort uses the Landing Gear Model and Acoustic Prediction (LGMAP) code, developed at The Pennsylvania State University, to predict the noise from landing gear. These predictions include the influence of noise reduction concepts on the landing gear noise. LGMAP is compared to wind tunnel experiments of a 6.3%-scale Boeing 777 main gear performed in the Quiet Flow Facility (QFF) at NASA Langley. The geometries tested in the QFF include the landing gear with and without a toboggan fairing and the door. It is shown that LGMAP is able to predict the noise directivity and spectra from the model-scale test for the baseline configuration as accurately as current gear prediction methods. However, LGMAP is also able to predict the difference in noise caused by the toboggan fairing and by removing the landing gear door. LGMAP is also compared to far-field ground-based flush-mounted microphone measurements from the 2005 Quiet Technology Demonstrator 2 (QTD 2) flight test. These comparisons include a Boeing 777-300ER with and without a toboggan fairing and demonstrate that LGMAP can be applied to full-scale flyover measurements. LGMAP predictions of the noise generated by the nose gear on the main gear measurements are also shown.

  1. Predicting clinical concussion measures at baseline based on motivation and academic profile.

    Science.gov (United States)

    Trinidad, Katrina J; Schmidt, Julianne D; Register-Mihalik, Johna K; Groff, Diane; Goto, Shiho; Guskiewicz, Kevin M

    2013-11-01

    The purpose of this study was to predict baseline neurocognitive and postural control performance using a measure of motivation, high school grade point average (hsGPA), and Scholastic Aptitude Test (SAT) score. Cross-sectional. Clinical research center. Eighty-eight National Collegiate Athletic Association Division I incoming student-athletes (freshman and transfers). Participants completed baseline clinical concussion measures, including a neurocognitive test battery (CNS Vital Signs), a balance assessment [Sensory Organization Test (SOT)], and motivation testing (Rey Dot Counting). Participants granted permission to access hsGPA and SAT total score. Standard scores for each CNS Vital Signs domain and SOT composite score. Baseline motivation, hsGPA, and SAT explained a small percentage of the variance of complex attention (11%), processing speed (12%), and composite SOT score (20%). Motivation, hsGPA, and total SAT score do not explain a significant amount of the variance in neurocognitive and postural control measures but may still be valuable to consider when interpreting neurocognitive and postural control measures.

  2. Model Predictive Control of Wind Turbines using Uncertain LIDAR Measurements

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Soltani, Mohsen; Poulsen, Niels Kjølstad

    2013-01-01

    The problem of Model predictive control (MPC) of wind turbines using uncertain LIDAR (LIght Detection And Ranging) measurements is considered. A nonlinear dynamical model of the wind turbine is obtained. We linearize the obtained nonlinear model for different operating points, which are determined ..., and we simplify state prediction for the MPC. Consequently, the control problem of the nonlinear system is simplified into a quadratic programming. We consider uncertainty in the wind propagation time, which is the traveling time of wind from the LIDAR measurement point to the rotor. An algorithm based on wind speed estimation and measurements from the LIDAR is devised to find an estimate of the delay and compensate for it before it is used in the controller. Comparisons between the MPC with error compensation, the MPC without error compensation and an MPC with re-linearization at each sample point ...
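
    A minimal sketch of the delay-compensation idea mentioned above (not the paper's full MPC formulation): estimate the wind propagation time from the LIDAR focus distance and an estimated wind speed, then shift the preview signal by that delay. The preview distance, sample time and signal below are assumptions.

      # Sketch of LIDAR preview delay compensation under a frozen-turbulence
      # assumption; distances, sample time and the wind estimate are illustrative.
      import numpy as np

      def compensate_lidar_preview(lidar_wind, v_est, preview_distance=100.0, dt=0.1):
          """Shift the LIDAR wind preview by the estimated propagation delay.

          lidar_wind       : array of wind speeds measured at the LIDAR focus point
          v_est            : current rotor-effective wind speed estimate (m/s)
          preview_distance : assumed distance from focus point to rotor (m)
          dt               : controller sample time (s)
          """
          delay_s = preview_distance / max(v_est, 1e-3)   # travel time of the measured wind
          delay_steps = int(round(delay_s / dt))
          # The sample arriving at the rotor "now" was measured delay_steps ago;
          # the wrap-around of np.roll is an artifact of this toy sketch.
          return np.roll(lidar_wind, delay_steps), delay_s

      preview = 10.0 + 0.5 * np.sin(np.linspace(0, 6, 200))
      aligned, delay = compensate_lidar_preview(preview, v_est=10.0)
      print(f"estimated propagation delay: {delay:.1f} s")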

  3. Analysis of energy-based algorithms for RNA secondary structure prediction

    Directory of Open Access Journals (Sweden)

    Hajiaghayi Monir

    2012-02-01

    Full Text Available Abstract Background RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. Results We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range) with high confidence of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms ...
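
    The benchmark arithmetic described above (sensitivity, positive predictive value, their harmonic mean, and a bootstrap percentile interval for the average F-measure) can be sketched as follows; the per-RNA scores are synthetic placeholders, not results from the paper.

      # Sketch of F-measure scoring of predicted vs. reference base pairs and a
      # bootstrap percentile interval for the mean F-measure; data are synthetic.
      import numpy as np

      def f_measure(predicted_pairs, reference_pairs):
          predicted, reference = set(predicted_pairs), set(reference_pairs)
          tp = len(predicted & reference)
          sensitivity = tp / len(reference) if reference else 0.0
          ppv = tp / len(predicted) if predicted else 0.0
          if sensitivity + ppv == 0:
              return 0.0
          return 2 * sensitivity * ppv / (sensitivity + ppv)   # harmonic mean

      def bootstrap_ci(per_rna_f, n_boot=2000, alpha=0.05, seed=1):
          rng = np.random.default_rng(seed)
          means = [rng.choice(per_rna_f, size=len(per_rna_f), replace=True).mean()
                   for _ in range(n_boot)]
          return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

      scores = np.random.default_rng(0).beta(6, 3, size=2000)   # hypothetical per-RNA F-measures
      print("95% bootstrap percentile CI of mean F:", bootstrap_ci(scores))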

  4. Prediction-based Dynamic Energy Management in Wireless Sensor Networks

    Science.gov (United States)

    Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei

    2007-01-01

    Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict a target state, which was adopted to awaken wireless sensor nodes so that their sleep time was prolonged. With the distributed computing capability of nodes, an optimization approach of distributed genetic algorithm and simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme of forwarding nodes was presented to achieve extra energy conservation. Experimental results of target tracking verified that energy-efficiency is enhanced by prediction-based dynamic energy management.

  5. Prediction-based Dynamic Energy Management in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dao-Wei Bi

    2007-03-01

    Full Text Available Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict a target state, which was adopted to awaken wireless sensor nodes so that their sleep time was prolonged. With the distributed computing capability of nodes, an optimization approach of distributed genetic algorithm and simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme of forwarding nodes was presented to achieve extra energy conservation. Experimental results of target tracking verified that energy-efficiency is enhanced by prediction-based dynamic energy management.
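
    A minimal bootstrap particle filter for predicting a target's next position is sketched below to illustrate the prediction step that drives node wake-up scheduling; the constant-velocity motion model, noise levels and measurements are assumptions rather than the paper's settings.

      # Toy bootstrap particle filter for target position prediction
      # (constant-velocity model, noisy position measurements); all values
      # below are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 500                                          # number of particles
      particles = rng.normal(0.0, 1.0, size=(N, 4))    # state: [x, y, vx, vy]
      weights = np.full(N, 1.0 / N)
      dt, q, r = 1.0, 0.1, 0.5                         # step, process noise, measurement noise

      def predict(particles):
          particles[:, 0] += particles[:, 2] * dt + rng.normal(0, q, N)
          particles[:, 1] += particles[:, 3] * dt + rng.normal(0, q, N)
          particles[:, 2:] += rng.normal(0, q, size=(N, 2))
          return particles

      def update(particles, weights, z):
          d2 = (particles[:, 0] - z[0]) ** 2 + (particles[:, 1] - z[1]) ** 2
          w = weights * np.exp(-0.5 * d2 / r ** 2)     # Gaussian measurement likelihood
          w /= w.sum()
          idx = rng.choice(N, size=N, p=w)             # multinomial resampling
          return particles[idx], np.full(N, 1.0 / N)

      for z in [(0.9, 1.1), (2.1, 1.9), (3.0, 3.2)]:   # hypothetical measurements
          particles = predict(particles)
          particles, weights = update(particles, weights, np.array(z))

      predicted_next = predict(particles.copy()).mean(axis=0)[:2]
      print("predicted next target position:", predicted_next)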

  6. Mast cells and atopic dermatitis. Stereological quantification of mast cells in atopic dermatitis and normal human skin

    DEFF Research Database (Denmark)

    Damsgaard, T E; Olesen, A B; Sørensen, Flemming Brandt

    1997-01-01

    Stereological quantification of mast cell numbers was applied to sections of punch biopsies from lesional and nonlesional skin of atopic dermatitis patients and skin of healthy volunteers. We also investigated whether the method of staining and/or the fixative influenced the results of the determination of the mast cell profile numbers. The punch biopsies were taken from the same four locations in both atopic dermatitis patients and normal individuals. The locations were the scalp, neck and flexure of the elbow (lesional skin), and nates (nonlesional skin). Clinical scoring was carried out at the site of each biopsy. ... The study yielded the following results: (1) in atopic dermatitis lesional skin an increased number of mast cell profiles was found as compared with nonlesional skin, (2) comparing atopic dermatitis skin with normal skin, a significantly increased number of mast cell profiles per millimetre squared was found ...

  7. Mast cells and atopic dermatitis. Stereological quantification of mast cells in atopic dermatitis and normal human skin

    DEFF Research Database (Denmark)

    Damsgaard, T E; Olesen, A B; Sørensen, Flemming Brandt

    1997-01-01

    Stereological quantification of mast cell numbers was applied to sections of punch biopsies from lesional and nonlesional skin of atopic dermatitis patients and skin of healthy volunteers. We also investigated whether the method of staining and/or the fixative influenced the results of the determination of the mast cell profile numbers. The punch biopsies were taken from the same four locations in both atopic dermatitis patients and normal individuals. The locations were the scalp, neck and flexure of the elbow (lesional skin), and nates (nonlesional skin). Clinical scoring was carried out at the site of each biopsy. After fixation and plastic embedding, the biopsies were cut into 2 microns serial sections. Ten sections, 30 microns apart, from each biopsy were examined and stained alternately with either toluidine blue or Giemsa stain and mast cell profile numbers were determined. The study ...

  8. Gas Concentration Prediction Based on the Measured Data of a Coal Mine Rescue Robot

    Directory of Open Access Journals (Sweden)

    Xiliang Ma

    2016-01-01

    Full Text Available The coal mine environment is complex and dangerous after a gas accident, so timely and effective rescue and relief work is necessary. Predicting the gas concentration in front of a coal mine rescue robot is therefore of great significance for ensuring that the robot can carry out exploration and search-and-rescue missions. In this paper, a gray neural network is proposed to predict the gas concentration 10 meters in front of the coal mine rescue robot based on the gas concentration, temperature, and wind speed at the current position and 1 meter in front. Subsequently, a method is proposed in which a quantum genetic algorithm optimizes the gray neural network parameters to obtain a more accurate prediction of the gas concentration in the roadway. Experimental results show that the gray neural network optimized by the quantum genetic algorithm is more accurate for predicting the gas concentration. The overall prediction error is 9.12%, and the largest forecasting error is 11.36%, an improvement of 55.23% over the plain gray neural network. This means that the proposed method can better enable the coal mine rescue robot to accurately predict the gas concentration in the coal mine roadway.
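
    The grey-model component of such a predictor can be sketched with the classical GM(1,1) recursion; the neural-network coupling and the quantum genetic algorithm optimization from the paper are not reproduced here, and the input readings are hypothetical.

      # Sketch of the classical GM(1,1) grey forecasting model; the readings
      # below are hypothetical gas-concentration values.
      import numpy as np

      def gm11_forecast(x0, steps=1):
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                                   # accumulated series (1-AGO)
          z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
          B = np.column_stack([-z1, np.ones_like(z1)])
          Y = x0[1:]
          a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # developing and grey input coefficients
          k = np.arange(len(x0) + steps)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # accumulated-series prediction
          x0_hat = np.diff(x1_hat, prepend=x1_hat[0])          # back to the original series
          x0_hat[0] = x0[0]
          return x0_hat[-steps:]

      readings = [0.41, 0.44, 0.48, 0.53, 0.57]                # hypothetical concentration series
      print("GM(1,1) one-step-ahead forecast:", gm11_forecast(readings, steps=1))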

  9. Predicting online ratings based on the opinion spreading process

    Science.gov (United States)

    He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo

    2015-10-01

    Predicting users' online ratings is always a challenging issue and has drawn much attention. In this paper, we present a rating prediction method by combining the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method could produce a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both opinion sender and receiver. The numerical results for the Movielens and Netflix data sets show that this algorithm has better accuracy than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method could further boost the prediction accuracy when using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as measurements. In the optimal cases, on the Movielens and Netflix data sets, the corresponding algorithmic accuracies (MAE and RMSE) are improved by 11.26% and 8.84%, and by 13.49% and 10.52%, respectively, compared to the item average method.
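
    The error measures quoted above, together with a generic similarity-weighted rating prediction (plain user-based collaborative filtering, not the opinion-spreading similarity proposed in the paper), can be sketched as follows on hypothetical ratings.

      # Sketch of MAE/RMSE and of a generic similarity-weighted rating
      # prediction; all ratings and similarities are hypothetical.
      import numpy as np

      def predict_rating(similarities, neighbour_ratings):
          """Weighted average of neighbours' ratings for one user-item pair."""
          s = np.asarray(similarities, dtype=float)
          r = np.asarray(neighbour_ratings, dtype=float)
          return np.dot(s, r) / np.abs(s).sum()

      def mae(true, pred):
          return np.mean(np.abs(np.asarray(true) - np.asarray(pred)))

      def rmse(true, pred):
          return np.sqrt(np.mean((np.asarray(true) - np.asarray(pred)) ** 2))

      true_ratings = [4, 3, 5, 2]
      predicted = [predict_rating([0.9, 0.4], [4, 5]),
                   predict_rating([0.7, 0.6], [3, 2]),
                   predict_rating([0.8, 0.3], [5, 4]),
                   predict_rating([0.5, 0.5], [2, 3])]
      print(f"MAE = {mae(true_ratings, predicted):.3f}, RMSE = {rmse(true_ratings, predicted):.3f}")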

  10. Stereological comparison of oocyte recruitment and batch fecundity estimates from paraffin and resin sections using spawning albacore (Thunnus alalunga) ovaries as a case study

    Science.gov (United States)

    Saber, Sámar; Macías, David; Ortiz de Urbina, Josetxu; Kjesbu, Olav Sigurd

    2015-01-01

    Traditional histological protocols in marine fish reproductive laboratories using paraffin as the embedding medium are now increasingly being replaced with protocols using resin instead. These procedures entail different degrees of tissue shrinkage, complicating direct comparisons of measurement results across laboratories or articles. In this work we selected ovaries of spawning Mediterranean albacore (Thunnus alalunga) as the subject of our study to address the issue of structural changes, by contrasting values on oocyte recruitment and final batch fecundity given from the same tissue samples in both paraffin and resin. A modern stereological method, the oocyte packing density (OPD) theory, was used, supported by initial studies on ovarian tissue sampling and measurement design. Examples of differences in the volume fraction of oocyte stages, free space and connective tissue were found between the embedding media. Mean oocyte diameters were smaller in paraffin than in resin, with differences ranging between 0.5% in primary growth and 24.3% in hydration (HYD) stage oocytes. Fresh oocyte measurements showed that oocytes shrank as a consequence of the embedding process, reaching the maximal degree of shrinkage for oocytes in the HYD stage (45.8% in paraffin and 26.5% in resin). In order to assess the effect of oocyte shrinkage on the OPD result, and thereby on relative batch fecundity (Fr), oocyte diameters corrected and uncorrected for shrinkage were used for estimations. Statistically significant differences were found (P ...) between these estimates, whether based on oocytes in the germinal vesicle migration stage or the HYD stage. As a valuable adjunct, the present use of the OPD theory made it possible to document that the oocyte recruitment of spawning ovaries of Mediterranean albacore followed the typical pattern of asynchronous oocyte development and indeterminate fecundity.
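
    The packing-density idea can be illustrated with a simplified calculation: the number of oocytes of a given stage per unit ovary volume approximated as the stage's volume fraction divided by the volume of a single (spherical) oocyte, with the diameter corrected for shrinkage. The figures and the simple sphere model below are assumptions for illustration, not the study's exact estimator or values.

      # Illustrative packing-density arithmetic under a simple sphere model;
      # all numbers are assumptions, not values from the study.
      import math

      volume_fraction = 0.12          # fraction of ovary volume occupied by the stage
      measured_diam_um = 450.0        # mean oocyte diameter measured in sections (um)
      linear_shrinkage = 0.10         # assumed 10 % linear shrinkage in the embedding medium

      true_diam_um = measured_diam_um / (1.0 - linear_shrinkage)
      oocyte_volume_mm3 = (math.pi / 6.0) * (true_diam_um / 1000.0) ** 3
      opd_per_mm3 = volume_fraction / oocyte_volume_mm3        # oocytes per mm^3 of ovary
      print(f"corrected diameter: {true_diam_um:.0f} um, "
            f"packing density: {opd_per_mm3:.1f} oocytes/mm^3")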

  11. Computational prediction of drug-drug interactions based on drugs functional similarities.

    Science.gov (United States)

    Ferdousi, Reza; Safdari, Reza; Omidi, Yadollah

    2017-06-01

    Therapeutic activities of drugs are often influenced by co-administration of drugs that may cause inevitable drug-drug interactions (DDIs) and inadvertent side effects. Prediction and identification of DDIs are extremely vital for the patient safety and success of treatment modalities. A number of computational methods have been employed for the prediction of DDIs based on drugs structures and/or functions. Here, we report on a computational method for DDIs prediction based on functional similarity of drugs. The model was set based on key biological elements including carriers, transporters, enzymes and targets (CTET). The model was applied for 2189 approved drugs. For each drug, all the associated CTETs were collected, and the corresponding binary vectors were constructed to determine the DDIs. Various similarity measures were conducted to detect DDIs. Of the examined similarity methods, the inner product-based similarity measures (IPSMs) were found to provide improved prediction values. Altogether, 2,394,766 potential drug pairs interactions were studied. The model was able to predict over 250,000 unknown potential DDIs. Upon our findings, we propose the current method as a robust, yet simple and fast, universal in silico approach for identification of DDIs. We envision that this proposed method can be used as a practical technique for the detection of possible DDIs based on the functional similarities of drugs. Copyright © 2017. Published by Elsevier Inc.
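
    One inner-product-style similarity on binary feature vectors of the kind described (here the Tanimoto form, a common choice) can be sketched as follows; the feature names and vectors are hypothetical, and the paper's full prediction pipeline is not reproduced.

      # Sketch of a Tanimoto (inner-product-based) similarity between two drugs
      # represented as binary vectors over hypothetical CTET-like features.
      import numpy as np

      features = ["CYP3A4", "P-gp", "OATP1B1", "Albumin", "hERG"]   # hypothetical elements
      drug_a = np.array([1, 1, 0, 1, 0])     # which elements drug A is associated with
      drug_b = np.array([1, 0, 0, 1, 1])

      def tanimoto(u, v):
          inner = float(np.dot(u, v))
          return inner / (np.dot(u, u) + np.dot(v, v) - inner)

      print(f"Tanimoto similarity(A, B) = {tanimoto(drug_a, drug_b):.2f}")
      # In this kind of approach, a high similarity between a candidate pair and
      # pairs with known interactions flags the candidate as a potential DDI.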

  12. Method and timing of tumor volume measurement for outcome prediction in cervical cancer using magnetic resonance imaging

    International Nuclear Information System (INIS)

    Mayr, Nina A.; Taoka, Toshiaki; Yuh, William T.C.; Denning, Leah M.; Zhen, Weining K.; Paulino, Arnold C.; Gaston, Robert C.; Sorosky, Joel I.; Meeks, Sanford L.; Walker, Joan L.; Mannel, Robert S.; Buatti, John M.

    2002-01-01

    Purpose: Recently, imaging-based tumor volume before, during, and after radiation therapy (RT) has been shown to predict tumor response in cervical cancer. However, the effectiveness of different methods and timing of imaging-based tumor size assessment have not been investigated. The purpose of this study was to compare the predictive value for treatment outcome derived from simple diameter-based ellipsoid tumor volume measurement using orthogonal diameters (with ellipsoid computation) with that derived from more complex contour tracing/region-of-interest (ROI) analysis 3D tumor volumetry. Methods and Materials: Serial magnetic resonance imaging (MRI) examinations were prospectively performed in 60 patients with advanced cervical cancer (Stages IB2-IVB/recurrent) at the start of RT, during early RT (20-25 Gy), mid-RT (45-50 Gy), and at follow-up (1-2 months after RT completion). ROI-based volumetry was derived by tracing the entire tumor region in each MR slice on the computer workstation. For the diameter-based surrogate "ellipsoid volume," the three orthogonal diameters (d1, d2, d3) were measured on film hard copies to calculate volume as an ellipsoid (d1 x d2 x d3 x π/6). Serial tumor volumes and regression rates determined by each method were correlated with local control, disease-free and overall survival, and the results were compared between the two measuring methods. Median post-therapy follow-up was 4.9 years (range, 2.0-8.2 years). Results: The best method and time point of tumor size measurement for the prediction of outcome was the tumor regression rate in the mid-therapy MRI examination (at 45-50 Gy) using 3D ROI volumetry. For the pre-RT measurement, both the diameter-based method and ROI volumetry provided similar predictive accuracy, particularly for patients with small (<40 cm³) and large (≥100 cm³) pre-RT tumor size. However, the pre-RT tumor size measured by either method had much less predictive value for the intermediate-size (40-100 cm³) group ...
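
    The diameter-based surrogate volume and the regression rate referred to above amount to simple arithmetic, sketched below with hypothetical diameters.

      # Worked example of the ellipsoid surrogate volume and a simple regression
      # rate between two imaging time points; the diameters are hypothetical.
      import math

      def ellipsoid_volume_cm3(d1_cm, d2_cm, d3_cm):
          return d1_cm * d2_cm * d3_cm * math.pi / 6.0

      v_pre = ellipsoid_volume_cm3(6.0, 5.0, 4.5)      # at start of RT
      v_mid = ellipsoid_volume_cm3(3.5, 3.0, 2.8)      # at 45-50 Gy
      regression_rate = (v_pre - v_mid) / v_pre        # fractional volume reduction
      print(f"pre-RT {v_pre:.1f} cm^3, mid-RT {v_mid:.1f} cm^3, "
            f"regression {100 * regression_rate:.0f} %")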

  13. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
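
    For orientation, the following toy example shows where the prediction error variance (PEV) of random effects comes from in standard mixed-model theory: the random-effect block of the inverted mixed-model-equation coefficient matrix, scaled by the residual variance. The design, variance components and identity relationship matrix are assumptions, and the paper's correction to the fixed-effect covariance matrix is not reproduced here.

      # Toy mixed-model equations: y = Xb + Zu + e with 6 records,
      # 2 contemporary groups (fixed) and 3 animals (random, assumed unrelated).
      import numpy as np

      X = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], dtype=float)
      Z = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                    [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
      sigma2_e, sigma2_u = 1.0, 0.5                   # assumed variance components
      lam = sigma2_e / sigma2_u
      A_inv = np.eye(3)                               # identity relationship matrix

      C = np.block([[X.T @ X,            X.T @ Z],
                    [Z.T @ X, Z.T @ Z + lam * A_inv]])
      C_inv = np.linalg.inv(C)
      pev_u = C_inv[2:, 2:] * sigma2_e                # PEV-covariance of the random effects
      print("PEV matrix of the random effects:\n", np.round(pev_u, 3))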

  14. Comparison of sorption measurements on argillaceous rocks and bentonite with predictions using the SGT-E2 approach to derive sorption data bases

    Energy Technology Data Exchange (ETDEWEB)

    Bradbury, M. H.; Baeyens, B; Marques Fernandes, M.

    2014-11-15

    In Stage 1 of the Sectoral Plan for Deep Geological Repositories, four rock types have been identified as being suitable host rocks for a radioactive waste repository, namely, Opalinus Clay for a high-level (HLW) and a low- and intermediate-level (L/ILW) repository, and 'Brauner Dogger', Effingen Member and Helvetic Marls for a L/ILW repository. Sorption data bases (SDBs) for all of these host rocks are required for the provisional safety analyses, including all of the bounding porewater and mineralogical composition combinations. In addition, SDBs are needed for the rock formations lying below Opalinus Clay (lower confining units) and for the bentonite backfill in the HLW repository. A detailed procedure was developed for deriving SDBs for argillaceous rocks (and bentonite) based on sorption edge measurements on illite (and montmorillonite), the hypothesis that 2:1 clay minerals are the dominant sorbents and a series of so called conversion factors which take into account the different radionuclide speciations in the different porewaters. Since this methodology for generating SDBs is relatively new, a validation and demonstration of the robustness and reliability of the sorption values derived was required. This report describes an extensive piece of work in which blind predictions of sorption values were compared with measured ones. Sorption isotherms were measured for the following metal ions Cs(I), Co(II), Ni(II), Eu(III), Th(IV) and U(VI) in a range of realistic porewater chemistries for a range of host rock mineralogies. In the end 53 isotherm data sets were measured. For each of these isotherms a prediction was made of the sorption at trace concentrations using the SDB derivation methodology. A comparison between measured and predicted values for each case was then made. This validation study shows that the methodology used for the derivation of the sorption data bases for argillaceous rocks and bentonite produces reliable sorption values. (authors)

  15. Advancing viral RNA structure prediction: measuring the thermodynamics of pyrimidine-rich internal loops.

    Science.gov (United States)

    Phan, Andy; Mailey, Katherine; Saeki, Jessica; Gu, Xiaobo; Schroeder, Susan J

    2017-05-01

    Accurate thermodynamic parameters improve RNA structure predictions and thus accelerate understanding of RNA function and the identification of RNA drug binding sites. Many viral RNA structures, such as internal ribosome entry sites, have internal loops and bulges that are potential drug target sites. Current models used to predict internal loops are biased toward small, symmetric purine loops, and thus poorly predict asymmetric, pyrimidine-rich loops with >6 nucleotides (nt) that occur frequently in viral RNA. This article presents new thermodynamic data for 40 pyrimidine loops, many of which can form UU or protonated CC base pairs. Uracil and protonated cytosine base pairs stabilize asymmetric internal loops. Accurate prediction rules are presented that account for all thermodynamic measurements of RNA asymmetric internal loops. New loop initiation terms for loops with >6 nt are presented that do not follow previous assumptions that increasing asymmetry destabilizes loops. Since the last 2004 update, 126 new loops with asymmetry or sizes greater than 2 × 2 have been measured. These new measurements significantly deepen and diversify the thermodynamic database for RNA. These results will help better predict internal loops that are larger, pyrimidine-rich, and occur within viral structures such as internal ribosome entry sites. © 2017 Phan et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  16. Comparative studies of the ITU-T prediction model for radiofrequency radiation emission and real time measurements at some selected mobile base transceiver stations in Accra, Ghana

    International Nuclear Information System (INIS)

    Obeng, S. O

    2014-07-01

    Recent developments in the electronics industry have led to the widespread use of radiofrequency (RF) devices in various areas including telecommunications. The increasing number of mobile base stations (BTS) as well as their proximity to residential areas has been accompanied by public health concerns due to the radiation exposure. The main objective of this research was to compare and modify the ITU-T predictive model for radiofrequency radiation emission for BTS with measured data at some selected cell sites in Accra, Ghana. Theoretical and experimental assessments of radiofrequency exposure due to mobile base station antennas have been carried out. The maximum and minimum average power densities measured from an individual base station in the town were 1.86 µW/m² and 0.00961 µW/m², respectively. The ITU-T predictive model power density ranged between 6.40 mW/m² and 0.344 W/m². Results obtained showed a variation between measured power density levels and the ITU-T predictive model. The ITU-T model power density levels decrease with increasing radial distance, while real-time measurements do not, owing to fluctuations during measurement. The ITU-T model overestimated the power density levels by a factor of 10⁵ compared to real-time measurements. The ITU-T model was modified to reduce the level of overestimation. The results showed that radiation intensity varies from one base station to another, even at the same distance. The occupational exposure quotient ranged between 5.43E-10 and 1.89E-08, whilst the general public exposure quotient ranged between 2.72E-09 and 9.44E-08. These results show that the RF exposure levels in Accra from these mobile phone base station antennas are below the RF exposure limit for the general public recommended by the International Commission on Non-Ionizing Radiation Protection. (au)

  17. Prediction-based dynamic load-sharing heuristics

    Science.gov (United States)

    Goswami, Kumar K.; Devarakonda, Murthy; Iyer, Ravishankar K.

    1993-01-01

    The authors present dynamic load-sharing heuristics that use predicted resource requirements of processes to manage workloads in a distributed system. A previously developed statistical pattern-recognition method is employed for resource prediction. While nonprediction-based heuristics depend on a rapidly changing system status, the new heuristics depend on slowly changing program resource usage patterns. Furthermore, prediction-based heuristics can be more effective since they use future requirements rather than just the current system state. Four prediction-based heuristics, two centralized and two distributed, are presented. Using trace driven simulations, they are compared against random scheduling and two effective nonprediction based heuristics. Results show that the prediction-based centralized heuristics achieve up to 30 percent better response times than the nonprediction centralized heuristic, and that the prediction-based distributed heuristics achieve up to 50 percent improvements relative to their nonprediction counterpart.

  18. Using different assumptions of aerosol mixing state and chemical composition to predict CCN concentrations based on field measurements in urban Beijing

    Science.gov (United States)

    Ren, Jingye; Zhang, Fang; Wang, Yuying; Collins, Don; Fan, Xinxin; Jin, Xiaoai; Xu, Weiqi; Sun, Yele; Cribb, Maureen; Li, Zhanqing

    2018-05-01

    Understanding the impacts of aerosol chemical composition and mixing state on cloud condensation nuclei (CCN) activity in polluted areas is crucial for accurately predicting CCN number concentrations (NCCN). In this study, we predict NCCN under five assumed schemes of aerosol chemical composition and mixing state based on field measurements in Beijing during the winter of 2016. Our results show that the best closure is achieved with the assumption of size-dependent chemical composition for which sulfate, nitrate, secondary organic aerosols, and aged black carbon are internally mixed with each other but externally mixed with primary organic aerosol and fresh black carbon (external-internal size-resolved, abbreviated as EI-SR scheme). The resulting ratios of predicted-to-measured NCCN (RCCN_p/m) were 0.90-0.98 under both clean and polluted conditions. Assumption of an internal mixture and bulk chemical composition (INT-BK scheme) shows good closure with RCCN_p/m of 1.0-1.16 under clean conditions, implying that it is adequate for CCN prediction in continental clean regions. On polluted days, assuming the aerosol is internally mixed and has a chemical composition that is size dependent (INT-SR scheme) achieves better closure than the INT-BK scheme due to the heterogeneity and variation in particle composition at different sizes. The improved closure achieved using the EI-SR and INT-SR assumptions highlights the importance of measuring size-resolved chemical composition for CCN predictions in polluted regions. NCCN is significantly underestimated (with RCCN_p/m of 0.66-0.75) when using the schemes of external mixtures with bulk (EXT-BK scheme) or size-resolved composition (EXT-SR scheme), implying that primary particles experience rapid aging and physical mixing processes in urban Beijing. However, our results show that the aerosol mixing state plays a minor role in CCN prediction when κorg exceeds 0.1.

  19. Using Implicit and Explicit Measures to Predict Nonsuicidal Self-Injury Among Adolescent Inpatients.

    Science.gov (United States)

    Cha, Christine B; Augenstein, Tara M; Frost, Katherine H; Gallagher, Katie; D'Angelo, Eugene J; Nock, Matthew K

    2016-01-01

    To examine the use of implicit and explicit measures to predict adolescent nonsuicidal self-injury (NSSI) before, during, and after inpatient hospitalization. Participants were 123 adolescent psychiatric inpatients who completed measures at hospital admission and discharge. The implicit measure (Self-Injury Implicit Association Test [SI-IAT]) and one of the explicit measures pertained to the NSSI method of cutting. Patients were interviewed at multiple time points at which they reported whether they had engaged in NSSI before their hospital stay, during their hospital stay, and within 3 months after discharge. At baseline, SI-IAT scores differentiated past-year self-injurers and noninjurers (t121 = 4.02, p < .001, d = 0.73). These SI-IAT effects were stronger among patients who engaged in cutting (versus noncutting NSSI methods). Controlling for NSSI history and prospective risk factors, SI-IAT scores predicted patients' subsequent cutting behavior during their hospital stay (odds ratio (OR) = 8.19, CI = 1.56-42.98, p < .05). Patients' explicit self-report uniquely predicted hospital-based and postdischarge cutting, even after controlling for SI-IAT scores (ORs = 1.82-2.34, CIs = 1.25-3.87, p values <.01). Exploratory analyses revealed that in specific cases in which patients explicitly reported low likelihood of NSSI, SI-IAT scores still predicted hospital-based cutting. The SI-IAT is an implicit measure that is outcome-specific, a short-term predictor above and beyond NSSI history, and potentially helpful in cases in which patients at risk for NSSI explicitly report that they would not do so in the future. Ultimately, both implicit and explicit measures can help to predict future incidents of cutting among adolescent inpatients. Copyright © 2016 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  20. Latency-Based and Psychophysiological Measures of Sexual Interest Show Convergent and Concurrent Validity.

    Science.gov (United States)

    Ó Ciardha, Caoilte; Attard-Johnson, Janice; Bindemann, Markus

    2018-04-01

    Latency-based measures of sexual interest require additional evidence of validity, as do newer pupil dilation approaches. A total of 102 community men completed six latency-based measures of sexual interest. Pupillary responses were recorded during three of these tasks and in an additional task where no participant response was required. For adult stimuli, there was a high degree of intercorrelation between measures, suggesting that tasks may be measuring the same underlying construct (convergent validity). In addition to being correlated with one another, measures also predicted participants' self-reported sexual interest, demonstrating concurrent validity (i.e., the ability of a task to predict a more validated, simultaneously recorded, measure). Latency-based and pupillometric approaches also showed preliminary evidence of concurrent validity in predicting both self-reported interest in child molestation and viewing pornographic material containing children. Taken together, the study findings build on the evidence base for the validity of latency-based and pupillometric measures of sexual interest.

  1. Safety prediction for basic components of safety-critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2000-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  2. Both Reaction Time and Accuracy Measures of Intraindividual Variability Predict Cognitive Performance in Alzheimer's Disease

    Directory of Open Access Journals (Sweden)

    Björn U. Christ

    2018-04-01

    Full Text Available Dementia researchers around the world prioritize the urgent need for sensitive measurement tools that can detect cognitive and functional change at the earliest stages of Alzheimer's disease (AD). Sensitive indicators of underlying neural pathology assist in the early detection of cognitive change and are thus important for the evaluation of early-intervention clinical trials. One method that may be particularly well-suited to help achieve this goal involves the quantification of intraindividual variability (IIV) in cognitive performance. The current study aimed to directly compare two methods of estimating IIV (fluctuations in accuracy-based scores vs. those in latency-based scores) to predict cognitive performance in AD. Specifically, we directly compared the relative sensitivity of reaction time (RT)- and accuracy-based estimates of IIV to cognitive compromise. The novelty of the present study, however, centered on the patients we tested [a group of patients with Alzheimer's disease (AD)] and the outcome measures we used (a measure of general cognitive function and a measure of episodic memory function). Hence, we compared intraindividual standard deviations (iSDs) from two RT tasks and three accuracy-based memory tasks in patients with possible or probable Alzheimer's dementia (n = 23) and matched healthy controls (n = 25). The main analyses modeled the relative contributions of RT- vs. accuracy-based measures of IIV toward the prediction of performance on measures of (a) overall cognitive functioning, and (b) episodic memory functioning. Results indicated that RT-based IIV measures are superior predictors of neurocognitive impairment (as indexed by overall cognitive and memory performance) than accuracy-based IIV measures, even after adjusting for the timescale of measurement. However, one accuracy-based IIV measure (derived from a recognition memory test) also differentiated patients with AD from controls, and significantly predicted episodic memory ...
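
    The basic quantity at issue, the intraindividual standard deviation (iSD) of trial-level scores, can be sketched as follows on hypothetical reaction-time data; the residualizing or z-scoring refinements often applied in practice are omitted.

      # Sketch of intraindividual variability measures on hypothetical trial-level
      # reaction times (rows = participants, columns = trials).
      import numpy as np

      rng = np.random.default_rng(0)
      rt_trials = rng.normal(loc=650, scale=80, size=(5, 40))   # hypothetical RTs in ms
      rt_trials[2] += rng.normal(0, 120, size=40)               # one more variable participant

      iSD = rt_trials.std(axis=1, ddof=1)                # intraindividual SD per person
      iCV = iSD / rt_trials.mean(axis=1)                 # coefficient-of-variation variant
      for i, (sd, cv) in enumerate(zip(iSD, iCV)):
          print(f"participant {i}: iSD = {sd:5.1f} ms, iCV = {cv:.3f}")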

  3. Brain Volume Estimation Enhancement by Morphological Image Processing Tools

    Directory of Open Access Journals (Sweden)

    Zeinali R.

    2017-12-01

    Full Text Available Background: Volume estimation of the brain is important for many neurological applications; it is necessary for measuring brain growth and changes in the brains of normal and abnormal patients. Thus, accurate brain volume measurement is very important. Magnetic resonance imaging (MRI) is the method of choice for volume quantification due to excellent levels of image resolution and between-tissue contrast. The stereology method is a good method for estimating volume, but it requires segmenting enough MRI slices and having a good resolution. In this study, we aim to enhance the stereology method for estimating brain volume using fewer MRI slices with lower resolution. Methods: A program for calculating volume using the stereology method is introduced. A morphological operation, dilation, was applied to enhance the stereology method. For the evaluation of this method, we used T1-weighted MR images from the digital phantom in BrainWeb, which has a ground truth. Results: The volumes of 20 normal brains extracted from BrainWeb were calculated. The volumes of white matter, gray matter and cerebrospinal fluid with given dimensions were estimated correctly. Volume calculations with the stereology method were made in different cases, and the Root Mean Square Error (RMSE) was measured in three cases: Case I with T=5, d=5; Case II with T=10, d=10; and Case III with T=20, d=20 (T = slice thickness, d = resolution, as stereology parameters). Comparing the results of the two methods, the RMSE values for the proposed method are smaller than those of the plain stereology method. Conclusion: Using the morphological operation dilation allows the stereological volume estimation method to be enhanced. In cases with fewer MRI slices and fewer test points, this method works much better than the standard stereology method.
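
    The two ingredients named above can be sketched on a synthetic binary mask: a Cavalieri/point-counting volume estimate (volume = slice thickness x area per test point x total point count) and a morphological dilation of the mask before counting. The grid spacing, slice thickness and toy mask are assumptions, not the paper's settings.

      # Cavalieri point counting on a synthetic binary "brain" mask, with an
      # optional morphological dilation before counting; all parameters are
      # illustrative assumptions (1 voxel = 1 mm in plane).
      import numpy as np
      from scipy.ndimage import binary_dilation

      mask = np.zeros((10, 128, 128), dtype=bool)         # 10 slices of a toy object
      mask[:, 32:96, 40:100] = True

      slice_thickness_mm = 5.0                             # T in the study's notation
      grid_spacing_mm = 5.0                                # d: distance between test points
      area_per_point_mm2 = grid_spacing_mm ** 2

      def cavalieri_volume(volume_mask):
          # sample a regular point grid on every slice and count hits inside the mask
          step = int(grid_spacing_mm)
          hits = volume_mask[:, ::step, ::step].sum()
          return hits * area_per_point_mm2 * slice_thickness_mm

      v_plain = cavalieri_volume(mask)
      v_dilated = cavalieri_volume(binary_dilation(mask, iterations=1))
      print(f"point-counting volume: {v_plain:.0f} mm^3, after dilation: {v_dilated:.0f} mm^3")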

  4. Safety prediction for basic components of safety critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2001-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, both of which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  5. Measurement and prediction of sensitization development in austenitic stainless steels

    International Nuclear Information System (INIS)

    Bruemmer, S.M.; Charlot, L.A.; Atteridge, D.G.

    1985-10-01

    The effects of thermal and thermomechanical treatments on sensitization development in Type 304 and 316 stainless steels have been measured and compared to model predictions. Sensitization development resulting from isothermal, continuous cooling and pipe welding treatments has been evaluated. An empirically-modified, theoretically-based model is shown to accurately predict material degree of sensitization (DOS) as expressed by the electrochemical potentiokinetic reactivation (EPR) test after both simple and complex treatments. Material DOS is also examined using analytical electron microscopy to document grain boundary chromium depletion and is compared to EPR test results. 9 refs., 13 figs

  6. Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges

    Directory of Open Access Journals (Sweden)

    Jaebeom Lee

    2018-05-01

    Full Text Available Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various influential factors to vertical deflection such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through the Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean about the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.

  7. Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges.

    Science.gov (United States)

    Lee, Jaebeom; Lee, Kyoung-Chan; Lee, Young-Joo

    2018-05-09

    Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various influential factors to vertical deflection such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through the Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean about the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.
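
    A minimal Gaussian-process regression sketch in the spirit of the approach described above is given below: a composite kernel, a fit to (time, temperature) inputs, and a 95% interval from the predictive standard deviation. The synthetic deflection data and the particular kernel are illustrative assumptions only.

      # Gaussian-process regression sketch on synthetic (day, temperature) ->
      # deflection data; the kernel choice and data are assumptions.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

      rng = np.random.default_rng(0)
      days = np.linspace(0, 365, 120)
      temp = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1, days.size)
      X = np.column_stack([days, temp])
      deflection = 2.0 * np.log1p(days / 30) + 0.05 * temp + rng.normal(0, 0.2, days.size)

      kernel = ConstantKernel() * RBF(length_scale=[60.0, 5.0]) + WhiteKernel(0.05)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, deflection)

      X_new = np.column_stack([np.linspace(0, 540, 50), np.full(50, 15.0)])
      mean, std = gp.predict(X_new, return_std=True)
      lower, upper = mean - 1.96 * std, mean + 1.96 * std   # approximate 95 % interval
      print("predictive mean at day 540:", round(mean[-1], 2),
            "interval:", (round(lower[-1], 2), round(upper[-1], 2)))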

  8. Stereological quantification of tumor volume, mean nuclear volume and total number of melanoma cells correlated with morbidity and mortality

    DEFF Research Database (Denmark)

    Bønnelykke-Behrndtz, Marie Louise; Sørensen, Flemming Brandt; Damsgaard, Tine Engberg

    2008-01-01

    potential indicators of prognosis. Sixty patients who underwent surgery at the Department of Plastic Surgery, Aarhus University Hospital, from 1991 to 1994 were included in the study. Total tumor volume was estimated by the Cavalieri technique, total number of tumor cells by the optical dissector principle...... showed a significant impact on both disease-free survival (p=0.001) and mortality (p=0.009). In conclusion, tumor volume and total number of cancer cells were highly reproducible but did not add additional, independent prognostic information regarding the study population.......Stereological quantification of tumor volume, total number of tumor cells and mean nuclear volume provides unbiased data, regardless of the three-dimensional shape of the melanocytic lesion. The aim of the present study was to investigate whether these variables are reproducible and may represent...

  9. Step Prediction During Perturbed Standing Using Center Of Pressure Measurements

    Directory of Open Access Journals (Sweden)

    Milos R. Popovic

    2007-04-01

    Full Text Available The development of a sensor that can measure balance during quiet standing and predict stepping response in the event of perturbation has many clinically relevant applications, including closed-loop control of a neuroprosthesis for standing. This study investigated the feasibility of an algorithm that can predict in real-time when an able-bodied individual who is quietly standing will have to make a step to compensate for an external perturbation. Anterior and posterior perturbations were performed on 16 able-bodied subjects using a pulley system with a dropped weight. A linear relationship was found between the peak center of pressure (COP) velocity and the peak COP displacement caused by the perturbation. This result suggests that one can predict when a person will have to make a step based on COP velocity measurements alone. Another important feature of this finding is that the peak COP velocity occurs considerably before the peak COP displacement. As a result, one can predict if a subject will have to make a step in response to a perturbation sufficiently ahead of the time when the subject is actually forced to make the step. The proposed instability detection algorithm will be implemented in a sensor system using insole sheets in shoes with miniaturized pressure sensors by which the COPv can be continuously measured. The sensor system will be integrated in a closed-loop feedback system with a neuroprosthesis for standing in the near future.
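
    The detection idea can be sketched by differentiating a centre-of-pressure trace and flagging a likely compensatory step as soon as the COP velocity crosses a threshold, i.e. before the displacement peak. The sampling rate, threshold and synthetic trace below are assumptions.

      # Threshold-based step prediction from COP velocity on a synthetic trace;
      # sampling rate and threshold are illustrative assumptions.
      import numpy as np

      fs = 100.0                                           # sampling rate (Hz)
      t = np.arange(0, 3, 1 / fs)
      cop = 0.08 * (1 - np.exp(-4 * t)) + 0.002 * np.random.default_rng(0).normal(size=t.size)

      cop_velocity = np.gradient(cop, 1 / fs)              # m/s
      threshold = 0.25                                     # assumed step-prediction threshold
      crossing = np.argmax(cop_velocity > threshold) if np.any(cop_velocity > threshold) else None

      if crossing is not None:
          print(f"step predicted at t = {t[crossing]:.2f} s "
                f"(peak displacement reached at t = {t[np.argmax(cop)]:.2f} s)")
      else:
          print("no step predicted for this trace")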

  10. Trust-based collective view prediction

    CERN Document Server

    Luo, Tiejian; Xu, Guandong; Zhou, Jia

    2013-01-01

    Collective view prediction is to judge the opinions of an active web user based on unknown elements by referring to the collective mind of the whole community. Content-based recommendation and collaborative filtering are two mainstream collective view prediction techniques. They generate predictions by analyzing the text features of the target object or the similarity of users' past behaviors. Still, these techniques are vulnerable to artificially injected noise data, because they are not able to judge the reliability and credibility of the information sources. Trust-based Collective View ...

  11. Fetal size monitoring and birth-weight prediction: a new population-based approach.

    Science.gov (United States)

    Gjessing, H K; Grøttum, P; Økland, I; Eik-Nes, S H

    2017-04-01

    To develop a complete, population-based system for ultrasound-based fetal size monitoring and birth-weight prediction for use in the second and third trimesters of pregnancy. Using 31 516 ultrasound examinations from a population-based Norwegian clinical database, we constructed fetal size charts for biparietal diameter, femur length and abdominal circumference from 24 to 42 weeks' gestation. A reference curve of median birth weight for gestational age was estimated using 45 037 birth weights. We determined how individual deviations from the expected ultrasound measures predicted individual percentage deviations from expected birth weight. The predictive quality was assessed by explained variance of birth weight and receiver-operating characteristics curves for prediction of small-for-gestational age. A curve for intrauterine estimated fetal weight was constructed. Charts were smoothed using the gamlss non-linear regression method. The population-based approach, using bias-free ultrasound gestational age, produces stable estimates of size-for-age and weight-for-age curves in the range 24-42 weeks' gestation. There is a close correspondence between percentage deviations and percentiles of birth weight by gestational age, making it easy to convert between the two. The variance of birth weight that can be 'explained' by ultrasound increases from 8% at 20 weeks up to 67% around term. Intrauterine estimated fetal weight is 0-106 g higher than median birth weight in the preterm period. The new population-based birth-weight prediction model provides a simple summary measure, the 'percentage birth-weight deviation', to be used for fetal size monitoring throughout the third trimester. Predictive quality of the model can be measured directly from the population data. The model computes both median observed birth weight and intrauterine estimated fetal weight. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd.

  12. Predicting drug-target interaction for new drugs using enhanced similarity measures and super-target clustering.

    Science.gov (United States)

    Shi, Jian-Yu; Yiu, Siu-Ming; Li, Yiming; Leung, Henry C M; Chin, Francis Y L

    2015-07-15

    Predicting drug-target interaction using computational approaches is an important step in drug discovery and repositioning. To predict whether there will be an interaction between a drug and a target, most existing methods identify similar drugs and targets in the database. The prediction is then made based on the known interactions of these drugs and targets. This idea is promising. However, there are two shortcomings that have not yet been addressed appropriately. Firstly, most of the methods only use 2D chemical structures and protein sequences to measure the similarity of drugs and targets respectively. However, this information may not fully capture the characteristics determining whether a drug will interact with a target. Secondly, there are very few known interactions, i.e. many interactions are "missing" in the database. Existing approaches are biased towards known interactions and have no good solutions to handle possibly missing interactions which affect the accuracy of the prediction. In this paper, we enhance the similarity measures to include non-structural (and non-sequence-based) information and introduce the concept of a "super-target" to handle the problem of possibly missing interactions. Based on evaluations on real data, we show that our similarity measure is better than the existing measures and our approach is able to achieve higher accuracy than the two best existing algorithms, WNN-GIP and KBMF2K. Our approach is available at http://web.hku.hk/∼liym1018/projects/drug/drug.html or http://www.bmlnwpu.org/us/tools/PredictingDTI_S2/METHODS.html. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data

    Science.gov (United States)

    Dahl, Milo D.; Sharpe, Jacob A.

    2014-01-01

    A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.

  14. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Meng Li

    2015-01-01

    Full Text Available This paper puts forward a prediction model based on a membrane computing optimization algorithm for chaos time series; the model optimizes simultaneously the parameters of phase space reconstruction (τ, m) and least squares support vector machine (LS-SVM) (γ, σ) by using the membrane computing optimization algorithm. It is an important basis for spectrum management to predict accurately the change trend of parameters in the electromagnetic environment, which can help decision makers to adopt an optimal action. Then, the model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares the forecast model presented in it with conventional similar models. The experimental results show that, whether for single-step prediction or multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
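
    The phase-space reconstruction step that the model tunes can be sketched as a plain delay embedding (the membrane-computing optimization and LS-SVM training are not reproduced); the series and the chosen (tau, m) below are illustrative.

      # Delay embedding of a scalar series into m-dimensional vectors with delay tau,
      # the usual input representation for a predictor such as an LS-SVM.
      import numpy as np

      def delay_embed(series, tau, m):
          """Return the m-dimensional delay vectors of a scalar time series."""
          x = np.asarray(series, dtype=float)
          n_vectors = len(x) - (m - 1) * tau
          if n_vectors <= 0:
              raise ValueError("series too short for this (tau, m)")
          return np.column_stack([x[i * tau: i * tau + n_vectors] for i in range(m)])

      occupancy = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.default_rng(0).normal(size=200)
      tau, m, horizon = 3, 4, 1
      X = delay_embed(occupancy, tau, m)
      last_idx = (m - 1) * tau                      # index offset of each vector's latest sample
      y = occupancy[last_idx + horizon:]            # one-step-ahead targets
      X = X[: len(y)]                               # align inputs with available targets
      print("embedded inputs:", X.shape, "targets:", y.shape)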

  15. Graph-theoretic measures of multivariate association and prediction

    International Nuclear Information System (INIS)

    Friedman, J.H.; Rafsky, L.C.

    1983-01-01

    Interpoint-distance-based graphs can be used to define measures of association that extend Kendall's notion of a generalized correlation coefficient. The authors present particular statistics that provide distribution-free tests of independence sensitive to alternatives involving non-monotonic relationships. Moreover, since ordering plays no essential role, the ideas are fully applicable in a multivariate setting. The authors also define an asymmetric coefficient measuring the extent to which (a vector) X can be used to make single-valued predictions of (a vector) Y. The authors discuss various techniques for proving that such statistics are asymptotically normal. As an example of the effectiveness of their approach, the authors present an application to the examination of residuals from multiple regression. 18 references, 2 figures, 1 table

  16. Prediction of creamy mouthfeel based on texture attribute ratings of dairy desserts

    NARCIS (Netherlands)

    Weenen, H.; Jellema, R.H.; Wijk, de R.A.

    2006-01-01

    A quantitative predictive model for creamy mouthfeel in dairy desserts was developed, using PLS multivariate analysis of texture attributes. Based on 40 experimental custard desserts, a good correlation was obtained between measured and predicted creamy mouthfeel ratings. The model was validated by

  17. Detecting determinism with improved sensitivity in time series: rank-based nonlinear predictability score.

    Science.gov (United States)

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).

  18. A stereological approach for measuring the groove angles of intergranular corrosion

    International Nuclear Information System (INIS)

    Gwinner, B.; Borgard, J.-M.; Dumonteil, E.; Zoia, A.

    2017-01-01

    Highlights: • The IGC morphology has been characterized in 3D by X-ray μ-tomography. • The measurement of the angles of the IGC groove on 2D cross sections induces a bias. • A methodology is proposed to estimate the true value of the IGC groove angles in 3D. - Abstract: Non-sensitized austenitic stainless steels can be prone to intergranular corrosion when they are in contact with an oxidizing medium like nitric acid. Intergranular corrosion is characterized by the formation of grooves along the grain boundaries. The angle of these grooves is a key parameter, which directly informs of the intergranular corrosion kinetics. Most of the time, the angles of the grooves are experimentally measured on 2-dimensional cross sections of the corroded samples. This study discusses the relationship between the groove angle measured on 2-dimensional sections and the true groove angle in 3-dimensional space. This approach could also be easily extended to the study of crack angles in the domains of corrosion fatigue, stress corrosion cracking or mechanical fracture.

  19. Comparison of measurement- and proxy-based Vs30 values in California

    Science.gov (United States)

    Yong, Alan K.

    2016-01-01

    This study was prompted by the recent availability of a significant amount of openly accessible measured VS30 values and the desire to investigate the trend of using proxy-based models to predict VS30 in the absence of measurements. Comparisons between measured and model-based values were performed. The measured data included 503 VS30 values collected from various projects for 482 seismographic station sites in California. Six proxy-based models—employing geologic mapping, topographic slope, and terrain classification—were also considered. Included was a new terrain class model based on the Yong et al. (2012) approach but recalibrated with updated measured VS30 values. Using the measured VS30 data as the metric for performance, the predictive capabilities of the six models were determined to be statistically indistinguishable. This study also found three models that tend to underpredict VS30 at lower velocities (NEHRP Site Classes D–E) and overpredict at higher velocities (Site Classes B–C).

  20. Reliable Prediction of Insulin Resistance by a School-Based Fitness Test in Middle-School Children

    Directory of Open Access Journals (Sweden)

    Allen DavidB

    2009-09-01

    Full Text Available Objectives. (1) Determine the predictive value of a school-based test of cardiovascular fitness (CVF) for insulin resistance (IR); (2) compare a "school-based" prediction of IR to a "laboratory-based" prediction, using various measures of fitness and body composition. Methods. Middle school children (n = 82) performed the Progressive Aerobic Cardiovascular Endurance Run (PACER), a school-based CVF test, and underwent evaluation of maximal oxygen consumption treadmill testing (VO2 max), body composition (percent body fat and BMI z score), and IR (derived homeostasis model assessment index [HOMA-IR]). Results. PACER showed a strong correlation with VO2 max/kg (rs = 0.83, P < .001) and with HOMA-IR (rs = −0.60, P < .001). Multivariate regression analysis revealed that a school-based model (using PACER and BMI z score) predicted IR similar to a laboratory-based model (using VO2 max/kg of lean body mass and percent body fat). Conclusions. The PACER is a valid school-based test of CVF, is predictive of IR, and has a similar relationship to IR when compared to complex laboratory-based testing. Simple school-based measures of childhood fitness (PACER) and fatness (BMI z score) could be used to identify childhood risk for IR and evaluate interventions.

  1. Stereological estimate of the total number of neurons in spinal segment D9 of the red-eared turtle

    DEFF Research Database (Denmark)

    Walløe, Solveig; Nissen, Ulla Vig; Berg, Rune W

    2011-01-01

    The red-eared turtle is an important animal model for investigating the neural activity in the spinal circuit that generates motor behavior. However, basic anatomical features, including the number of neurons in the spinal segments involved, are unknown. In the present study, we estimate the total...... number of neurons in segment D9 of the spinal cord in the red-eared turtle (Trachemys scripta elegans) using stereological cell counting methods. In transverse spinal cord sections stained with modified Giemsa, motoneurons (MNs), interneurons (INs), and non-neuronal cells were distinguished according...... to location and morphology. Each cell type was then counted separately using an optical disector with the cell nucleus as counting item. The number of cells in segment D9 was as follows (mean ± SE): MNs, 2049 ± 74; INs, 16,135 ± 316; non-neuronal cells, 47,504 ± 478 (n = 6). These results provide the first...
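
    The counts reported above come from disector sampling; in its generic design-based form (standard stereological notation, not necessarily the exact sampling scheme of this study) the estimate is

        \hat{N}_V = \frac{\sum Q^{-}}{n \cdot a_{\mathrm{frame}} \cdot h}, \qquad \hat{N} = \hat{N}_V \cdot V_{\mathrm{ref}}

    where ΣQ⁻ is the number of nuclei counted with the disector counting rule, n the number of disector probes, a_frame the area of the unbiased counting frame, h the disector height and V_ref the reference volume of the segment.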

  2. Comparison of quantitative estimation of intracerebral hemorrhage and infarct volumes after thromboembolism in an embolic stroke model

    DEFF Research Database (Denmark)

    Eriksen, Nina; Rasmussen, Rune Skovgaard; Overgaard, Karsten

    2014-01-01

    . Group 1 was treated with saline, and group 2 was treated with 20 mg/kg recombinant tissue plasminogen activator to promote intracerebral hemorrhages. Stereology, semiautomated computer estimation, and manual erythrocyte counting were used to test the precision and efficiency of determining the size...... measurements, the stereological method was the most efficient and advantageous. CONCLUSIONS: We found that stereology was the superior method for quantification of hemorrhagic volume, especially for rodent petechial bleeding, which is otherwise difficult to measure. Our results suggest the possibility...

  3. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.

  4. Effect of hindlimb unloading on stereological parameters of the motor cortex and hippocampus in male rats.

    Science.gov (United States)

    Salehi, Mohammad Saied; Mirzaii-Dizgah, Iraj; Vasaghi-Gharamaleki, Behnoosh; Zamiri, Mohammad Javad

    2016-11-09

    Hindlimb unloading (HU) can cause motion and cognition dysfunction, although its cellular and molecular mechanisms are not well understood. The aim of the present study was to determine the stereological parameters of the brain areas involved in motion (motor cortex) and spatial learning - memory (hippocampus) under an HU condition. Sixteen adult male rats, kept under a 12 : 12 h light-dark cycle, were divided into two groups of freely moving (n=8) and HU (n=8) rats. The volume of motor cortex and hippocampus, the numerical cell density of neurons in layers I, II-III, V, and VI of the motor cortex, the entire motor cortex as well as the primary motor cortex, and the numerical density of the CA1, CA3, and dentate gyrus subregions of the hippocampus were estimated. No significant differences were observed in the evaluated parameters. Our results thus indicated that motor cortical and hippocampal atrophy and cell loss may not necessarily be involved in the motion and spatial learning memory impairment in the rat.

  5. Subcopula-based measure of asymmetric association for contingency tables.

    Science.gov (United States)

    Wei, Zheng; Kim, Daeyoung

    2017-10-30

    For the analysis of a two-way contingency table, a new asymmetric association measure is developed. The proposed method uses the subcopula-based regression between the discrete variables to measure the asymmetric predictive powers of the variables of interest. Unlike the existing measures of asymmetric association, the subcopula-based measure is insensitive to the number of categories in a variable, and thus, the magnitude of the proposed measure can be interpreted as the degree of asymmetric association in the contingency table. The theoretical properties of the proposed subcopula-based asymmetric association measure are investigated. We illustrate the performance and advantages of the proposed measure using simulation studies and real data examples. Copyright © 2017 John Wiley & Sons, Ltd.

  6. FIBRIN-TYPE FIBRINOID IN HUMAN PLACENTA: A STEREOLOGICAL ANALYSIS OF ITS ASSOCIATION WITH INTERVILLOUS VOLUME AND VILLOUS SURFACE AREA

    Directory of Open Access Journals (Sweden)

    Terry M Mayhew

    2011-05-01

    Full Text Available Stereological methods were used to examine fibrin-type fibrinoid deposition in the intervillous spaces of human placentas collected during gestation (12-41 weeks) and from term pregnancies at low (400 m) and high (3.6 km) altitude. The main aim was to test predictions about the relationships between fibrinoid deposits and either the volume of intervillous space or the surface area of (intermediate + terminal) villi. Fields of view on Masson trichrome-stained paraffin sections were selected as part of a systematic sampling design which randomised section location and orientation. Relative and absolute volumes were estimated by test point counting and surfaces by intersection counting. Apparent differences were tested by analyses of variance and relationships by correlation and regression analysis. Fibrinoid volume increased during gestation and correlated positively with intervillous volume and villous surface area. However, relative to intervillous volume, the main increase in fibrinoid occurred towards term (36-41 weeks). At high altitude, placentas contained more intervillous space but less fibrinoid. At both altitudes, there were significant correlations between fibrinoid volume and villous surface area. In all cases, changes in fibrinoid volume were commensurate with changes in villous surface area. Whilst findings lend support to the notion that fibrinoid deposition during normal gestation is influenced by the quality of vascular perfusion, they also emphasise that the extent of the villous surface is a more generally important factor. The villous surface may influence the steady state between coagulation and fibrinolysis since some pro-coagulatory events operate at the trophoblastic epithelium. They occur notably at sites of trophoblast de-epithelialisation and these arise following trauma or during the extrusion phase of normal epithelial turnover.
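
    The estimators named above (test-point counting for volumes, intersection counting for surfaces) have the classical design-based form

        \hat{V}_V = P_P = \frac{P_{\mathrm{str}}}{P_{\mathrm{ref}}}, \qquad \hat{S}_V = 2\, I_L = \frac{2 I}{L_T}

    where P_str and P_ref are the numbers of test points hitting the structure and the reference space, I is the number of intersections between the test lines and the villous surface, and L_T is the total test-line length within the reference space; absolute volumes and surfaces follow by multiplying these densities by the reference volume. These are the standard relations, quoted here for orientation rather than taken from the paper.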

  7. Researches of fruit quality prediction model based on near infrared spectrum

    Science.gov (United States)

    Shen, Yulin; Li, Lian

    2018-04-01

    With the improvement in standards for food quality and safety, people pay more attention to the internal quality of fruits, so measurement of fruit internal quality is increasingly important. Nondestructive analysis of soluble solid content (SSC) and total acid content (TAC) is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim to establish a fruit internal quality prediction model for SSC and TAC based on near infrared spectra. Firstly, prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP adaboost strong classifier, PCA + ELM and PCA + LS_SVM classifier are designed and implemented. Secondly, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Thirdly, we select the optimal models by comparing the 15 kinds of prediction model under a multi-classifier competition mechanism; non-parametric estimation is introduced to measure the effectiveness of the proposed models, with the reliability and variance of the non-parametric estimates used to evaluate each prediction result and the estimated value and confidence interval serving as a reference. The experimental results demonstrate that this approach achieves a sound evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two best models obtained from the non-parametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.
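
    A minimal sketch of one preprocessing-plus-prediction pipeline of the kind described (Savitzky-Golay smoothing, PCA compression, then a neural-network regressor) is given below. The spectra and SSC values are random placeholders, and an ordinary random split stands in for the Kennard-Stone sample selection used by the authors.

        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 600))            # placeholder NIR spectra (samples x wavelengths)
        y = X[:, 100:110].mean(axis=1) + 0.1 * rng.normal(size=120)   # placeholder SSC values

        X = savgol_filter(X, window_length=11, polyorder=2, axis=1)   # spectral smoothing

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        model = make_pipeline(StandardScaler(),
                              PCA(n_components=10),
                              MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                                           random_state=0))
        model.fit(X_tr, y_tr)
        print("R^2 on held-out spectra:", round(model.score(X_te, y_te), 3))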

  8. Fast glomerular quantification of whole ex vivo mouse kidneys using Magnetic Resonance Imaging at 9.4 Tesla

    Energy Technology Data Exchange (ETDEWEB)

    Chacon-Caldera, Jorge; Kraemer, Philipp; Schad, Lothar R. [Heidelberg Univ., Mannheim (Germany). Computer Assisted Clinical Medicine; Geraci, Stefania; Gretz, Norbert [Heidelberg Univ., Mannheim (Germany). Medical Research Centre; Cullen-McEwen, Luise; Bertram, John F. [Monash Univ., Melbourne, VIC (Australia). Development and Stem Cells Program and Dept. of Anatomy and Developmental Biology

    2016-05-01

    A method to measure total glomerular number (N_glom) in whole mouse kidneys using MRI is presented. The method relies on efficient acquisition times. A 9.4 T preclinical MRI system with a surface cryogenic coil and a 3D gradient echo sequence were used to image nine whole ex vivo BALB/c mouse kidneys labelled with cationized-ferritin (CF). A novel method to segment the glomeruli was developed. The quantification of glomeruli was achieved by identifying and fitting the probability distribution of glomeruli, thus reducing variations due to noise. For validation, N_glom of the same kidneys was also obtained using the gold standard: design-based stereology. Excellent agreement was found between the MRI and stereological measurements of N_glom, with values differing by less than 4%: (mean ± SD) MRI = 15 606 ± 1 178; stereology = 16 273 ± 1 523. Using a robust segmentation method and a reliable quantification method, it was possible to acquire N_glom with a scanning time of 33 minutes and 20 seconds. This was more than 8 times faster than previously presented MRI-based methods. Thus, an efficient approach to measure N_glom ex vivo in health and disease is provided.

  9. Crataegus Monogyna Aqueous Extract Ameliorates Cyclophosphamide-Induced Toxicity in Rat Testis: Stereological Evidences

    Directory of Open Access Journals (Sweden)

    Hassan Malekinejad

    2012-01-01

    Full Text Available Cyclophosphamide (CP) is extensively used as an antineoplastic agent for the treatment of various cancers, as well as an immunosuppressive agent. However, despite its wide spectrum of clinical uses, CP is known to cause several adverse effects, including reproductive toxicity. Crataegus monogyna is one of the oldest pharmaceutical plants that have been shown to be cytoprotective by scavenging free radicals. The present study was conducted to assess whether Crataegus monogyna fruit aqueous extract, with its anti-oxidant properties, could serve as a protective agent against reproductive toxicity during CP treatment in a rat model. Male Wistar rats were categorized into four groups. Two groups of rats were administered CP at a dose of 5 mg in 5 ml saline/kg/day for 28 days by oral gavage. One of these groups received Crataegus monogyna aqueous extract at a dose of 20 mg/kg/day orally four hours after cyclophosphamide administration. A vehicle-treated control group and a Crataegus monogyna control group were also included. The CP-treated group showed significant decreases in the body, testes and epididymides weights as well as many histological alterations. Stereological parameters and spermatogenic activities (Sertoli cell, repopulation and meiotic indices) were also significantly decreased by CP treatment. Notably, Crataegus coadministration caused a partial recovery in the above-mentioned parameters. These findings indicate that Crataegus monogyna may be partially protective against CP-induced testicular toxicity.

  10. Effects of melatonin on diclofenac sodium treated rat kidney: a stereological and histopathological study.

    Science.gov (United States)

    Khoshvakhti, Habib; Yurt, K Kübra; Altunkaynak, B Zuhal; Türkmen, Aysın P; Elibol, Ebru; Aydın, Işınsu; Kıvrak, Elfide G; Önger, M Emin; Kaplan, Süleyman

    2015-01-01

    In this study, we aimed to investigate the effect of diclofenac sodium (DS) and melatonin (MEL) on the kidney of prenatally exposed rats. Pregnant rats were divided into the control, physiological saline, DS, and DS + MEL groups. All injections were given from the 5th day after mating to the 15th day of the pregnancy. The physical disector and the Cavalieri principle were used to estimate the numerical density and total number of glomeruli and the volumetric parameters of the kidney, respectively. Our stereological results indicated that DS application during pregnancy led to decreases in the mean volume, numerical density, and total number of the glomeruli (p < 0.05). Light microscopic investigation showed congestion in blood vessels and shrinkage of the Bowman's space in the DS group. Moreover, there was degeneration in nephrons, including glomerulosclerosis and tubular defects, and an increase in the connective tissue in the kidneys of the DS-treated group. However, usage of MEL together with DS prevented these pathological alterations in the kidney. We suggest that DS might lead to adverse effects in the kidneys of rats that are prenatally subjected to this drug. Fortunately, these adverse effects can be prevented by melatonin supplementation.
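
    The Cavalieri estimator behind the volumetric parameters mentioned above has the standard form

        \hat{V} = \bar{t} \cdot \frac{a}{p} \cdot \sum_{i} P_i

    where t̄ is the mean distance between the systematically sampled sections, a/p the test area associated with each grid point and ΣP_i the total number of grid points hitting the kidney (or glomerular) profiles across sections; this is the textbook relation, not a formula quoted from the paper.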

  11. Measurement, analysis and prediction of topical UV filter bioavailability.

    Science.gov (United States)

    Roussel, L; Gilbert, E; Salmon, D; Serre, C; Gabard, B; Haftek, M; Maibach, H I; Pirot, F

    2015-01-30

    The aim of the present study was to objectively quantify and predict bioavailability of three sunscreen agents (i.e., benzophenone-3, 2-ethylhexylsalicylate, and 2-ethylhexyl-4-methoxycinnamate) in epidermis treated with petrolatum and emulsion-based formulations for 7 and 30 min on four human volunteers. Profiles of sunscreen agents through stratum corneum (SC), derived from the assessment of chemical amounts in SC layers collected by successive adhesive tape-stripping, were successfully fitted to Fick's second law of diffusion. The permeability coefficients of the sunscreen agents were found to be lower with the petrolatum than with the emulsion-based formulations, confirming the crucial role of the vehicle in topical delivery. Furthermore, the robustness of the methodology was confirmed by the linear relationship between the chemical absorption measured after 30 min and that predicted from the 7-min exposure experiment. Interestingly, in this dermatopharmacokinetic method, the deconvolution of permeability coefficients into their respective partition coefficients and absorption constants allowed a better understanding of vehicle effects upon topical bioavailability mechanisms and bioequivalence of skin products. Copyright © 2014 Elsevier B.V. All rights reserved.
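
    For orientation, tape-stripping profiles of this kind are commonly fitted to Crank's solution of Fick's second law for a membrane of thickness L that is initially solute-free, held at a constant surface concentration and with sink conditions at the inner face; the authors' exact model may differ in detail:

        \frac{C(x,t)}{K\,C_v} = 1 - \frac{x}{L} - \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{n}\,\sin\!\left(\frac{n\pi x}{L}\right)\exp\!\left(-\frac{n^{2}\pi^{2} D\, t}{L^{2}}\right)

    with K the vehicle-stratum corneum partition coefficient, C_v the concentration in the vehicle, D the diffusion coefficient and x the depth into the stratum corneum; the permeability coefficient then follows as k_p = K D / L.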

  12. Long live the liver: immunohistochemical and stereological study of hepatocytes, liver sinusoidal endothelial cells, Kupffer cells and hepatic stellate cells of male and female rats throughout ageing.

    Science.gov (United States)

    Marcos, Ricardo; Correia-Gomes, Carla

    2016-12-01

    Male/female differences in enzyme activity and gene expression in the liver are known to be attenuated with ageing. Nevertheless, the effect of ageing on liver structure and quantitative cell morphology remains unknown. Male and female Wistar rats aged 2, 6, 12 and 18 months were examined by means of stereological techniques and immunohistochemical tagging of hepatocytes (HEP), liver sinusoidal endothelial cells (LSEC), Kupffer cells (KC) and hepatic stellate cells (HSC) in order to assess the total number and number per gram of these cells throughout life. The mean cell volume of HEP and HSC, the lobular position and the collagen content of the liver were also evaluated with stereological techniques. The number per gram of HSC was similar for both genders and was maintained throughout ageing. The mean volume of HSC was also conserved but differences in the cell body and lobular location were observed. Statistically significant gender differences in HEP were noted in young rats (females had smaller and more binucleated HEP) but were attenuated with ageing. The same occurred for KC and LSEC, since the higher number per gram in young females disappeared in older animals. Liver collagen increased with ageing but only in males. Thus, the numbers of these four cell types are related throughout ageing, with well-defined cell ratios. The shape and lobular position of HSC change with ageing in both males and females. Gender dimorphism in HEP, KC and LSEC of young rat liver disappears with ageing.

  13. A Comparison Between Measured and Predicted Hydrodynamic Damping for a Jack-Up Rig Model

    DEFF Research Database (Denmark)

    Laursen, Thomas; Rohbock, Lars; Jensen, Jørgen Juncher

    1996-01-01

    An extensive set of measurements funded by the EU project Large Scale Facilities Program has been carried out on a model of a jack-up rig at the Danish Hydraulic Institute. The test series were conducted by MSC and include determination of base shears and overturning moments in both regular...... methods. In the comparison between the model test results and the theoretical predictions, the hydrodynamic damping proves to be the most important uncertain parameter. It is shown that a relatively large hydrodynamic damping must be assumed in the theoretical calculations in order to predict the measured...

  14. Do implicit measures of attitudes incrementally predict snacking behaviour over explicit affect-related measures?

    Science.gov (United States)

    Ayres, Karen; Conner, Mark T; Prestwich, Andrew; Smith, Paul

    2012-06-01

    Various studies have demonstrated an association between implicit measures of attitudes and dietary-related behaviours. However, no study has tested whether implicit measures of attitudes predict dietary behaviour after controlling for explicit measures of palatability. In a prospective design, two studies assessed the validity of measures of implicit attitude (Implicit Association Test, IAT) and explicit measures of palatability and health-related attitudes on self-reported (Studies 1 and 2) and objective food (fruit vs. chocolate) choice (Study 2). Following regression analyses, in both studies, implicit measures of attitudes were correlated with food choice but failed to significantly predict food choice when controlling specifically for explicit measures of palatability. These consistent relationships emerged despite using different category labels within the IAT in the two studies. The current research suggests implicit measures of attitudes may not predict dietary behaviours after taking into account the palatability of food. This is important in order to establish determinants that explain unique variance in dietary behaviours and to inform dietary change interventions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. A Validation of Subchannel Based CHF Prediction Model for Rod Bundles

    International Nuclear Information System (INIS)

    Hwang, Dae-Hyun; Kim, Seong-Jin

    2015-01-01

    A large CHF data base was procured from various sources, which included square and non-square lattice test bundles. CHF prediction accuracy was evaluated for various models, including the CHF lookup table method, empirical correlations, and phenomenological DNB models. The parametric effects of the mass velocity and the unheated wall were investigated from the experimental results and incorporated into the development of a local-parameter CHF correlation applicable to APWR conditions. According to the CHF design criterion, the CHF should not occur at the hottest rod in the reactor core during normal operation and anticipated operational occurrences with at least a 95% probability at a 95% confidence level. This is accomplished by assuring that the minimum DNBR (Departure from Nucleate Boiling Ratio) in the reactor core is greater than the limit DNBR, which accounts for the accuracy of the CHF prediction model. The limit DNBR can be determined from the inverse of the lower tolerance limit of M/P that is evaluated from the measured-to-predicted CHF ratios for the relevant CHF data base. It is important to evaluate the adequacy of the CHF prediction model for application to actual reactor core conditions. Validation of a CHF prediction model provides the degree of accuracy inferred from the comparison of solution and data. To achieve a required accuracy for the CHF prediction model, it may be necessary to calibrate the model parameters by employing the validation results. If the accuracy of the model is acceptable, then it is applied to the real complex system with the inferred accuracy of the model. In a conventional approach, the accuracy of the CHF prediction model was evaluated from the M/P statistics for the relevant CHF data base, obtained by comparing the nominal values of the predicted and measured CHFs. The experimental uncertainty of the CHF data was not considered in this approach to determine the limit DNBR. When a subchannel based CHF prediction model
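
    A minimal sketch of one conventional way to turn M/P statistics into a limit DNBR, assuming normally distributed M/P ratios, is shown below; the numbers are illustrative only, and the paper's treatment (including the experimental uncertainty of the CHF data) is more involved.

        import numpy as np
        from scipy import stats

        def one_sided_k(n, p=0.95, conf=0.95):
            """One-sided tolerance factor k such that mean - k*s bounds the lower
            p-quantile of a normal population with the given confidence."""
            delta = stats.norm.ppf(p) * np.sqrt(n)          # noncentrality parameter
            return stats.nct.ppf(conf, df=n - 1, nc=delta) / np.sqrt(n)

        # measured-to-predicted CHF ratios (illustrative numbers only)
        mp = np.array([1.02, 0.97, 1.05, 0.99, 1.01, 0.95, 1.08, 1.00, 0.98, 1.03,
                       0.96, 1.04, 1.01, 0.99, 1.02])
        k = one_sided_k(len(mp))
        mp_lower = mp.mean() - k * mp.std(ddof=1)           # 95/95 lower tolerance limit of M/P
        print("limit DNBR ~", round(1.0 / mp_lower, 3))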

  16. Research on cross - Project software defect prediction based on transfer learning

    Science.gov (United States)

    Chen, Ya; Ding, Xiaoming

    2018-04-01

    To address the two challenges in cross-project software defect prediction (the distribution differences between the source project and target project datasets, and the class imbalance in the datasets), a cross-project software defect prediction method based on transfer learning, named NTrA, is proposed. Firstly, the class imbalance of the source project data is resolved with the Augmented Neighborhood Cleaning Algorithm. Secondly, the data gravity method is used to assign different weights on the basis of the attribute similarity of the source project and target project data. Finally, a defect prediction model is constructed using the Trad boost algorithm. Experiments were conducted using data from NASA and SOFTLAB, respectively, taken from a published PROMISE dataset. The results show that the method achieves good values of recall and F-measure, and good prediction results.

  17. Predicting Liaison: an Example-Based Approach

    NARCIS (Netherlands)

    Greefhorst, A.P.M.; Bosch, A.P.J. van den

    2016-01-01

    Predicting liaison in French is a non-trivial problem to model. We compare a memory-based machine-learning algorithm with a rule-based baseline. The memory-based learner is trained to predict whether liaison occurs between two words on the basis of lexical, orthographic, morphosyntactic, and

  18. Empirical Approaches to Measuring the Intelligibility of Different Varieties of English in Predicting Listener Comprehension

    Science.gov (United States)

    Kang, Okim; Thomson, Ron I.; Moran, Meghan

    2018-01-01

    This study compared five research-based intelligibility measures as they were applied to six varieties of English. The objective was to determine which approach to measuring intelligibility would be most reliable for predicting listener comprehension, as measured through a listening comprehension test similar to the Test of English as a Foreign…

  19. Prediction and Validation of Mars Pathfinder Hypersonic Aerodynamic Data Base

    Science.gov (United States)

    Gnoffo, Peter A.; Braun, Robert D.; Weilmuenster, K. James; Mitcheltree, Robert A.; Engelund, Walter C.; Powell, Richard W.

    1998-01-01

    Postflight analysis of the Mars Pathfinder hypersonic, continuum aerodynamic data base is presented. Measured data include accelerations along the body axis and axis normal directions. Comparisons of preflight simulation and measurements show good agreement. The prediction of two static instabilities associated with movement of the sonic line from the shoulder to the nose and back was confirmed by measured normal accelerations. Reconstruction of atmospheric density during entry has an uncertainty directly proportional to the uncertainty in the predicted axial coefficient. The sensitivity of the moment coefficient to freestream density, kinetic models and center-of-gravity location are examined to provide additional consistency checks of the simulation with flight data. The atmospheric density as derived from axial coefficient and measured axial accelerations falls within the range required for sonic line shift and static stability transition as independently determined from normal accelerations.

  20. Prediction of Force Measurements of a Microbend Sensor Based on an Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Kemal Fidanboylu

    2009-09-01

    Full Text Available Artificial neural network (ANN) based prediction of the response of a microbend fiber optic sensor is presented. To the best of our knowledge no similar work has been previously reported in the literature. Parallel corrugated plates with three deformation cycles, 6 mm thickness of the spacer material and 16 mm mechanical periodicity between deformations were used in the microbend sensor. Multilayer Perceptron (MLP) with different training algorithms, Radial Basis Function (RBF) network and General Regression Neural Network (GRNN) are used as ANN models in this work. All of these models can predict the sensor responses with considerable errors. RBF has the best performance with the smallest mean square error (MSE) values of training and test results. Among the MLP algorithms and GRNN, the Levenberg-Marquardt algorithm has good results. These models successfully predict the sensor responses, hence ANNs can be used as a useful tool in the design of more robust fiber optic sensors.

  1. Comparison of LOFT zero power physics testing measurement results with predicted values

    International Nuclear Information System (INIS)

    Rushton, B.L.; Howe, T.M.

    1978-01-01

    The results of zero power physics testing measurements in LOFT have been evaluated to assess the adequacy of the physics data used in the safety analyses performed for the LOFT FSAR and Technical Specifications. Comparisons of measured data with computed data were made for control rod worths, temperature coefficients, boron worths, and pressure coefficients. Measured boron concentrations at exact critical points were compared with predicted concentrations. Based on these comparisons, the reactivity parameter values used in the LOFT safety analyses were assessed for conservatism

  2. Petrophysical properties of greensand as predicted from NMR measurements

    DEFF Research Database (Denmark)

    Hossain, Zakir; Grattoni, Carlos A.; Solymar, Mikael

    2011-01-01

    ABSTRACT: Nuclear magnetic resonance (NMR) is a useful tool in reservoir evaluation. The objective of this study is to predict petrophysical properties from NMR T2 distributions. A series of laboratory experiments including core analysis, capillary pressure measurements, NMR T2 measurements...... with macro-pores. Permeability may be predicted from NMR by using Kozeny's equation when surface relaxivity is known. Capillary pressure drainage curves may be predicted from NMR T2 distribution when pore size distribution within a sample is homogeneous....

  3. Enhancing performance of next generation FSO communication systems using soft computing-based predictions.

    Science.gov (United States)

    Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori

    2006-06-12

    The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce the link availability and may introduce burst errors thus degrading the performance of the system. We investigate the suitability of utilizing soft-computing (SC) based tools for improving performance of free-space optical (FSO) communications systems. The SC based tools are used for the prediction of key parameters of a FSO communications system. Measured data collected from an experimental FSO communication system is used as training and testing data for a proposed multi-layer neural network predictor (MNNP) used to predict future parameter values. The predicted parameters are essential for reducing transmission errors by improving the antenna's accuracy of tracking data beams. This is particularly essential for periods considered to be of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with original measurements.

  4. Ensemble-based prediction of RNA secondary structures.

    Science.gov (United States)

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

    Accurate structure prediction methods play an important role in the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between

  5. Can radon gas measurements be used to predict earthquakes?

    International Nuclear Information System (INIS)

    2009-01-01

    After the tragic earthquake of April 6, 2009 in Aquila (Abruzzo), a debate has begun in Italy regarding the alleged prediction of this earthquake by a scientist working in the Gran Sasso National Laboratory, based on radon content measurements. Radon is a radioactive gas originating from the decay of natural radioactive elements present in the soil. IRSN specialists are actively involved in ongoing research projects on the impact of mechanical stresses on radon emissions from underground structures, and some of their results dating from several years ago are being brought up in this debate. These specialists are therefore currently presenting their perspective on the relationships between radon emissions and seismic activity, based on publications on the subject. (authors)

  6. Prediction of objectively measured physical activity and sedentariness among blue-collar workers using survey questionnaires

    DEFF Research Database (Denmark)

    Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik

    2016-01-01

    responded to a questionnaire containing information about personal and work related variables, available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 days measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least......-squares linear regression models were developed, predicting objectively measured exposures from selected predictors in the questionnaire. RESULTS: A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary...

  7. Highway traffic noise prediction based on GIS

    Science.gov (United States)

    Zhao, Jianghua; Qin, Qiming

    2014-05-01

    Before building a new road, we need to predict the traffic noise generated by vehicles. Traditional traffic noise prediction methods are tied to particular locations; they are time-consuming and costly, and their results cannot be visualized. A Geographical Information System (GIS) can not only solve the problem of manual data processing, but also provide noise values at any point. The paper selected a road segment from Wenxi to Heyang. According to the geographical overview of the study area and a comparison between several models, we combine the JTG B03-2006 model and the HJ2.4-2009 model to predict the traffic noise depending on the circumstances. Finally, we interpolate the noise values at each prediction point and then generate noise contours. By overlaying the village data on the noise contour layer, we obtain the thematic maps. The use of GIS for road traffic noise prediction greatly facilitates decision-making because of the spatial analysis functions and visualization capabilities of GIS. We can clearly see the districts where noise is excessive, and thus it becomes convenient to optimize the road alignment and take noise reduction measures such as installing sound barriers and relocating villages.

  8. Precision comparison of the erosion rates derived from 137Cs measurements models with predictions based on empirical relationship

    International Nuclear Information System (INIS)

    Yang Mingyi; Liu Puling; Li Liqing

    2004-01-01

    The soil samples were collected in 6 cultivated runoff plots with a grid sampling method, and the soil erosion rates derived from 137Cs measurements were calculated. The precision of the models of Zhang Xinbao, Zhou Weizhi, Yang Hao and Walling was compared with predictions based on an empirical relationship; the data showed that the precision of the 4 models is high within a 50 m slope length, except for slopes with a low slope angle and short length. Relatively, the precision of Walling's model is better than that of Zhang Xinbao, Zhou Weizhi and Yang Hao. In addition, the relationship between the parameter Γ in Walling's improved model and the slope angle was analyzed; the relation is Y = 0.0109·X^1.0072. (authors)

  9. Total numbers of neurons and glial cells in cortex and basal ganglia of aged brains with Down syndrome--a stereological study.

    Science.gov (United States)

    Karlsen, Anna Schou; Pakkenberg, Bente

    2011-11-01

    The total numbers of neurons and glial cells in the neocortex and basal ganglia in adults with Down syndrome (DS) were estimated with design-based stereological methods, providing quantitative data on brains affected by delayed development and accelerated aging. Cell numbers, volume of regions, and densities of neurons and glial cell subtypes were estimated in brains from 4 female DS subjects (mean age 66 years) and 6 female controls (mean age 70 years). The DS subjects were estimated to have about 40% fewer neocortical neurons in total (11.1 × 10^9 vs. 17.8 × 10^9, 2p ≤ 0.001) and almost 30% fewer neocortical glial cells with no overlap to controls (12.8 × 10^9 vs. 18.2 × 10^9, 2p = 0.004). In contrast, the total number of neurons in the basal ganglia was the same in the 2 groups, whereas the number of oligodendrocytes in the basal ganglia was reduced by almost 50% in DS (405 × 10^6 vs. 816 × 10^6, 2p = 0.01). We conclude that trisomy 21 affects cortical structures more than central gray matter, emphasizing the differential impairment of brain development. Despite concomitant Alzheimer-like pathology, the neurodegenerative outcome in a DS brain deviates from common Alzheimer disease.

  10. Prediction-error of Prediction Error (PPE)-based Reversible Data Hiding

    OpenAIRE

    Wu, Han-Zhou; Wang, Hong-Xia; Shi, Yun-Qing

    2016-01-01

    This paper presents a novel reversible data hiding (RDH) algorithm for gray-scaled images, in which the prediction-error of prediction error (PPE) of a pixel is used to carry the secret data. In the proposed method, the pixels to be embedded are firstly predicted with their neighboring pixels to obtain the corresponding prediction errors (PEs). Then, by exploiting the PEs of the neighboring pixels, the prediction of the PEs of the pixels can be determined. And, a sorting technique based on th...

  11. Stereological estimates of nuclear volume and other quantitative variables in supratentorial brain tumors. Practical technique and use in prognostic evaluation

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Braendgaard, H; Chistiansen, A O

    1991-01-01

    The use of morphometry and modern stereology in malignancy grading of brain tumors is only poorly investigated. The aim of this study was to present these quantitative methods. A retrospective feasibility study of 46 patients with supratentorial brain tumors was carried out to demonstrate...... the practical technique. The continuous variables were correlated with the subjective, qualitative WHO classification of brain tumors, and the prognostic value of the parameters was assessed. Well differentiated astrocytomas (n = 14) had smaller estimates of the volume-weighted mean nuclear volume and mean...... nuclear profile area than those of anaplastic astrocytomas (n = 13) (2p = 3.1×10^-3 and 2p = 4.8×10^-3, respectively). No differences were seen between the latter type of tumor and glioblastomas (n = 19). The nuclear index was of the same magnitude in all three tumor types, whereas the mitotic index...
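
    The volume-weighted mean nuclear volume referred to above is normally obtained with point-sampled intercepts, for which the standard estimator is

        \bar{v}_V = \frac{\pi}{3}\,\overline{\ell_0^{\,3}}

    where ℓ_0 is the length of an intercept through a nucleus, measured through a sampling point in a uniformly random direction, and the bar denotes the mean of the cubed intercept lengths; this is the textbook relation rather than a formula quoted from the study.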

  12. Reliable Prediction of Insulin Resistance by a School-Based Fitness Test in Middle-School Children

    Directory of Open Access Journals (Sweden)

    Todd Varness

    2009-01-01

    Full Text Available Objectives. (1) Determine the predictive value of a school-based test of cardiovascular fitness (CVF) for insulin resistance (IR); (2) compare a “school-based” prediction of IR to a “laboratory-based” prediction, using various measures of fitness and body composition. Methods. Middle school children (n=82) performed the Progressive Aerobic Cardiovascular Endurance Run (PACER), a school-based CVF test, and underwent evaluation of maximal oxygen consumption treadmill testing (VO2 max), body composition (percent body fat and BMI z score), and IR (derived homeostasis model assessment index [HOMA-IR]). Results. PACER showed a strong correlation with VO2 max/kg (rs = 0.83, P<.001) and with HOMA-IR (rs = −0.60, P<.001). Multivariate regression analysis revealed that a school-based model (using PACER and BMI z score) predicted IR similar to a laboratory-based model (using VO2 max/kg of lean body mass and percent body fat). Conclusions. The PACER is a valid school-based test of CVF, is predictive of IR, and has a similar relationship to IR when compared to complex laboratory-based testing. Simple school-based measures of childhood fitness (PACER) and fatness (BMI z score) could be used to identify childhood risk for IR and evaluate interventions.

  13. A Wavelet Kernel-Based Primal Twin Support Vector Machine for Economic Development Prediction

    Directory of Open Access Journals (Sweden)

    Fang Su

    2013-01-01

    Full Text Available Economic development forecasting allows planners to choose the right strategies for the future. This study proposes an economic development prediction method based on the wavelet kernel-based primal twin support vector machine algorithm. As gross domestic product (GDP) is an important indicator to measure economic development, economic development prediction means GDP prediction in this study. The wavelet kernel-based primal twin support vector machine algorithm solves two smaller-sized quadratic programming problems instead of a single large one as in the traditional support vector machine algorithm. Economic development data of Anhui province from 1992 to 2009 are used to study the prediction performance of the wavelet kernel-based primal twin support vector machine algorithm. The comparison of the mean error of economic development prediction between the wavelet kernel-based primal twin support vector machine and traditional support vector machine models, trained on samples with 3-5 dimensional input vectors, respectively, is given in this paper. The testing results show that the economic development prediction accuracy of the wavelet kernel-based primal twin support vector machine model is better than that of the traditional support vector machine.

  14. Accurate bearing remaining useful life prediction based on Weibull distribution and artificial neural network

    Science.gov (United States)

    Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat

    2015-05-01

    Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance to improve reliability and to reduce machine breakdowns and maintenance costs. Bearings are among the most important components in industry; they need to be monitored and their RUL predicted. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL by Prognostics and Health Management (PHM) techniques. In this paper, the proposed method is based on the data-driven prognostic approach. The combination of the Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD) is explored. WD is used only in the training phase, to fit the measurements and to avoid areas of fluctuation in the time domain. The SFAM training process uses the fitted measurements at the present and previous inspection time points as input, whereas the SFAM testing process is based on real measurements at the present and previous inspections. Thanks to the fuzzy learning process, SFAM has a strong ability and good performance in learning nonlinear time series. As output, seven classes are defined: a healthy bearing and six states of bearing degradation. In order to find the optimal RUL prediction, a smoothing phase is proposed in this paper. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) based on vibration signals. The proposed prediction approach can be applied to the prognostics of various other mechanical assets.

  15. Prediction of lung tumour position based on spirometry and on abdominal displacement: Accuracy and reproducibility

    International Nuclear Information System (INIS)

    Hoisak, Jeremy D.P.; Sixel, Katharina E.; Tirona, Romeo; Cheung, Patrick C.F.; Pignol, Jean-Philippe

    2006-01-01

    Background and purpose: A simulation investigating the accuracy and reproducibility of a tumour motion prediction model over clinical time frames is presented. The model is formed from surrogate and tumour motion measurements, and used to predict the future position of the tumour from surrogate measurements alone. Patients and methods: Data were acquired from five non-small cell lung cancer patients, on 3 days. Measurements of respiratory volume by spirometry and abdominal displacement by a real-time position tracking system were acquired simultaneously with X-ray fluoroscopy measurements of superior-inferior tumour displacement. A model of tumour motion was established and used to predict future tumour position, based on surrogate input data. The calculated position was compared against true tumour motion as seen on fluoroscopy. Three different imaging strategies, pre-treatment, pre-fraction and intrafractional imaging, were employed in establishing the fitting parameters of the prediction model. The impact of each imaging strategy upon accuracy and reproducibility was quantified. Results: When establishing the predictive model using pre-treatment imaging, four of five patients exhibited poor interfractional reproducibility for either surrogate in subsequent sessions. Simulating the formulation of the predictive model prior to each fraction resulted in improved interfractional reproducibility. The accuracy of the prediction model was only improved in one of five patients when intrafractional imaging was used. Conclusions: Employing a prediction model established from measurements acquired at planning resulted in localization errors. Pre-fractional imaging improved the accuracy and reproducibility of the prediction model. Intrafractional imaging was of less value, suggesting that the accuracy limit of a surrogate-based prediction model is reached with once-daily imaging

  16. Long lasting structural changes in primary motor cortex after motor skill learning: a behavioural and stereological study

    Directory of Open Access Journals (Sweden)

    PAOLA MORALES

    2008-12-01

    Full Text Available Many motor skills, once acquired, are stored over a long time period, probably sustained by permanent neuronal changes. Thus, in this paper we have investigated with quantitative stereology the generation and persistence of neuronal density changes in primary motor cortex (MI) following motor skill learning (skilled reaching task). Rats were trained on a lateralised reaching task during an "early" (22-31 days old) or "late" (362-371 days old) postnatal period. The trained and corresponding control rats were sacrificed at day 372, immediately after the behavioural testing. The "early" trained group preserved the learned skilled reaching task when tested at day 372, without requiring any additional training. The "late" trained group showed a similar capacity to that of the "early" trained group for learning the skilled reaching task. All trained animals ("early" and "late" trained groups) showed a significant interhemispheric decrease of neuronal density in the corresponding motor forelimb representation area of MI (cortical layers II-III

  17. Thin-film-based CdTe photovoltaic module characterization: measurements and energy prediction improvement.

    Science.gov (United States)

    Lay-Ekuakille, A; Arnesano, A; Vergallo, P

    2013-01-01

    Photovoltaic characterization is a topic of major interest in the field of renewable energy. Monocrystalline and polycrystalline modules are the most widely used and hence well characterized, since many laboratories hold data on them. Conversely, cadmium telluride (CdTe) thin-film modules are, in some circumstances, difficult to use for energy prediction. This work covers outdoor testing of photovoltaic modules, in particular CdTe ones. The aim is to obtain temperature coefficients that best predict the energy production. A First Solar (K-275) module has been used for the purposes of this research. Outdoor characterizations were performed at the Department of Innovation Engineering, University of Salento, Lecce, Italy; the location of Lecce represents a typical site in southern Italy. The module was exposed outdoors and tested under clear-sky as well as cloudy-sky conditions. During testing, the global inclined irradiance varied between 0 and 1500 W/m^2. About 37,000 I-V characteristics were acquired, allowing temperature coefficients to be derived as a function of irradiance and ambient temperature. The module was characterized by measuring the full temperature-irradiance matrix in the range from 50 to 1300 W/m^2 and from -1 to 40 °C, from October 2011 to February 2012. Afterwards, the module energy output under real conditions was calculated with the "matrix method" of SUPSI-ISAAC, and the results were compared with the five-month energy output data of the same module measured with the outdoor energy yield facility in Lecce.

  18. Robust Predictive Functional Control for Flight Vehicles Based on Nonlinear Disturbance Observer

    Directory of Open Access Journals (Sweden)

    Yinhui Zhang

    2015-01-01

    Full Text Available A novel robust predictive functional control based on nonlinear disturbance observer is investigated in order to address the control system design for flight vehicles with significant uncertainties, external disturbances, and measurement noise. Firstly, the nonlinear longitudinal dynamics of the flight vehicle are transformed into linear-like state-space equations with state-dependent coefficient matrices. And then the lumped disturbances are considered in the linear structure predictive model of the predictive functional control to increase the precision of the predictive output and resolve the intractable mismatched disturbance problem. As the lumped disturbances cannot be derived or measured directly, the nonlinear disturbance observer is applied to estimate the lumped disturbances, which are then introduced to the predictive functional control to replace the unknown actual lumped disturbances. Consequently, the robust predictive functional control for the flight vehicle is proposed. Compared with the existing designs, the effectiveness and robustness of the proposed flight control are illustrated and validated in various simulation conditions.

  19. Predicting lake responses to phosphorus loading with measurement-based characterization of P recycling in sediments

    Science.gov (United States)

    Katsev, S.; Li, J.

    2017-12-01

    Predicting the time scales on which lake ecosystems respond to changes in anthropogenic phosphorus loadings is critical for devising efficient management strategies and setting regulatory limits on loading. Internal loading of phosphorus from sediments, however, can significantly contribute to the lake P budget and may delay recovery from eutrophication. The efficiency of mineralization and recycling of settled P in bottom sediments, which is ultimately responsible for this loading, is often poorly known and is surprisingly poorly characterized in the societally important systems such as the Great Lakes. We show that a simple mass-balance model that uses only a minimum number of parameters, all of which are measurable, can successfully predict the time scales over which the total phosphorus (TP) content of lakes responds to changes in external loadings, in a range of situations. The model also predicts the eventual TP levels attained under stable loading conditions. We characterize the efficiency of P recycling in Lake Superior based on a detailed characterization of sediments at 13 locations that includes chemical extractions for P and Fe fractions and characterization of sediment-water exchange fluxes of P. Despite the low efficiency of P remobilization in these deeply oxygenated sediments (only 12% of deposited P is recycled), effluxes of dissolved phosphorus (2.5-7.0 μmol m-2 d-1) still contribute 37% to total P inputs into the water column. In this oligotrophic large lake, phosphate effluxes are regulated by organic sedimentation rather than sediment redox conditions. By adjusting the recycling efficiency to conditions in other Laurentian Great Lakes, we show that the model reproduces the historical data for total phosphorus levels. Analysis further suggests that, in the Lower Lakes, the rate of P sequestration from water column into sediments has undergone a significant change in recent decades, possibly in response to their invasion by quagga mussels
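
    A deliberately simplified mass-balance form of the kind described (a generic textbook-style sketch, not the authors' exact parameterization) is

        \frac{d\,TP}{dt} = \frac{L_{\mathrm{ext}}}{V} - \left[\rho + \sigma\,(1-r)\right] TP, \qquad \tau = \frac{1}{\rho + \sigma\,(1-r)}

    where L_ext is the external phosphorus load, V the lake volume, ρ the hydraulic flushing rate, σ the gross settling rate constant and r the fraction of settled phosphorus recycled from the sediments; τ is the response time over which TP relaxes towards its steady-state value L_ext / {V[ρ + σ(1 − r)]} after a change in loading.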

  20. The Prediction of Metal Slopping in LD Converter on Base an Acoustic Signal

    Directory of Open Access Journals (Sweden)

    Kostúr, K.

    2006-01-01

    Full Text Available Slopping in a BOF has negative consequences: it pollutes the environment, lowers the yield and causes equipment damage. The prediction of this phenomenon is based on processing the information from a measuring microphone; a change of frequency content in a certain range serves as a signal for the prediction of slopping. In this paper two methods for the prediction of slopping are described. The first method is based on measuring and processing the sound emitted from the vessel during the blow. The second method uses the Fourier transform to process the acoustic signal from the sonic meter. The success rate of prediction has been evaluated with the help of five criteria. It is possible to forecast slopping on a selected frequency band; this is the essence of the second method, which achieves a high success rate (criterion K1). Note that criterion K5 defines the acknowledgment of the duration of slopping; this criterion has the highest value.
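
    The core step of the second method (following the energy of the microphone signal in a selected frequency band with a windowed Fourier transform) can be sketched as below; the sampling rate, band limits and alarm criterion are placeholders, not values from the paper.

        import numpy as np

        def band_energy(signal, fs, f_lo, f_hi, win_len):
            """Energy of `signal` in [f_lo, f_hi] Hz for consecutive windows."""
            energies = []
            for start in range(0, len(signal) - win_len + 1, win_len):
                win = signal[start:start + win_len] * np.hanning(win_len)
                spec = np.abs(np.fft.rfft(win)) ** 2
                freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
                band = (freqs >= f_lo) & (freqs <= f_hi)
                energies.append(spec[band].sum())
            return np.array(energies)

        fs = 8000                                   # placeholder sampling rate (Hz)
        t = np.arange(0, 10, 1.0 / fs)
        mic = np.random.default_rng(0).standard_normal(t.size)   # placeholder converter sound

        e = band_energy(mic, fs, f_lo=300, f_hi=900, win_len=4096)
        baseline = np.median(e[:10])
        alarm = e < 0.5 * baseline                  # e.g. flag a sustained drop in band energy
        print("windows flagged:", int(alarm.sum()))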

  1. Review and evaluation of performance measures for survival prediction models in external validation settings

    Directory of Open Access Journals (Sweden)

    M. Shafiqur Rahman

    2017-04-01

    Full Text Available Abstract Background When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. Methods An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Results Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell’s concordance measure which tended to increase as censoring increased. Conclusions We recommend that Uno’s concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller’s measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston’s D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and recommended to report routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive
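
    To make the concordance idea concrete, here is a minimal NumPy implementation of Harrell's C for right-censored data (the measure the abstract notes is sensitive to censoring); the toy times, event indicators, and risk scores are made up.

```python
# Minimal Harrell's concordance index for right-censored survival data.
# A pair (i, j) is comparable if the subject with the shorter observed time
# had an event; it is concordant if that subject also has the higher risk score.
import numpy as np

def harrell_c(time, event, risk):
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:   # i fails first, uncensored
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

time = [5, 8, 12, 3, 9]
event = [1, 0, 1, 1, 0]              # 1 = event observed, 0 = censored
risk = [2.1, 0.5, 0.3, 3.0, 0.8]     # higher score = higher predicted risk
print("Harrell's C = %.3f" % harrell_c(time, event, risk))
```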

  2. Glycated Hemoglobin Measurement and Prediction of Cardiovascular Disease

    DEFF Research Database (Denmark)

    Di Angelantonio, Emanuele; Gao, Pei; Khan, Hassan

    2014-01-01

    IMPORTANCE: The value of measuring levels of glycated hemoglobin (HbA1c) for the prediction of first cardiovascular events is uncertain. OBJECTIVE: To determine whether adding information on HbA1c values to conventional cardiovascular risk factors is associated with improvement in prediction of c...

  3. Measuring and prediction of global solar ultraviolet radiation (0.295-0.385 μm) under clear and cloudless skies

    International Nuclear Information System (INIS)

    Wright, Jaime

    2008-01-01

    Values of global solar ultraviolet radiation were measured with an ultraviolet radiometer and also predicted with an atmospheric spectral model. The values obtained with the physically based atmospheric spectral model were analyzed and compared with experimental values measured in situ. Measurements were performed for different zenith angles under clear-sky conditions in Heredia, Costa Rica. The necessary input data include latitude, altitude, surface albedo, and Earth-Sun distance, as well as atmospheric characteristics: atmospheric turbidity, precipitable water, and atmospheric ozone. The comparison between measured and predicted values was successful. (author) [es

  4. Method of fission product beta spectra measurements for predicting reactor anti-neutrino emission

    Energy Technology Data Exchange (ETDEWEB)

    Asner, D.M.; Burns, K.; Campbell, L.W.; Greenfield, B.; Kos, M.S., E-mail: markskos@gmail.com; Orrell, J.L.; Schram, M.; VanDevender, B.; Wood, L.S.; Wootan, D.W.

    2015-03-11

    The nuclear fission process that occurs in the core of nuclear reactors results in unstable, neutron-rich fission products that subsequently beta decay and emit electron antineutrinos. These reactor neutrinos have served neutrino physics research from the initial discovery of the neutrino to today's precision measurements of neutrino mixing angles. The prediction of the absolute flux and energy spectrum of the emitted reactor neutrinos hinges upon a series of seminal papers based on measurements performed in the 1970s and 1980s. The steadily improving reactor neutrino measurement techniques and recent reconsiderations of the agreement between the predicted and observed reactor neutrino flux motivate revisiting the underlying beta spectra measurements. A method is proposed to use an accelerator proton beam delivered to an engineered target to yield a neutron field tailored to reproduce the neutron energy spectrum present in the core of an operating nuclear reactor. Foils of the primary reactor fissionable isotopes placed in this tailored neutron flux will ultimately emit beta particles from the resultant fission products. Measurement of these beta particles in a time projection chamber with a perpendicular magnetic field provides a distinctive set of systematic considerations for comparison to the original seminal beta spectra measurements. Ancillary measurements such as gamma-ray emission and post-irradiation radiochemical analysis will further constrain the absolute normalization of beta emissions per fission. The requirements for unfolding the beta spectra measured with this method into a predicted reactor neutrino spectrum are explored.

  5. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    Science.gov (United States)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
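
    The display data and the perceptual (PQ/ICT-CP) transform are not reproduced here; the sketch below only mirrors the evaluation pipeline named in the abstract: fit an SVM regressor on display parameters and score it with RMSE and Pearson/Spearman correlation using scikit-learn and SciPy. The synthetic quality scores are an assumption.

```python
# Skeleton of the evaluation pipeline only: fit an SVR on synthetic display
# parameters and score with RMSE and Pearson/Spearman correlation. The paper's
# data and perceptual transform are not reproduced here.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
# columns: max luminance, min luminance, gamut area, bit depth, local contrast
X = rng.uniform([100, 0.0001, 0.3, 8, 1], [4000, 0.1, 0.9, 12, 10], size=(n, 5))
quality = (2 * np.log10(X[:, 0]) - 5 * X[:, 1] + 3 * X[:, 2]
           + 0.2 * X[:, 3] + 0.1 * X[:, 4] + rng.normal(0, 0.2, n))  # synthetic ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, quality, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print("RMSE %.3f, Pearson %.3f, Spearman %.3f"
      % (rmse, pearsonr(pred, y_te)[0], spearmanr(pred, y_te)[0]))
```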

  6. Using instrumental (CIE and reflectance) measures to predict consumers' acceptance of beef colour.

    Science.gov (United States)

    Holman, Benjamin W B; van de Ven, Remy J; Mao, Yanwei; Coombs, Cassius E O; Hopkins, David L

    2017-05-01

    We aimed to establish colorimetric thresholds based upon the capacity for instrumental measures to predict consumer satisfaction with beef colour. A web-based survey was used to distribute standardised photographs of beef M. longissimus lumborum with known colorimetrics (L*, a*, b*, hue, chroma, ratio of reflectance at 630nm and 580nm, and estimated deoxymyoglobin, oxymyoglobin and metmyoglobin concentrations) for scrutiny. Consumer demographics and perceived importance of colour to beef value were also evaluated. It was found that a* provided the most simple and robust prediction of beef colour acceptability. Beef colour was considered acceptable (with 95% acceptance) when a* values were equal to or above 14.5. Demographic effects on this threshold were negligible, but consumer nationality and gender did contribute to variation in the relative importance of colour to beef value. These results provide future beef colour studies with context to interpret objective colour measures in terms of consumer acceptance and market appeal. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  7. Knowledge-based Fragment Binding Prediction

    Science.gov (United States)

    Tang, Grace W.; Altman, Russ B.

    2014-01-01

    Target-based drug discovery must assess many drug-like compounds for potential activity. Focusing on low-molecular-weight compounds (fragments) can dramatically reduce the chemical search space. However, approaches for determining protein-fragment interactions have limitations. Experimental assays are time-consuming, expensive, and not always applicable. At the same time, computational approaches using physics-based methods have limited accuracy. With increasing high-resolution structural data for protein-ligand complexes, there is now an opportunity for data-driven approaches to fragment binding prediction. We present FragFEATURE, a machine learning approach to predict small molecule fragments preferred by a target protein structure. We first create a knowledge base of protein structural environments annotated with the small molecule substructures they bind. These substructures have low-molecular weight and serve as a proxy for fragments. FragFEATURE then compares the structural environments within a target protein to those in the knowledge base to retrieve statistically preferred fragments. It merges information across diverse ligands with shared substructures to generate predictions. Our results demonstrate FragFEATURE's ability to rediscover fragments corresponding to the ligand bound with 74% precision and 82% recall on average. For many protein targets, it identifies high scoring fragments that are substructures of known inhibitors. FragFEATURE thus predicts fragments that can serve as inputs to fragment-based drug design or serve as refinement criteria for creating target-specific compound libraries for experimental or computational screening. PMID:24762971

  8. Size-based predictions of food web patterns

    DEFF Research Database (Denmark)

    Zhang, Lai; Hartvig, Martin; Knudsen, Kim

    2014-01-01

    We employ size-based theoretical arguments to derive simple analytic predictions of ecological patterns and properties of natural communities: size-spectrum exponent, maximum trophic level, and susceptibility to invasive species. The predictions are brought about by assuming that an infinite number...... of species are continuously distributed on a size-trait axis. It is, however, an open question whether such predictions are valid for a food web with a finite number of species embedded in a network structure. We address this question by comparing the size-based predictions to results from dynamic food web...... simulations with varying species richness. To this end, we develop a new size- and trait-based food web model that can be simplified into an analytically solvable size-based model. We confirm existing solutions for the size distribution and derive novel predictions for maximum trophic level and invasion...

  9. Degeneration and regeneration of motor and sensory nerves: a stereological study of crush lesions in rat facial and mental nerves

    DEFF Research Database (Denmark)

    Barghash, Ziad; Larsen, Jytte Overgaard; Al-Bishri, Awad

    2013-01-01

    The aim of this study was to evaluate the degeneration and regeneration of a sensory nerve and a motor nerve at the histological level after a crush injury. Twenty-five female Wistar rats had their mental nerve and the buccal branch of their facial nerve compressed unilaterally against a glass rod...... for 30 s. Specimens of the compressed nerves and the corresponding control nerves were dissected at 3, 7, and 19 days after surgery. Nerve cross-sections were stained with osmium tetroxide and toluidine blue and analysed using two-dimensional stereology. We found differences between the two nerves both...... in the normal anatomy and in the regenerative pattern. The mental nerve had a larger cross-sectional area including all tissue components. The mental nerve had a larger volume fraction of myelinated axons and a correspondingly smaller volume fraction of endoneurium. No differences were observed...

  10. Predicting responses from Rasch measures.

    Science.gov (United States)

    Linacre, John M

    2010-01-01

    There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are to be estimated from the current data, but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data may lead to worse fit to future data. The predictive power of several Rasch and Rasch-related models are discussed in the context of the Netflix Prize. Rasch-related models are proposed based on Singular Value Decomposition (SVD) and Boltzmann Machines.

  11. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons....... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction...... for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data...

  12. Using quantitative breath sound measurements to predict lung function following resection

    Directory of Open Access Journals (Sweden)

    Keus Leendert

    2010-10-01

    Full Text Available Abstract Background Predicting postoperative lung function is important for estimating the risk of complications and long-term disability after pulmonary resection. We investigated the capability of vibration response imaging (VRI) as an alternative to lung scintigraphy for prediction of postoperative lung function in patients with intrathoracic malignancies. Methods Eighty-five patients with intrathoracic malignancies, considered candidates for lung resection, were prospectively studied. The projected postoperative (ppo) lung function was calculated using perfusion scintigraphy, ventilation scintigraphy, and VRI. Two sets of assessments were made: one for lobectomy and one for pneumonectomy. Clinical concordance was defined as both methods agreeing that either a patient was or was not a surgical candidate based on a ppoFEV1% and ppoDLCO% > 40%. Results Limits of agreement between scintigraphy and VRI for ppo following lobectomy were -16.47% to 15.08% (mean difference = -0.70%; 95%CI = -2.51% to 1.12%) and for pneumonectomy were -23.79% to 19.04% (mean difference = -2.38%; 95%CI = -4.69% to -0.07%). Clinical concordance between VRI and scintigraphy was 73% for pneumonectomy and 98% for lobectomy. For patients who had surgery and postoperative lung function testing (n = 31), ppoFEV1% using scintigraphic methods correlated with measured postoperative values better than projections using VRI (adjusted R2 = 0.32 scintigraphy; 0.20 VRI); however, the difference between methods failed to reach statistical significance. Limits of agreement between measured FEV1% postoperatively and ppoFEV1% based on perfusion scintigraphy were -16.86% to 23.73% (mean difference = 3.44%; 95%CI = -0.29% to 7.16%); based on VRI they were -19.56% to 28.99% (mean difference = 4.72%; 95%CI = 0.27% to 9.17%). Conclusions Further investigation of VRI as an alternative to lung scintigraphy for prediction of postoperative lung function is warranted.
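
    The limits of agreement quoted above are of the Bland-Altman form, mean difference ± 1.96 × SD of the paired differences; the short sketch below shows that computation on made-up ppoFEV1% pairs, not the study's patient data.

```python
# How limits of agreement of the kind reported above are typically computed
# (Bland-Altman): mean difference +/- 1.96 x SD of the paired differences.
# The arrays below are illustrative, not the study's measurements.
import numpy as np

ppo_scinti = np.array([45.2, 52.1, 38.7, 60.3, 48.9, 55.4])   # ppoFEV1 % by scintigraphy
ppo_vri    = np.array([47.0, 50.2, 40.1, 58.8, 51.3, 53.9])   # ppoFEV1 % by VRI

diff = ppo_scinti - ppo_vri
mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)
loa_low, loa_high = mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff
ci_half = 1.96 * sd_diff / np.sqrt(len(diff))   # approx. 95% CI of the mean difference

print("mean difference %.2f%% (95%% CI %.2f%% to %.2f%%)"
      % (mean_diff, mean_diff - ci_half, mean_diff + ci_half))
print("limits of agreement %.2f%% to %.2f%%" % (loa_low, loa_high))
```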

  13. Predictive value of noninvasive measures of atherosclerosis for incident myocardial infarction - The Rotterdam study

    NARCIS (Netherlands)

    van der Meer, IM; Bots, ML; Hofman, A; del Sol, AI; van der Kuip, DAM; Witteman, JCM

    2004-01-01

    Background - Several noninvasive methods are available to investigate the severity of extracoronary atherosclerotic disease. No population- based study has yet examined whether differences exist between these measures with regard to their predictive value for myocardial infarction (MI) or whether a

  14. An instantaneous spatiotemporal model to predict a bicyclist's Black Carbon exposure based on mobile noise measurements

    Science.gov (United States)

    Dekoninck, Luc; Botteldooren, Dick; Int Panis, Luc

    2013-11-01

    Several studies have shown that a significant amount of daily air pollution exposure, in particular Black Carbon (BC), is inhaled during trips. Assessing this contribution to exposure remains difficult because, on the one hand, local air pollution maps lack spatio-temporal resolution, while on the other hand, direct measurement of particulate matter concentrations remains expensive. This paper proposes to use in-traffic noise measurements in combination with geographical and meteorological information for predicting BC exposure during commuting trips. Mobile noise measurements are cheaper and easier to perform than mobile air pollution measurements and can easily be used in participatory sensing campaigns. The uniqueness of the proposed model lies in the choice of noise indicators that goes beyond the traditional overall A-weighted noise level used in previous work. Noise and BC exposures are both related to the traffic intensity but also to traffic speed and traffic dynamics. Inspired by theoretical knowledge on the emission of noise and BC, the low-frequency engine-related noise and the difference between high-frequency and low-frequency noise, which indicates the traffic speed, are introduced into the model. In addition, it is shown that splitting BC into a local and a background component significantly improves the model. The coefficients of the proposed model are extracted from 200 commuter bicycle trips. The predicted average exposure over a single trip correlates with measurements with a Pearson coefficient of 0.78 using only four parameters: the low frequency noise level, wind speed, the difference between high and low frequency noise and a street canyon index expressing local air pollution dispersion properties.

  15. Comparison of Physician-Predicted to Measured Low Vision Outcomes

    Science.gov (United States)

    Chan, Tiffany L.; Goldstein, Judith E.; Massof, Robert W.

    2013-01-01

    Purpose To compare low vision rehabilitation (LVR) physicians’ predictions of the probability of success of LVR to patients’ self-reported outcomes after provision of usual outpatient LVR services; and to determine if patients’ traits influence physician ratings. Methods The Activity Inventory (AI), a self-report visual function questionnaire, was administered pre and post-LVR to 316 low vision patients served by 28 LVR centers that participated in a collaborative observational study. The physical component of the Short Form-36, Geriatric Depression Scale, and Telephone Interview for Cognitive Status were also administered pre-LVR to measure physical capability, depression and cognitive status. Following patient evaluation, 38 LVR physicians estimated the probability of outcome success (POS), using their own criteria. The POS ratings and change in functional ability were used to assess the effects of patients’ baseline traits on predicted outcomes. Results A regression analysis with a hierarchical random effects model showed no relationship between LVR physician POS estimates and AI-based outcomes. In another analysis, Kappa statistics were calculated to determine the probability of agreement between POS and AI-based outcomes for different outcome criteria. Across all comparisons, none of the kappa values were significantly different from 0, which indicates the rate of agreement is equivalent to chance. In an exploratory analysis, hierarchical mixed effects regression models show that POS ratings are associated with information about the patient’s cognitive functioning and the combination of visual acuity and functional ability, as opposed to visual acuity or functional ability alone. Conclusions Physicians’ predictions of LVR outcomes appear to be influenced by knowledge of patients’ cognitive functioning and the combination of visual acuity and functional ability - information physicians acquire from the patient’s history and examination. However
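
    The agreement statistic used above (Cohen's kappa) can be computed directly with scikit-learn; the binary success/failure labels below are invented purely to show the call.

```python
# Cohen's kappa for agreement between physician-predicted and outcome-measured
# categories (made-up binary labels, 1 = "successful outcome").
from sklearn.metrics import cohen_kappa_score

physician_pred  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
measured_result = [0, 1, 1, 1, 0, 0, 1, 0, 0, 1]

kappa = cohen_kappa_score(physician_pred, measured_result)
print("Cohen's kappa = %.2f (0 means agreement no better than chance)" % kappa)
```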

  16. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    Science.gov (United States)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

    Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows as compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that the ANN's point predictions lack reliability since the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to a neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases, and during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the 2nd stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval and (iii) minimum width of the prediction interval. The method is illustrated using a real world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived
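
    The paper's two-stage GA calibration is not reproduced here; as a much simpler stand-in, the sketch below builds an ensemble of small MLPs on bootstrap resamples, forms a percentile prediction interval, and reports the same two summary quantities quoted in the abstract (coverage of measured points and average interval width). Data, network settings, and the resampling scheme are assumptions, and this ensemble captures model spread only, not observation noise.

```python
# Simplified illustration only: an ensemble of MLPs trained on bootstrap
# resamples is used to form a percentile prediction interval, then coverage
# and average interval width are computed as in the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 2))                          # e.g. rainfall, upstream flow
y = 50 * X[:, 0] + 20 * X[:, 1] ** 2 + rng.normal(0, 3, 300)  # synthetic flow, m3/s

X_tr, y_tr, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

preds = []
for seed in range(20):                                # ensemble of 20 members
    idx = rng.integers(0, len(X_tr), len(X_tr))       # bootstrap resample
    m = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
    m.fit(X_tr[idx], y_tr[idx])
    preds.append(m.predict(X_val))
preds = np.array(preds)

lower, upper = np.percentile(preds, [2.5, 97.5], axis=0)
coverage = np.mean((y_val >= lower) & (y_val <= upper)) * 100
width = np.mean(upper - lower)
print("coverage %.1f%% of validation points, mean interval width %.2f m3/s"
      % (coverage, width))
```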

  17. Mechanism-Based Classification of PAH Mixtures to Predict Carcinogenic Potential.

    Science.gov (United States)

    Tilton, Susan C; Siddens, Lisbeth K; Krueger, Sharon K; Larkin, Andrew J; Löhr, Christiane V; Williams, David E; Baird, William M; Waters, Katrina M

    2015-07-01

    We have previously shown that relative potency factors and DNA adduct measurements are inadequate for predicting carcinogenicity of certain polycyclic aromatic hydrocarbons (PAHs) and PAH mixtures, particularly those that function through alternate pathways or exhibit greater promotional activity compared to benzo[a]pyrene (BaP). Therefore, we developed a pathway-based approach for classification of tumor outcome after dermal exposure to PAH/mixtures. FVB/N mice were exposed to dibenzo[def,p]chrysene (DBC), BaP, or environmental PAH mixtures (Mix 1-3) following a 2-stage initiation/promotion skin tumor protocol. Resulting tumor incidence could be categorized by carcinogenic potency as DBC > BaP = Mix2 = Mix3 > Mix1 = Control, based on statistical significance. Gene expression profiles measured in skin of mice collected 12 h post-initiation were compared with tumor outcome for identification of short-term bioactivity profiles. A Bayesian integration model was utilized to identify biological pathways predictive of PAH carcinogenic potential during initiation. Integration of probability matrices from four enriched pathways (P PAH mixtures. These data further provide a 'source-to-outcome' model that could be used to predict PAH interactions during tumorigenesis and provide an example of how mode-of-action-based risk assessment could be employed for environmental PAH mixtures. © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Measurement of lung volume by lung perfusion scanning using SPECT and prediction of postoperative respiratory function

    International Nuclear Information System (INIS)

    Andou, Akio; Shimizu, Nobuyosi; Maruyama, Shuichiro

    1992-01-01

    Measurement of lung volume by lung perfusion scanning using single photon emission computed tomography (SPECT) and its usefulness for the prediction of respiratory function after lung resection were investigated. The lung volumes calculated in 5 patients by SPECT (threshold level 20%) using 99mTc-macroaggregated albumin (MAA) related very closely to the actually measured lung volumes. This result prompted us to calculate the total lung volume and the volume of the lobe to be resected in 18 patients with lung cancer by SPECT. Based on the data obtained, postoperative respiratory function was predicted. The predicted values of forced vital capacity (FVC), forced expiratory volume (FEV1.0), and maximum vital volume (MVV) showed closer correlations with the actually measured postoperative values (FVC, FEV1.0, MVV: r = 0.944, r = 0.917, r = 0.795, respectively) than the values predicted by ordinary lung perfusion scanning. This method facilitates more detailed evaluation of local lung function on a lobe-by-lobe basis, and can be applied clinically to predict postoperative respiratory function. (author)

  19. Prediction of midline dose from entrance ad exit dose using OSLD measurements for total irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Chang Heon; Park, Jong Min; Park, So Yeon; Chun, Min Soo; Han, Ji Hye; Cho, Jin Dong; Kim, Jung In [Dept. of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of)

    2017-06-15

    This study aims to predict the midline dose based on the entrance and exit doses from optically stimulated luminescence detector (OSLD) measurements for total body irradiation (TBI). For TBI treatment, beam data sets were measured for 6 MV and 15 MV beams. To evaluate the tissue lateral effect of various thicknesses, the midline dose and peak dose were measured using a solid water phantom (SWP) and ion chamber. The entrance and exit doses were measured using OSLDs. OSLDs were attached onto the central beam axis at the entrance and exit surfaces of the phantom. The predicted midline dose was evaluated as the sum of the entrance and exit doses by OSLD measurement. The ratio of the entrance dose to the exit dose was evaluated at various thicknesses. The ratio of the peak dose to the midline dose was 1.12 for a 30 cm thick SWP at both energies. When the patient thickness is greater than 30 cm, the 15 MV beam should be used to ensure dose homogeneity. The ratio of the entrance dose to the exit dose was less than 1.0 for thicknesses of less than 30 cm and 40 cm at 6 MV and 15 MV, respectively. Therefore, the predicted midline dose can be underestimated for thinner bodies. At 15 MV, the ratios were approximately 1.06 for a thickness of 50 cm. In cases where adult patients are treated with the 15 MV photon beam, it is possible for the predicted midline dose to be overestimated for parts of the body with a thickness of 50 cm or greater. The predicted midline dose and OSLD-measured midline dose depend on the phantom thickness. For in-vivo dosimetry of TBI, the measured dose should be corrected in order to accurately predict the midline dose.

  20. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Shiraishi, Satomi; Moore, Kevin L., E-mail: kevinmoore@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California 92093 (United States); Tan, Jun [Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas 75490 (United States); Olsen, Lindsey A. [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri 63110 (United States)

    2015-02-15

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QMclin − QMpred, and a coefficient of determination, R². For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are

  1. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    International Nuclear Information System (INIS)

    Shiraishi, Satomi; Moore, Kevin L.; Tan, Jun; Olsen, Lindsey A.

    2015-01-01

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V10Gy (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QMclin − QMpred, and a coefficient of determination, R². For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are stratified based on

  2. Maximal heart rate in soccer players: Measured versus age-predicted

    Directory of Open Access Journals (Sweden)

    Pantelis T Nikolaidis

    2015-02-01

    Full Text Available Background: Although maximal heart rate (HRmax) is widely used to assess exercise intensity in sport training, and particularly in soccer, there are limited data with regard to the use of age-based prediction equations of HRmax in soccer players. The aim of this study was to compare the measured HRmax with two prediction equations (Fox: HRmax = 220 – age; Tanaka: HRmax = 208 – 0.7 × age) in soccer players. Methods: Adolescent (n = 162, 15.8 ± 1.5 years) and adult players (n = 158, 23.4 ± 4.6 years), all members of competitive clubs, voluntarily performed a graded exercise field test (Conconi protocol) to assess HRmax. Results: The measured HRmax (197.6 ± 9.4 bpm in total, 200.2 ± 7.9 bpm in adolescent players, and 195.0 ± 10.0 bpm in adult players) was explained by the formula HRmax = 212.3 – 0.75 × age (r = −0.41, standard error of the estimate = 8.6). In the total sample, Fox-HRmax overestimated measured HRmax [mean difference (95% confidence interval) = 2.8 bpm (1.6; 3.9)], while Tanaka-HRmax underestimated HRmax [–3.3 bpm (–4.5; –2.2)]. In adolescents, Fox-HRmax overestimated measured HRmax [4.0 bpm (2.5; 5.5)] and Tanaka-HRmax underestimated HRmax [–3.2 bpm (–4.7; –1.8)]. In adults, Tanaka-HRmax underestimated HRmax [–5.0 bpm (–5.3; –4.7)], while there was no difference between Fox-HRmax and measured HRmax [1.6 bpm (–3.4; 0.2)]. Conclusions: The results of this study failed to validate two widely used prediction equations in a large sample of soccer players, indicating the need for a sport-specific equation. On the other hand, the new equation that we present should be investigated further by future studies before being adopted by coaches and fitness trainers.
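
    The three equations discussed above, evaluated at an example age (the study's fitted equation is HRmax = 212.3 − 0.75 × age):

```python
# The three HRmax equations from the abstract, applied to an example age.
def fox_hrmax(age):
    return 220 - age                 # Fox equation

def tanaka_hrmax(age):
    return 208 - 0.7 * age           # Tanaka equation

def soccer_hrmax(age):
    return 212.3 - 0.75 * age        # equation fitted in this study

age = 16
print("age %d: Fox %.1f, Tanaka %.1f, study equation %.1f bpm"
      % (age, fox_hrmax(age), tanaka_hrmax(age), soccer_hrmax(age)))
```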

  3. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than those of the complex numerical forecasting model, which occupies large computational resources, is time-consuming, and has a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  4. Gene function prediction based on Gene Ontology Hierarchy Preserving Hashing.

    Science.gov (United States)

    Zhao, Yingwen; Fu, Guangyuan; Wang, Jun; Guo, Maozu; Yu, Guoxian

    2018-02-23

    Gene Ontology (GO) uses structured vocabularies (or terms) to describe the molecular functions, biological roles, and cellular locations of gene products in a hierarchical ontology. GO annotations associate genes with GO terms and indicate that the given gene products carry out the biological functions described by the relevant terms. However, predicting correct GO annotations for genes from a massive set of GO terms as defined by GO is a difficult challenge. To combat this challenge, we introduce a Gene Ontology Hierarchy Preserving Hashing (HPHash) based semantic method for gene function prediction. HPHash first measures the taxonomic similarity between GO terms. It then uses a hierarchy preserving hashing technique to keep the hierarchical order between GO terms, and to optimize a series of hashing functions to encode massive GO terms via compact binary codes. After that, HPHash utilizes these hashing functions to project the gene-term association matrix into a low-dimensional one and performs semantic similarity based gene function prediction in the low-dimensional space. Experimental results on three model species (Homo sapiens, Mus musculus and Rattus norvegicus) for interspecies gene function prediction show that HPHash performs better than other related approaches and that it is robust to the number of hash functions. In addition, we also take HPHash as a plugin for BLAST based gene function prediction. From the experimental results, HPHash again significantly improves the prediction performance. The codes of HPHash are available at: http://mlda.swu.edu.cn/codes.php?name=HPHash. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Predicting Story Goodness Performance from Cognitive Measures Following Traumatic Brain Injury

    Science.gov (United States)

    Le, Karen; Coelho, Carl; Mozeiko, Jennifer; Krueger, Frank; Grafman, Jordan

    2012-01-01

    Purpose: This study examined the prediction of performance on measures of the Story Goodness Index (SGI; Le, Coelho, Mozeiko, & Grafman, 2011) from executive function (EF) and memory measures following traumatic brain injury (TBI). It was hypothesized that EF and memory measures would significantly predict SGI outcomes. Method: One hundred…

  6. Electrical resistivity measurements to predict abrasion resistance

    Indian Academy of Sciences (India)

    Bulletin of Materials Science, Volume 31, Issue 2. Electrical resistivity measurements to predict abrasion resistance of rock aggregates ... It was seen that correlation coefficients were increased for the rock classes. In addition ...

  7. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20-years measured data of global solar radiation. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with other three models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and long. 29°34′E), and then, the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. Mostly common statistical errors are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulas of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
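
    The seventeen new formulae are not given in the abstract; as a representative of the temperature-based model class, the sketch below implements the widely used Hargreaves-Samani form Rs = kRs·√(Tmax − Tmin)·Ra, with the usual interior-site coefficient kRs = 0.16 (coastal sites often use about 0.19). The input values are illustrative.

```python
# Representative example of the temperature-based class only (not one of the
# paper's new models): the Hargreaves-Samani estimate
#   Rs = kRs * sqrt(Tmax - Tmin) * Ra.
import math

def hargreaves_samani(t_max, t_min, ra, k_rs=0.16):
    """Daily global solar radiation, same units as Ra (e.g. MJ m-2 day-1)."""
    return k_rs * math.sqrt(t_max - t_min) * ra

ra = 35.0   # extraterrestrial radiation for the day, MJ m-2 day-1 (illustrative)
print("Rs ~ %.1f MJ m-2 day-1" % hargreaves_samani(t_max=32.0, t_min=21.0, ra=ra))
```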

  8. Hippocampal MR volumetry

    Science.gov (United States)

    Haller, John W.; Botteron, K.; Brunsden, Barry S.; Sheline, Yvette I.; Walkup, Ronald K.; Black, Kevin J.; Gado, Mokhtar; Vannier, Michael W.

    1994-09-01

    Goal: To estimate hippocampal volumes from in vivo 3D magnetic resonance (MR) brain images and determine inter-rater and intra-rater repeatability. Objective: The precision and repeatability of hippocampal volume estimates using stereologic measurement methods is sought. Design: Five normal control and five schizophrenic subjects were MR scanned using an MPRAGE protocol. Fixed grid stereologic methods were used to estimate hippocampal volumes on a graphics workstation. The images were preprocessed using histogram analysis to standardize 3D MR image scaling from 16 to 8 bits and image volumes were interpolated to 0.5 mm3 isotropic voxels. The following variables were constant for the repeated stereologic measures: grid size, inter-slice distance (1.5 mm), voxel dimensions (0.5 mm3), number of hippocampi measured (10), total number of measurements per rater (40), and number of raters (5). Two grid sizes were tested to determine the coefficient of error associated with the number of sampled 'hits' (approximately 140 and 280) on the hippocampus. Starting slice and grid position were randomly varied to assure unbiased volume estimates. Raters were blind to subject identity, diagnosis, and side of the brain from which the image volumes were extracted and the order of subject presentation was randomized for each of the raters. Inter- and intra-rater intraclass correlation coefficients (ICC) were determined. Results: The data indicate excellent repeatability of fixed grid stereologic hippocampal volume measures when using an inter-slice distance of 1.5 mm and a 6.25 mm2 grid (inter-rater ICCs = 0.86-0.97, intra-rater ICCs = 0.85-0.97). One major advantage of the current study was the use of 3D MR data which significantly improved visualization of hippocampal boundaries by providing the ability to access simultaneous orthogonal views while counting stereological marks within the hippocampus. Conclusion: Stereological estimates of 3D volumes from 2D MR
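
    The fixed-grid stereologic estimate used here is the Cavalieri point-counting estimator, V ≈ t · (a/p) · ΣP. The sketch below uses the abstract's slice spacing (1.5 mm) and grid area per point (6.25 mm²); the point counts themselves are made up.

```python
# Minimal Cavalieri point-counting estimate as used in fixed-grid stereology:
#   V ~ t * (a/p) * sum(P_i)
# where t is the distance between sections, a/p the grid area associated with
# one point, and P_i the points hitting the structure on section i.
def cavalieri_volume(points_per_section, slice_spacing_mm, area_per_point_mm2):
    return slice_spacing_mm * area_per_point_mm2 * sum(points_per_section)

points = [4, 9, 14, 18, 21, 19, 16, 12, 7, 3]      # hypothetical hippocampal counts
volume_mm3 = cavalieri_volume(points, slice_spacing_mm=1.5, area_per_point_mm2=6.25)
print("estimated hippocampal volume ~ %.1f mm^3" % volume_mm3)
```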

  9. Comparing predicted estrogen concentrations with measurements in US waters

    International Nuclear Information System (INIS)

    Kostich, Mitch; Flick, Robert; Martinson, John

    2013-01-01

    The range of exposure rates to the steroidal estrogens estrone (E1), beta-estradiol (E2), estriol (E3), and ethinyl estradiol (EE2) in the aquatic environment was investigated by modeling estrogen introduction via municipal wastewater from sewage plants across the US. Model predictions were compared to published measured concentrations. Predictions were congruent with most of the measurements, but a few measurements of E2 and EE2 exceed those that would be expected from the model, despite very conservative model assumptions of no degradation or in-stream dilution. Although some extreme measurements for EE2 may reflect analytical artifacts, remaining data suggest concentrations of E2 and EE2 may reach twice the 99th percentile predicted from the model. The model and bulk of the measurement data both suggest that cumulative exposure rates to humans are consistently low relative to effect levels, but also suggest that fish exposures to E1, E2, and EE2 sometimes substantially exceed chronic no-effect levels. -- Highlights: •Conservatively modeled steroidal estrogen concentrations in ambient water. •Found reasonable agreement between model and published measurements. •Model and measurements agree that risks to humans are remote. •Model and measurements agree significant questions remain about risk to fish. •Need better understanding of temporal variations and their impact on fish. -- Our model and published measurements for estrogens suggest aquatic exposure rates for humans are below potential effect levels, but fish exposure sometimes exceeds published no-effect levels

  10. Comparison of ICRP Publication 30 lung model-based predictions with measured bioassay data for airborne natural UO2 exposure

    International Nuclear Information System (INIS)

    Thind, K.S.

    1987-01-01

    In this paper a comparison is made between the build-up of U thorax burdens and the predicted total lung (lung and lymph) burden, based on the lung model provided in ICRP Publication 30 for a group of 29 atomic radiation workers at a Canadian fuel fabrication facility. A similar comparison is made between the predicted ratio of the total lung burden to urinary excretion and the ratio obtained from bioassay data. The study period for the comparison is 5 y. The inhalation input for the lung model calculations was derived from air-sampling data and the choice of particle size activity median aerodynamic diameter (AMAD) was guided by particle size measurements made at representative work locations. The pulmonary clearance half-times studied were 100, 250 and 500 d. For the purpose of this comparison, averaged exposure and averaged bioassay data for the group were used. This comparison indicates that for the conditions of this facility, the assumption of a 500-d pulmonary clearance half-time and a particle size of 1 micron (AMAD) may be too conservative. It is suggested that measurements of air concentrations and particle size used as input parameters for the ICRP Publication 30 lung model may be used to calculate bioassay parameters which may then be tested against bioassay data obtained as part of an operational health physics program, thereby giving a useful step towards defining a derived air concentration value for U in the workplace

  11. The Optical Fractionator Technique to Estimate Cell Numbers in a Rat Model of Electroconvulsive Therapy

    DEFF Research Database (Denmark)

    Olesen, Mikkel Vestergaard; Needham, Esther Kjær; Pakkenberg, Bente

    2017-01-01

    are too high to count manually, and stereology is now the technique of choice whenever estimates of three-dimensional quantities need to be extracted from measurements on two-dimensional sections. All stereological methods are in principle unbiased; however, they rely on proper knowledge about...
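
    For context, the optical fractionator named in the title estimates total cell number as N = (1/ssf) · (1/asf) · (1/tsf) · ΣQ⁻; the sampling fractions and counts in the sketch below are invented for illustration.

```python
# The optical fractionator estimate in its usual form:
#   N = (1/ssf) * (1/asf) * (1/tsf) * sum(Q-)
# where ssf, asf and tsf are the section, area and thickness sampling
# fractions and Q- is the number of cells counted in the disectors.
# All numbers below are made up for illustration.
def optical_fractionator(q_minus_total, ssf, asf, tsf):
    return q_minus_total / (ssf * asf * tsf)

ssf = 1 / 10          # every 10th section sampled
asf = 0.02            # counting-frame area / x-y step area
tsf = 10 / 25         # disector height / mean section thickness
q_minus = 250         # cells counted across all disectors

print("estimated total cell number ~ %.3g" % optical_fractionator(q_minus, ssf, asf, tsf))
```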

  12. Over Time, Do Anthropometric Measures Still Predict Diabetes Incidence in Chinese Han Nationality Population from Chengdu Community?

    Directory of Open Access Journals (Sweden)

    Kai Liu

    2013-01-01

    Full Text Available Objective. To examine whether anthropometric measures could predict diabetes incidence in a Chinese population during a 15-year follow-up. Design and Methods. The data were collected in 1992 and then again in 2007 from the same group of 687 individuals. Waist circumference, body mass index, waist to hip ratio, and waist to height ratio were collected following a standard protocol. To assess the effects of baseline anthropometric measures on the new onset of diabetes, Cox proportional hazards regression models were used to estimate their hazard ratios, and the discriminatory power of the anthropometric measures for diabetes was assessed by the area under the receiver operating curve (AROC). Results. Seventy-four individuals were diagnosed with diabetes during the 15-year follow-up period (incidence: 10.8%). These anthropometric measures also predicted future diabetes during the long follow-up. At 7-8 years, the AROCs of central obesity measures (WC, WHpR, WHtR) were higher than that of the general obesity measure (BMI). However, there were no significant differences among the four anthropometric measurements at 15 years. Conclusions. These anthropometric measures could still predict diabetes over a long follow-up. However, the validity of anthropometric measures to predict incident diabetes may change with time.

  13. Integrative approaches to the prediction of protein functions based on the feature selection

    Directory of Open Access Journals (Sweden)

    Lee Hyunju

    2009-12-01

    Full Text Available Abstract Background Protein function prediction has been one of the most important issues in functional genomics. With the current availability of various genomic data sets, many researchers have attempted to develop integration models that combine all available genomic data for protein function prediction. These efforts have resulted in the improvement of prediction quality and the extension of prediction coverage. However, it has also been observed that integrating more data sources does not always increase the prediction quality. Therefore, selecting data sources that highly contribute to the protein function prediction has become an important issue. Results We present systematic feature selection methods that assess the contribution of genome-wide data sets to predict protein functions and then investigate the relationship between genomic data sources and protein functions. In this study, we use ten different genomic data sources in Mus musculus, including: protein-domains, protein-protein interactions, gene expressions, phenotype ontology, phylogenetic profiles and disease data sources to predict protein functions that are labelled with Gene Ontology (GO) terms. We then apply two approaches to feature selection: exhaustive search feature selection using a kernel-based logistic regression (KLR), and a kernel-based L1-norm regularized logistic regression (KL1LR). In the first approach, we exhaustively measure the contribution of each data set for each function based on its prediction quality. In the second approach, we use the estimated coefficients of features as measures of contribution of data sources. Our results show that the proposed methods improve the prediction quality compared to the full integration of all data sources and other filter-based feature selection methods. We also show that contributing data sources can differ depending on the protein function. Furthermore, we observe that highly contributing data sets can be similar among
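
    As a minimal illustration of the second (coefficient-based) approach, the sketch below fits an L1-regularized logistic regression on synthetic per-data-source scores and reads off the coefficient magnitudes as contribution measures; the paper's kernel-based KLR/KL1LR formulation is not reproduced, and the data-source names and scores are assumptions.

```python
# Sketch of the coefficient-based contribution idea with a plain (non-kernel)
# L1-regularized logistic regression on synthetic per-source scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
sources = ["domains", "ppi", "expression", "phenotype", "phylogeny", "disease"]
X = rng.normal(size=(500, len(sources)))                 # one score per source per gene
logit = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 4]    # only some sources informative
y = (logit + rng.normal(0, 1, 500) > 0).astype(int)      # gene has the GO term or not

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
for name, coef in zip(sources, model.coef_[0]):
    print("%-10s contribution (|coef|): %.3f" % (name, abs(coef)))
```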

  14. MRI volumetry of prefrontal cortex

    Science.gov (United States)

    Sheline, Yvette I.; Black, Kevin J.; Lin, Daniel Y.; Pimmel, Joseph; Wang, Po; Haller, John W.; Csernansky, John G.; Gado, Mokhtar; Walkup, Ronald K.; Brunsden, Barry S.; Vannier, Michael W.

    1995-05-01

    Prefrontal cortex volumetry by brain magnetic resonance (MR) is required to estimate changes postulated to occur in certain psychiatric and neurologic disorders. A semiautomated method with quantitative characterization of its performance is sought to reliably distinguish small prefrontal cortex volume changes within individuals and between groups. Stereological methods were tested by a blinded comparison of measurements applied to 3D MR scans obtained using an MPRAGE protocol. Fixed grid stereologic methods were used to estimate prefrontal cortex volumes on a graphic workstation, after the images were scaled from 16 to 8 bits using a histogram method. In addition, images were resliced into coronal sections perpendicular to the bicommissural plane. Prefrontal cortex volumes were defined as all sections of the frontal lobe anterior to the anterior commissure. Ventricular volumes were excluded. Stereological measurement yielded high repeatability and precision, and was time efficient for the raters. The coefficient of error was small, and prefrontal cortex volumetry by stereology can yield accurate and repeatable measurements. Small frontal lobe volume reductions in patients with brain disorders such as depression and schizophrenia can be efficiently assessed using this method.

  15. Comparative Study of foF2 Measurements with IRI-2007 Model Predictions During Extended Solar Minimum

    Science.gov (United States)

    Zakharenkova, I. E.; Krankowski, A.; Bilitza, D.; Cherniak, Iu.V.; Shagimuratov, I.I.; Sieradzki, R.

    2013-01-01

    The unusually deep and extended solar minimum of cycle 23/24 made it very difficult to predict the solar indices 1 or 2 years into the future. Most of the predictions were proven wrong by the actual observed indices. IRI gets its solar, magnetic, and ionospheric indices from an indices file that is updated twice a year. In recent years, due to the unusual solar minimum, predictions had to be corrected downward with every new indices update. In this paper we analyse how much the uncertainties in the predictability of solar activity indices affect the IRI outcome and how the IRI values calculated with predicted and observed indices compare to the actual measurements. Monthly median values of F2 layer critical frequency (foF2) derived from the ionosonde measurements at the mid-latitude ionospheric station Juliusruh were compared with the International Reference Ionosphere (IRI-2007) model predictions. The analysis found that IRI provides reliable results that compare well with actual measurements when the definite (observed and adjusted) indices of solar activity are used, while IRI values based on earlier predictions of these indices noticeably overestimated the measurements during the solar minimum. One of the principal objectives of this paper is to direct the attention of IRI users to updating their solar activity indices files regularly. Use of an older index file can lead to serious IRI overestimations of F-region electron density during the recent extended solar minimum.

  16. New prediction of chaotic time series based on local Lyapunov exponent

    International Nuclear Information System (INIS)

    Zhang Yong

    2013-01-01

    A new method of predicting chaotic time series is presented based on a local Lyapunov exponent, by quantitatively measuring the exponential rate of separation or attraction of two infinitely close trajectories in state space. After reconstructing state space from one-dimensional chaotic time series, neighboring multiple-state vectors of the predicting point are selected to deduce the prediction formula by using the definition of the local Lyapunov exponent. Numerical simulations are carried out to test its effectiveness and verify its higher precision over two older methods. The effects of the number of referential state vectors and added noise on forecasting accuracy are also studied numerically. (general)

  17. Rutting Prediction in Asphalt Pavement Based on Viscoelastic Theory

    Directory of Open Access Journals (Sweden)

    Nahi Mohammed Hadi

    2016-01-01

    Full Text Available Rutting is one of the most disruptive failures of asphalt roads because of the disturbance it causes to drivers. Predicting asphalt pavement rutting is an essential tool that leads to better asphalt mixture design. This work describes a method of predicting the behaviour of various asphalt pavement mixes and linking these to accelerated performance testing. The objective of this study is to develop a finite element model based on viscoplastic theory for simulating the laboratory testing of asphalt mixes in the Hamburg Wheel Rut Tester (HWRT) for rutting. The creep parameters C1, C2 and C3 are developed from the triaxial repeated load creep test at 50°C and at a frequency of 1 Hz, and the modulus of elasticity and Poisson's ratio are determined at the same temperature. A viscoelastic model (creep model) is adopted using a FE simulator (ANSYS) in order to calculate the rutting for various mixes under a uniform loading pressure of 500 kPa. An eight-node element with three degrees of freedom (UX, UY, and UZ) is used for the simulation. The creep model developed for the HWRT tester was verified by comparing the predicted rut depths with the measured ones and by comparing the rut depth with an ABAQUS result from the literature. Reasonable agreement was obtained between the predicted rut depths and the measured ones. Moreover, it is found that creep model parameters C1 and C3 have a strong relationship with rutting, and that the parameter C1 influences rutting more strongly than the parameter C3. Finally, it can be concluded that a creep model based on the finite element method can be used as an effective tool to analyse rutting of asphalt pavements.

  18. The prediction of cyclic proximal humerus fracture fixation failure by various bone density measures.

    Science.gov (United States)

    Varga, Peter; Grünwald, Leonard; Windolf, Markus

    2018-02-22

    Fixation of osteoporotic proximal humerus fractures has remained challenging, but may be improved by careful pre-operative planning. The aim of this study was to investigate how well the failure of locking plate fixation of osteoporotic proximal humerus fractures can be predicted by bone density measures assessed with currently available clinical imaging (realistic case) and a higher resolution and quality modality (theoretical best-case). Various density measures were correlated to experimentally assessed number of cycles to construct failure of plated unstable low-density proximal humerus fractures (N = 18). The influence of density evaluation technique was investigated by comparing local (peri-implant) versus global evaluation regions; HR-pQCT-based versus clinical QCT-based image data; ipsilateral versus contralateral side; and bone mineral content (BMC) versus bone mineral density (BMD). All investigated density measures were significantly correlated with the experimental cycles to failure. The best performing clinically feasible parameter was the QCT-based BMC of the contralateral articular cap region, providing significantly better correlation (R² = 0.53) compared to a previously proposed clinical density measure (R² = 0.30). BMC had consistently, but not significantly, stronger correlations with failure than BMD. The overall best results were obtained with the ipsilateral HR-pQCT-based local BMC (R² = 0.74) that may be used for implant optimization. Strong correlations were found between the corresponding density measures of the two CT image sources, as well as between the two sides. Future studies should investigate if BMC of the contralateral articular cap region could provide improved prediction of clinical fixation failure compared to previously proposed measures. © 2018 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res.

  19. Uncertainty Quantification and Comparison of Weld Residual Stress Measurements and Predictions.

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.
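    A simplified stand-in for the report's procedure (which is semi-parametric and also covers tolerance bounds): the sketch below builds a pointwise percentile-bootstrap confidence band for the mean residual-stress profile from a set of synthetic measured profiles.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic residual-stress profiles: rows = repeated measurements, columns = depth points
depth = np.linspace(0.0, 1.0, 25)                  # normalized through-wall depth
true_profile = 300.0 * np.cos(2.0 * np.pi * depth) # MPa, illustrative shape only
profiles = true_profile + rng.normal(0.0, 40.0, size=(12, depth.size))

def bootstrap_mean_band(samples, n_boot=2000, alpha=0.05):
    """Pointwise percentile-bootstrap confidence band for the mean profile."""
    n = samples.shape[0]
    boot_means = np.empty((n_boot, samples.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)           # resample whole profiles with replacement
        boot_means[b] = samples[idx].mean(axis=0)
    lower = np.percentile(boot_means, 100.0 * alpha / 2.0, axis=0)
    upper = np.percentile(boot_means, 100.0 * (1.0 - alpha / 2.0), axis=0)
    return samples.mean(axis=0), lower, upper

mean, lo, hi = bootstrap_mean_band(profiles)
print(f"mean at mid-wall: {mean[12]:.1f} MPa, 95% band: [{lo[12]:.1f}, {hi[12]:.1f}] MPa")
```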

  20. Predicting personal exposure to airborne carbonyls using residential measurements and time/activity data

    Science.gov (United States)

    Liu, Weili; Zhang, Junfeng (Jim); Korn, Leo R.; Zhang, Lin; Weisel, Clifford P.; Turpin, Barbara; Morandi, Maria; Stock, Tom; Colome, Steve

    As a part of the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study, 48 h integrated residential indoor, outdoor, and personal exposure concentrations of 10 carbonyls were simultaneously measured in 234 homes selected from three US cities using the Passive Aldehydes and Ketones Samplers (PAKS). In this paper, we examine the feasibility of using residential indoor concentrations to predict personal exposures to carbonyls. Based on paired t-tests, the means of indoor concentrations were not different from those of personal exposure concentrations for eight out of the 10 measured carbonyls, indicating that indoor carbonyl concentrations, in general, predicted the central tendency of personal exposure concentrations well. In a linear regression model, indoor concentrations explained 47%, 55%, and 65% of personal exposure variance for formaldehyde, acetaldehyde, and hexaldehyde, respectively. The predictability of indoor concentrations for cross-individual variability in personal exposure to the other carbonyls was poorer, with a smaller fraction of the variance in personal exposure concentrations explained. It was found that activities related to driving a vehicle and performing yard work had significant impacts on personal exposures to a few carbonyls.

  1. Base Oils Biodegradability Prediction with Data Mining Techniques

    Directory of Open Access Journals (Sweden)

    Malika Trabelsi

    2010-02-01

    Full Text Available In this paper, we apply various data mining techniques, including continuous numeric and discrete classification prediction models of base oil biodegradability, with emphasis on improving prediction accuracy. The results show that highly biodegradable oils can be better predicted through numeric models. In contrast, classification models did not uncover a similar dichotomy. With the exception of Memory Based Reasoning and Decision Trees, the tested classification techniques achieved high classification accuracy. However, the Decision Tree technique helped uncover the most significant predictor. A simple classification rule derived from this predictor resulted in good classification accuracy. The application of this rule enables efficient classification of base oils into either low or high biodegradability classes with high accuracy. For the latter, a higher precision biodegradability prediction can be obtained using continuous modeling techniques.
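    A toy sketch of deriving a single-threshold classification rule with a depth-1 decision tree follows; the descriptor, data and class boundary are synthetic assumptions, not the paper's predictors.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# synthetic data: one hypothetical molecular descriptor dominates the biodegradability class
descriptor = rng.uniform(0.0, 1.0, size=(300, 1))
other_features = rng.normal(size=(300, 4))
X = np.hstack([descriptor, other_features])
y = (descriptor[:, 0] > 0.55).astype(int)          # 1 = highly biodegradable (toy rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# a depth-1 tree recovers a single-threshold rule on the most significant predictor
stump = DecisionTreeClassifier(max_depth=1).fit(X_tr, y_tr)
print("rule accuracy:", accuracy_score(y_te, stump.predict(X_te)))
print("split feature index:", stump.tree_.feature[0],
      "threshold:", round(stump.tree_.threshold[0], 3))
```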

  2. Baseline frontostriatal-limbic connectivity predicts reward-based memory formation.

    Science.gov (United States)

    Hamann, Janne M; Dayan, Eran; Hummel, Friedhelm C; Cohen, Leonardo G

    2014-12-01

    Reward mediates the acquisition and long-term retention of procedural skills in humans. Yet, learning under rewarded conditions is highly variable across individuals and the mechanisms that determine interindividual variability in rewarded learning are not known. We postulated that baseline functional connectivity in a large-scale frontostriatal-limbic network could predict subsequent interindividual variability in rewarded learning. Resting-state functional MRI was acquired in two groups of subjects (n = 30) who then trained on a visuomotor procedural learning task with or without reward feedback. We then tested whether baseline functional connectivity within the frontostriatal-limbic network predicted memory strength measured immediately, 24 h and 1 month after training in both groups. We found that connectivity in the frontostriatal-limbic network predicted interindividual variability in the rewarded but not in the unrewarded learning group. Prediction was strongest for long-term memory. Similar links between connectivity and reward-based memory were absent in two control networks, a fronto-parieto-temporal language network and the dorsal attention network. The results indicate that baseline functional connectivity within the frontostriatal-limbic network successfully predicts long-term retention of rewarded learning. © 2014 Wiley Periodicals, Inc.

  3. Validity of impedance-based predictions of total body water as measured by 2H dilution in African HIV/AIDS outpatients

    International Nuclear Information System (INIS)

    Diouf, Adama; Idohou Dossou, Nicole; Wade, Salimata; Gartner, Agnes; Sanon, Dominique Alexis; Bluck, Les; Wright, Antony

    2009-01-01

    Measurements of body composition are crucial in identifying HIV-infected patients at risk of malnutrition. No information is available on the validity of indirect body composition methods in African HIV-infected outpatients. Our aim was to test the validity of fifteen published equations, developed in whites, African-Americans and/or Africans who were or were not HIV-infected, for predicting total body water (TBW) from bioelectrical impedance analysis (BIA) in HIV-infected patients. The second aim was to develop specific predictive equations. Thirty-four HIV-infected patients without antiretroviral treatment and oedema at the beginning of the study (age 39 (SD 7) years, BMI 18.7 (SD 3.7) kg/m2, TBW 30.4 (SD 7.2) kg) were measured at inclusion and then 3 and 6 months later. In the resulting eighty-eight measurements, we compared TBW values predicted from BIA to those measured by 2H dilution. The range of bias values was 0.1-4.3, and errors showed acceptable values (2.2-3.4 kg) for fourteen equations and a high value (10.4) for one equation. Two equations developed in non-HIV-infected subjects showed non-significant bias and could be used in African HIV-infected patients. In the other cases, poor agreement indicated a lack of validity. Specific equations developed from our sample showed a higher precision of TBW prediction when using resistance at 1000 kHz (1.7 kg) than at 50 kHz (2.3 kg), this latter precision being similar to that of the valid published equations (2.3 and 2.8 kg). The valid published or developed predictive equations should be cross-validated in large independent samples of African HIV-infected patients. (Authors)
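    A generic sketch of how such predictive equations are checked against the 2H-dilution reference (bias, error as the SD of differences, and Bland-Altman limits of agreement) on synthetic paired data; the numbers are invented and only mirror the reported sample size and TBW spread.

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic paired data: TBW by 2H dilution vs. TBW predicted from a published BIA equation
tbw_dilution = rng.normal(30.4, 7.2, 88)                 # kg
tbw_predicted = tbw_dilution + rng.normal(1.2, 2.5, 88)  # equation with some bias and error

def agreement(measured, predicted):
    diff = predicted - measured
    bias = diff.mean()                                   # mean difference (kg)
    error = diff.std(ddof=1)                             # SD of differences (kg)
    loa = (bias - 1.96 * error, bias + 1.96 * error)     # Bland-Altman 95% limits of agreement
    return bias, error, loa

bias, error, loa = agreement(tbw_dilution, tbw_predicted)
print(f"bias = {bias:.2f} kg, error = {error:.2f} kg, 95% LoA = [{loa[0]:.2f}, {loa[1]:.2f}] kg")
```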

  4. Prediction of quantitative phenotypes based on genetic networks: a case study in yeast sporulation

    Directory of Open Access Journals (Sweden)

    Shen Li

    2010-09-01

    Full Text Available Abstract Background An exciting application of genetic network is to predict phenotypic consequences for environmental cues or genetic perturbations. However, de novo prediction for quantitative phenotypes based on network topology is always a challenging task. Results Using yeast sporulation as a model system, we have assembled a genetic network from literature and exploited Boolean network to predict sporulation efficiency change upon deleting individual genes. We observe that predictions based on the curated network correlate well with the experimentally measured values. In addition, computational analysis reveals the robustness and hysteresis of the yeast sporulation network and uncovers several patterns of sporulation efficiency change caused by double gene deletion. These discoveries may guide future investigation of underlying mechanisms. We have also shown that a hybridized genetic network reconstructed from both temporal microarray data and literature is able to achieve a satisfactory prediction accuracy of the same quantitative phenotypes. Conclusions This case study illustrates the value of predicting quantitative phenotypes based on genetic network and provides a generic approach.
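    A toy synchronous Boolean network illustrates how a gene deletion can be simulated by clamping a node to False and reading out a phenotype node; the genes and update rules below are hypothetical, not the curated sporulation network of the paper.

```python
# Hypothetical Boolean network with an output node standing in for sporulation.
RULES = {
    "IME1": lambda s: not s["RME1"],
    "RME1": lambda s: s["RME1"],                  # treated as a fixed external input
    "IME2": lambda s: s["IME1"],
    "NDT80": lambda s: s["IME2"] and not s["SUM1"],
    "SUM1": lambda s: not s["IME2"],
    "SPORE": lambda s: s["NDT80"],                # phenotype read-out
}

def simulate(deleted=(), steps=20):
    """Synchronously update all nodes; deleted genes are clamped to False."""
    state = {g: False for g in RULES}
    for _ in range(steps):
        nxt = {g: (False if g in deleted else rule(state)) for g, rule in RULES.items()}
        if nxt == state:                          # reached a fixed point
            break
        state = nxt
    return state

print("wild type sporulates:      ", simulate()["SPORE"])
print("NDT80 deletion sporulates: ", simulate(deleted=("NDT80",))["SPORE"])
```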

  5. Predicting physical health: implicit mental health measures versus self-report scales.

    Science.gov (United States)

    Cousineau, Tara McKee; Shedler, Jonathan

    2006-06-01

    Researchers have traditionally relied on self-report questionnaires to assess psychological well-being, but such measures may be unable to differentiate individuals who are genuinely psychologically healthy from those who maintain a facade or illusion of mental health based on denial and self-deception. Prior research suggests that clinically derived assessment procedures that assess implicit psychological processes may have advantages over self-report mental health measures. This prospective study compared the Early Memory Index, an implicit measure of mental health/distress, with a range of familiar self-report scales as predictors of physical health. The Early Memory Index showed significant prospective associations with health service utilization and clinically verified illness. In contrast, self-report measures of mental health, perceived stress, life events stress, and mood states did not predict health outcomes. The findings highlight the limitations of self-report questionnaires and suggest that implicit measures have an important role to play in mental health research.

  6. Prediction of spontaneous ureteral stone passage: Automated 3D-measurements perform equal to radiologists, and linear measurements equal to volumetric.

    Science.gov (United States)

    Jendeberg, Johan; Geijer, Håkan; Alshamari, Muhammed; Lidén, Mats

    2018-01-24

    To compare the ability of different size estimates to predict spontaneous passage of ureteral stones using a 3D-segmentation and to investigate the impact of manual measurement variability on the prediction of stone passage. We retrospectively included 391 consecutive patients with ureteral stones on non-contrast-enhanced CT (NECT). Three-dimensional segmentation size estimates were compared to the mean of three radiologists' measurements. Receiver-operating characteristic (ROC) analysis was performed for the prediction of spontaneous passage for each estimate. The difference in predicted passage probability between the manual estimates in upper and lower stones was compared. The area under the ROC curve (AUC) for the measurements ranged from 0.88 to 0.90. Between the automated 3D algorithm and the manual measurements the 95% limits of agreement were 0.2 ± 1.4 mm for the width. The manual bone window measurements resulted in a > 20 percentage point (ppt) difference between the readers in the predicted passage probability in 44% of the upper and 6% of the lower ureteral stones. All automated 3D algorithm size estimates independently predicted the spontaneous stone passage with similar high accuracy as the mean of three readers' manual linear measurements. Manual size estimation of upper stones showed large inter-reader variations for spontaneous passage prediction. • An automated 3D technique predicts spontaneous stone passage with high accuracy. • Linear, areal and volumetric measurements performed similarly in predicting stone passage. • Reader variability has a large impact on the predicted prognosis for stone passage.

  7. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    Science.gov (United States)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually considered a source of fluctuation in near-infrared spectral measurement. Chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can be considered a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method was proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method was proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models based on the random sampling method and the proposed methods. The results from experimental studies showed that the prediction performance was improved by using the proposed methods. Therefore, the MTCS and DTCS methods offer alternative ways to improve prediction accuracy in near-infrared spectral measurement.
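    A rough sketch of the general idea (choosing calibration samples that span the temperature range before fitting a PLS model) follows; it is not the authors' MTCS/DTCS implementation, and the spectra, temperatures and selection rule are synthetic assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)

# synthetic NIR-like data: analyte band plus a temperature-dependent baseline shift
n, wl = 200, 60
conc = rng.uniform(0.0, 1.0, n)
temp = rng.uniform(20.0, 45.0, n)
band = np.exp(-0.5 * ((np.arange(wl) - 30) / 8.0) ** 2)
X = np.outer(conc, band) + 0.002 * np.outer(temp, np.ones(wl)) + rng.normal(0, 0.01, (n, wl))

pool, test = np.arange(0, 150), np.arange(150, 200)

def spanning_subset(temps, idx, size=60):
    """Pick calibration samples spread evenly over the observed temperature range."""
    order = idx[np.argsort(temps[idx])]
    return order[np.linspace(0, len(order) - 1, size).astype(int)]

for name, cal in [("random selection", rng.choice(pool, 60, replace=False)),
                  ("temperature-spanning", spanning_subset(temp, pool, 60))]:
    pls = PLSRegression(n_components=4).fit(X[cal], conc[cal])
    rmsep = mean_squared_error(conc[test], pls.predict(X[test]).ravel()) ** 0.5
    print(f"{name:22s} RMSEP = {rmsep:.4f}")
```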

  8. A Method of Calculating Functional Independence Measure at Discharge from Functional Independence Measure Effectiveness Predicted by Multiple Regression Analysis Has a High Degree of Predictive Accuracy.

    Science.gov (United States)

    Tokunaga, Makoto; Watanabe, Susumu; Sonoda, Shigeru

    2017-09-01

    Multiple linear regression analysis is often used to predict the outcome of stroke rehabilitation. However, the predictive accuracy may not be satisfactory. The objective of this study was to elucidate the predictive accuracy of a method of calculating motor Functional Independence Measure (mFIM) at discharge from mFIM effectiveness predicted by multiple regression analysis. The subjects were 505 patients with stroke who were hospitalized in a convalescent rehabilitation hospital. The formula "mFIM at discharge = mFIM effectiveness × (91 points - mFIM at admission) + mFIM at admission" was used. By including the predicted mFIM effectiveness obtained through multiple regression analysis in this formula, we obtained the predicted mFIM at discharge (A). We also used multiple regression analysis to directly predict mFIM at discharge (B). The correlation between the predicted and the measured values of mFIM at discharge was compared between A and B. The correlation coefficients were .916 for A and .878 for B. Calculating mFIM at discharge from mFIM effectiveness predicted by multiple regression analysis had a higher degree of predictive accuracy of mFIM at discharge than that directly predicted. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
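    The quoted formula can be evaluated directly; a minimal sketch with a hypothetical patient (admission mFIM of 40 points and a regression-predicted effectiveness of 0.6):

```python
def predicted_mfim_at_discharge(mfim_admission, predicted_effectiveness, mfim_max=91):
    """mFIM at discharge = effectiveness * (91 - mFIM at admission) + mFIM at admission."""
    return predicted_effectiveness * (mfim_max - mfim_admission) + mfim_admission

print(predicted_mfim_at_discharge(40, 0.6))   # -> 70.6 points
```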

  9. Measurement and prediction of thermochemical history effects on sensitization development in austenitic stainless steels

    International Nuclear Information System (INIS)

    Bruemmer, S.M.; Charlot, L.A.

    1985-11-01

    The effects of thermal and thermomechanical treatments on sensitization development in Type 304 and 316 stainless steels have been measured and compared to model predictions. Sensitization development resulting from isothermal, continuous cooling and pipe welding treatments has been evaluated. An empirically modified, theoretically based model is shown to accurately predict material degree of sensitization (DOS) as expressed by the electrochemical potentiokinetic reactivation (EPR) test after both simple and complex treatments. Material DOS is also examined using analytical electron microscopy to document grain boundary chromium depletion and is compared to EPR test results

  10. Application of prediction of equilibrium to servo-controlled calorimetry measurements

    International Nuclear Information System (INIS)

    Mayer, R.L. II

    1987-01-01

    Research was performed to develop an endpoint prediction algorithm for use with calorimeters operating in the digital servo-controlled mode. The purpose of this work was to reduce calorimetry measurement times while maintaining the high degree of precision and low bias expected from calorimetry measurements. Data from routine operation of two calorimeters were used to test predictive models at each stage of development against time savings, precision, and robustness criteria. The results of the study indicated that calorimetry measurement times can be significantly reduced using this technique. The time savings is, however, dependent on parameters in the digital servo-control algorithm and on packaging characteristics of measured items

  11. Automatic total kidney volume measurement on follow-up magnetic resonance images to facilitate monitoring of autosomal dominant polycystic kidney disease progression.

    Science.gov (United States)

    Kline, Timothy L; Korfiatis, Panagiotis; Edwards, Marie E; Warner, Joshua D; Irazabal, Maria V; King, Bernard F; Torres, Vicente E; Erickson, Bradley J

    2016-02-01

    Renal imaging examinations provide high-resolution information about the anatomic structure of the kidneys and are used to measure total kidney volume (TKV) in autosomal dominant polycystic kidney disease (ADPKD) patients. TKV has become the gold-standard image biomarker for ADPKD progression at early stages of the disease and is used in clinical trials to characterize treatment efficacy. Automated methods to segment the kidneys and measure TKV are desirable because of the long time requirement for manual approaches such as stereology or planimetry tracings. However, ADPKD kidney segmentation is complicated by a number of factors, including irregular kidney shapes and variable tissue signal at the kidney borders. We describe an image processing approach that overcomes these problems by using a baseline segmentation initialization to provide automatic segmentation of follow-up scans obtained years apart. We validated our approach using 20 patients with complete baseline and follow-up T1-weighted magnetic resonance images. Both manual tracing and stereology were used to calculate TKV, with two observers performing manual tracings and one observer performing repeat tracings. Linear correlation and Bland-Altman analysis were performed to compare the different approaches. Our automated approach measured TKV at a level of accuracy (mean difference ± standard error = 0.99 ± 0.79%) on par with both intraobserver (0.77 ± 0.46%) and interobserver variability (1.34 ± 0.70%) of manual tracings. All approaches had excellent agreement and compared favorably with ground-truth manual tracing with interobserver, stereological and automated approaches having 95% confidence intervals ∼ ± 100 mL. Our method enables fast, cost-effective and reproducible quantification of ADPKD progression that will facilitate and lower the costs of clinical trials in ADPKD and other disorders requiring accurate, longitudinal kidney quantification. In addition, it will hasten the routine use of

  12. Earthquake Prediction Analysis Based on Empirical Seismic Rate: The M8 Algorithm

    International Nuclear Information System (INIS)

    Molchan, G.; Romashkova, L.

    2010-07-01

    The quality of space-time earthquake prediction is usually characterized by a two-dimensional error diagram (n, τ), where n is the rate of failures-to-predict and τ is the normalized measure of space-time alarm. The most reasonable space measure for analysis of a prediction strategy is the rate of target events λ(dg) in a sub-area dg. In that case the quantity H = 1 − (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M ≥ 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw ≥ 5.5, 1977-2004, and the magnitude range of target events 8.0 ≤ M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm. (author)
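    As a toy illustration of these quantities (not of the M8 algorithm itself), the sketch below computes n, τ and H = 1 − (n + τ) from hypothetical alarm and event grids, counting a target event as predicted when its space-time cell is under alarm and taking λ(dg) as uniform.

```python
import numpy as np

def error_diagram(alarm, events, cell_rate):
    """n = fraction of target events missed; tau = rate-weighted space-time alarm fraction.

    alarm, events: boolean arrays of shape (n_cells, n_times)
    cell_rate:     lambda(dg), expected target-event rate per spatial cell (sums to 1)
    """
    n_miss = 1.0 - (alarm & events).sum() / events.sum()
    tau = (alarm.mean(axis=1) * cell_rate).sum()
    return n_miss, tau, 1.0 - (n_miss + tau)

rng = np.random.default_rng(3)
alarm = rng.random((50, 120)) < 0.2        # alarms cover about 20% of space-time
events = rng.random((50, 120)) < 0.01      # sparse target events
rate = np.ones(50) / 50                    # uniform lambda(dg) for the sketch
n_miss, tau, H = error_diagram(alarm, events, rate)
print(f"n = {n_miss:.2f}, tau = {tau:.2f}, H = {H:.2f}")
```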

  13. Prediction of insulin resistance with anthropometric measures: lessons from a large adolescent population

    Directory of Open Access Journals (Sweden)

    Wedin WK

    2012-07-01

    Full Text Available William K Wedin,1 Lizmer Diaz-Gimenez,1 Antonio J Convit1,2 (1Department of Psychiatry, NYU School of Medicine, New York, NY, USA; 2Nathan Kline Institute, Orangeburg, NY, USA). Objective: The aim of this study was to describe the minimum number of anthropometric measures that will optimally predict insulin resistance (IR) and to characterize the utility of these measures among obese and nonobese adolescents. Research design and methods: Six anthropometric measures (selected from three categories: central adiposity, weight, and body composition) were measured from 1298 adolescents attending two New York City public high schools. Body composition was determined by bioelectric impedance analysis (BIA). The homeostatic model assessment of IR (HOMA-IR), based on fasting glucose and insulin concentrations, was used to estimate IR. Stepwise linear regression analyses were performed to predict HOMA-IR based on the six selected measures, while controlling for age. Results: The stepwise regression retained both waist circumference (WC) and percentage of body fat (BF%). Notably, BMI was not retained. WC was a stronger predictor of HOMA-IR than BMI was. A regression model using solely WC performed best among the obese II group, while a model using solely BF% performed best among the lean group. Receiver operator characteristic curves showed the WC and BF% model to be more sensitive in detecting IR than BMI, but with less specificity. Conclusion: WC combined with BF% was the best predictor of HOMA-IR. This finding can be attributed partly to the ability of BF% to model HOMA-IR among leaner participants and to the ability of WC to model HOMA-IR among participants who are more obese. BMI was comparatively weak in predicting IR, suggesting that assessments that are more comprehensive and include body composition analysis could increase detection of IR during adolescence, especially among those who are lean, yet insulin-resistant. Keywords: BMI, bioelectrical impedance

  14. Predicting Social Anxiety Treatment Outcome Based on Therapeutic Email Conversations.

    Science.gov (United States)

    Hoogendoorn, Mark; Berger, Thomas; Schulz, Ava; Stolz, Timo; Szolovits, Peter

    2017-09-01

    Predicting therapeutic outcome in the mental health domain is of utmost importance to enable therapists to provide the most effective treatment to a patient. Using information from the writings of a patient can potentially be a valuable source of information, especially now that more and more treatments involve computer-based exercises or electronic conversations between patient and therapist. In this paper, we study predictive modeling using writings of patients under treatment for a social anxiety disorder. We extract a wealth of information from the text written by patients including their usage of words, the topics they talk about, the sentiment of the messages, and the style of writing. In addition, we study trends over time with respect to those measures. We then apply machine learning algorithms to generate the predictive models. Based on a dataset of 69 patients, we are able to show that we can predict therapy outcome with an area under the curve of 0.83 halfway through the therapy and with a precision of 0.78 when using the full data (i.e., the entire treatment period). Due to the limited number of participants, it is hard to generalize the results, but they do show great potential in this type of information.
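    A minimal sketch of outcome prediction from patient writings using TF-IDF word features, logistic regression and cross-validated AUC; the texts, labels and the feature/classifier choice are invented stand-ins, not the study's pipeline (which also used topics, sentiment and writing style over time).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# toy stand-in for patient writings and therapy outcomes (all invented)
texts = [
    "i managed to speak up in the meeting and it went fine",
    "i avoided the party again and felt terrible afterwards",
    "the exercises are helping, i feel calmer around people",
    "i could not leave the house this week, everything is too much",
    "small talk with a colleague went better than expected",
    "i keep worrying that everyone is judging me constantly",
    "presenting in class was scary but i did it anyway",
    "i cancelled the appointment because i panicked",
] * 10
labels = [1, 0, 1, 0, 1, 0, 1, 0] * 10     # 1 = good therapy outcome

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
scores = cross_val_predict(model, texts, labels, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(labels, scores), 3))
```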

  15. An Improved Dissonance Measure Based on Auditory Memory

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Hjortkjær, Jens

    2012-01-01

    Dissonance is an important feature in music audio analysis. We present here a dissonance model that accounts for the temporal integration of dissonant events in auditory short-term memory. We compare the memory-based dissonance extracted from musical audio sequences to the response of human listeners. In a number of tests, the memory model predicts listeners' response better than traditional dissonance measures.

  16. Deep learning predictions of survival based on MRI in amyotrophic lateral sclerosis.

    Science.gov (United States)

    van der Burgh, Hannelore K; Schmidt, Ruben; Westeneng, Henk-Jan; de Reus, Marcel A; van den Berg, Leonard H; van den Heuvel, Martijn P

    2017-01-01

    Amyotrophic lateral sclerosis (ALS) is a progressive neuromuscular disease, with large variation in survival between patients. Currently, it remains rather difficult to predict survival based on clinical parameters alone. Here, we set out to use clinical characteristics in combination with MRI data to predict survival of ALS patients using deep learning, a machine learning technique highly effective in a broad range of big-data analyses. A group of 135 ALS patients was included from whom high-resolution diffusion-weighted and T1-weighted images were acquired at the first visit to the outpatient clinic. Next, each of the patients was monitored carefully and survival time to death was recorded. Patients were labeled as short, medium or long survivors, based on their recorded time to death as measured from the time of disease onset. In the deep learning procedure, the total group of 135 patients was split into a training set for deep learning (n = 83 patients), a validation set (n = 20) and an independent evaluation set (n = 32) to evaluate the performance of the obtained deep learning networks. Deep learning based on clinical characteristics predicted survival category correctly in 68.8% of the cases. Deep learning based on MRI predicted 62.5% correctly using structural connectivity and 62.5% using brain morphology data. Notably, when we combined the three sources of information, deep learning prediction accuracy increased to 84.4%. Taken together, our findings show the added value of MRI with respect to predicting survival in ALS, demonstrating the advantage of deep learning in disease prognostication.

  17. Estimating Time-Varying PCB Exposures Using Person-Specific Predictions to Supplement Measured Values: A Comparison of Observed and Predicted Values in Two Cohorts of Norwegian Women

    Science.gov (United States)

    Nøst, Therese Haugdahl; Breivik, Knut; Wania, Frank; Rylander, Charlotta; Odland, Jon Øyvind; Sandanger, Torkjel Manning

    2015-01-01

    Background Studies on the health effects of polychlorinated biphenyls (PCBs) call for an understanding of past and present human exposure. Time-resolved mechanistic models may supplement information on concentrations in individuals obtained from measurements and/or statistical approaches if they can be shown to reproduce empirical data. Objectives Here, we evaluated the capability of one such mechanistic model to reproduce measured PCB concentrations in individual Norwegian women. We also assessed individual life-course concentrations. Methods Concentrations of four PCB congeners in pregnant (n = 310, sampled in 2007–2009) and postmenopausal (n = 244, 2005) women were compared with person-specific predictions obtained using CoZMoMAN, an emission-based environmental fate and human food-chain bioaccumulation model. Person-specific predictions were also made using statistical regression models including dietary and lifestyle variables and concentrations. Results CoZMoMAN accurately reproduced medians and ranges of measured concentrations in the two study groups. Furthermore, rank correlations between measurements and predictions from both CoZMoMAN and regression analyses were strong (Spearman’s r > 0.67). Precision in quartile assignments from predictions was strong overall as evaluated by weighted Cohen’s kappa (> 0.6). Simulations indicated large inter-individual differences in concentrations experienced in the past. Conclusions The mechanistic model reproduced all measurements of PCB concentrations within a factor of 10, and subject ranking and quartile assignments were overall largely consistent, although they were weak within each study group. Contamination histories for individuals predicted by CoZMoMAN revealed variation between study subjects, particularly in the timing of peak concentrations. Mechanistic models can provide individual PCB exposure metrics that could serve as valuable supplements to measurements.

  18. Comparison between laboratory measurements, simulations, and analytical predictions of the transverse wall impedance at low frequencies

    CERN Document Server

    Roncarolo, F; Kroyer, T; Metral, E; Mounet, N; Salvant, B; Zotter, B

    2009-01-01

    The prediction of the transverse wall beam impedance at the first unstable betatron line (8 kHz) of the CERN Large Hadron Collider (LHC) is of paramount importance for understanding and controlling the related coupled-bunch instabilities. Until now only novel analytical formulas were available at this frequency. Recently, laboratory measurements and numerical simulations were performed to cross-check the analytical predictions. The experimental results based on the measurement of the variation of a probe coil inductance in the presence of (i) sample graphite plates, (ii) stand-alone LHC collimator jaws, and (iii) a full LHC collimator assembly are presented in detail. The measurement results are compared to both analytical theories and simulations. In addition, the consequences for the understanding of the LHC impedance are discussed.

  19. The predictive value of different infant attachment measures for socioemotional development at age 5 years

    NARCIS (Netherlands)

    Smeekens, S.; Riksen-Walraven, J.M.A.; Bakel, H.J.A. van

    2009-01-01

    The predictive value of different infant attachment measures was examined in a community-based sample of 111 healthy children (59 boys, 52 girls). Two procedures to assess infant attachment, the Attachment Q-Set (applied on a relatively short observation period) and a shortened version of the

  20. Ship Attitude Prediction Based on Input Delay Neural Network and Measurements of Gyroscopes

    DEFF Research Database (Denmark)

    Wang, Yunlong; N. Soltani, Mohsen; Hussain, Dil muhammed Akbar

    2017-01-01

    sampled in a ship simulation hardware system. Moreover, the factors that affect the prediction performance are also explored through a set of experiments. The prediction method proposed can achieve high precision, that is, the root-mean-square prediction errors for roll, pitch and yaw, are 0.26 deg, 0...

  1. Application of prediction of equilibrium to servo-controlled calorimetry measurements

    International Nuclear Information System (INIS)

    Mayer, R.L. II.

    1987-01-01

    Research was performed to develop an endpoint prediction algorithm for use with calorimeters operating in the digital servo-controlled mode. The purpose of this work was to reduce calorimetry measurement times while maintaining the high degree of precision and low bias expected from calorimetry measurements. Data from routine operation of two calorimeters were used to test predictive models at each stage of development against time savings, precision, and robustness criteria. The results of the study indicated that calorimetry measurement times can be significantly reduced using this technique. The time savings is, however, dependent on parameters in the digital servo-control algorithm and on packaging characteristics of measured items. 7 refs., 4 figs., 1 tab

  2. Evaluation of two methods of predicting MLC leaf positions using EPID measurements

    International Nuclear Information System (INIS)

    Parent, Laure; Seco, Joao; Evans, Phil M.; Dance, David R.; Fielding, Andrew

    2006-01-01

    In intensity modulated radiation treatments (IMRT), the position of the field edges and the modulation within the beam are often achieved with a multileaf collimator (MLC). During the MLC calibration process, due to the finite accuracy of leaf position measurements, a systematic error may be introduced to leaf positions. Thereafter leaf positions of the MLC depend on the systematic error introduced on each leaf during MLC calibration and on the accuracy of the leaf position control system (random errors). This study presents and evaluates two methods to predict the systematic errors on the leaf positions introduced during the MLC calibration. The two presented methods are based on a series of electronic portal imaging device (EPID) measurements. A comparison with film measurements showed that the EPID could be used to measure leaf positions without introducing any bias. The first method, referred to as the 'central leaf method', is based on the method currently used at this center for MLC leaf calibration. It mimics the manner in which leaf calibration parameters are specified in the MLC control system and consequently is also used by other centers. The second method, a new method proposed by the authors and referred to as the 'individual leaf method', involves the measurement of two positions for each leaf (-5 and +15 cm) and the interpolation and extrapolation from these two points to any other given position. The central leaf method and the individual leaf method predicted leaf positions at prescribed positions of -11, 0, 5, and 10 cm within 2.3 and 1.0 mm, respectively, with a standard deviation (SD) of 0.3 and 0.2 mm, respectively. The individual leaf method provided a better prediction of the leaf positions than the central leaf method. Reproducibility tests for leaf positions of -5 and +15 cm were performed. The reproducibility was within 0.4 mm on the same day and 0.4 mm six weeks later (1 SD). Measurements at gantry angles of 0 deg., 90 deg., and 270 deg
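    A small sketch of the 'individual leaf method' as described, fitting for each leaf a straight line through the EPID-measured positions at the -5 and +15 cm calibration points and interpolating or extrapolating to other prescribed positions; the readings below are hypothetical.

```python
import numpy as np

PRESCRIBED_CAL = np.array([-5.0, 15.0])    # cm, the two calibration positions per leaf

def leaf_model(measured_at_cal):
    """Fit measured = a * prescribed + b for one leaf from its two calibration readings."""
    a, b = np.polyfit(PRESCRIBED_CAL, measured_at_cal, 1)
    return lambda prescribed: a * np.asarray(prescribed) + b

# hypothetical EPID readings for one leaf: a small offset plus a slight gain error
predict = leaf_model(np.array([-5.12, 14.95]))
for pos in (-11.0, 0.0, 5.0, 10.0):
    print(f"prescribed {pos:6.1f} cm -> predicted measured position {predict(pos):7.3f} cm")
```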

  3. No correlation between ultrasound placental grading at 31-34 weeks of gestation and a surrogate estimate of organ function at term obtained by stereological analysis.

    Science.gov (United States)

    Yin, T T; Loughna, P; Ong, S S; Padfield, J; Mayhew, T M

    2009-08-01

    We test the experimental hypothesis that early changes in the ultrasound appearance of the placenta reflect poor or reduced placental function. The sonographic (Grannum) grade of placental maturity was compared to placental function as expressed by the morphometric oxygen diffusive conductance of the villous membrane. Ultrasonography was used to assess the Grannum grade of 32 placentas at 31-34 weeks of gestation. Indications for the scans included a history of previous fetal abnormalities, previous fetal growth problems or suspicion of IUGR. Placentas were classified from grade 0 (most immature) to grade III (most mature). We did not exclude smokers or complicated pregnancies as we aimed to correlate the early appearance of mature placentas with placental function. After delivery, microscopical fields on formalin-fixed, trichrome-stained histological sections of each placenta were obtained by multistage systematic uniform random sampling. Using design-based stereological methods, the exchange surface areas of peripheral (terminal and intermediate) villi and their fetal capillaries and the arithmetic and harmonic mean thicknesses of the villous membrane (maternal surface of villous trophoblast to adluminal surface of vascular endothelium) were estimated. An index of the variability in thickness of this membrane, and an estimate of its oxygen diffusive conductance, were derived secondarily as were estimates of the mean diameters and total lengths of villi and fetal capillaries. Group comparisons were drawn using analysis of variance. We found no significant differences in placental volume or composition or in the dimensions or diffusive conductances of the villous membrane. Subsequent exclusion of smokers did not alter these main findings. Grannum grades at 31-34 weeks of gestation appear not to provide reliable predictors of the functional capacity of the term placenta as expressed by the surrogate measure, morphometric diffusive conductance.
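    For readers unfamiliar with the stereological quantities involved, the sketch below illustrates, under simplifying assumptions, the kind of calculation behind the surrogate measure: a harmonic mean villous-membrane thickness from orthogonal intercept lengths, and a conductance proportional to the exchange surface divided by that thickness. The surface averaging, intercept values and unit diffusion coefficient are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def harmonic_mean_thickness(intercepts_um):
    """Harmonic mean of orthogonal intercept lengths across the villous membrane (um)."""
    t = np.asarray(intercepts_um, dtype=float)
    return t.size / np.sum(1.0 / t)

def diffusive_conductance(villous_surface_m2, capillary_surface_m2,
                          harmonic_thickness_um, krogh_coefficient=1.0):
    """Simplified conductance ~ K * mean exchange surface / harmonic mean thickness.
    krogh_coefficient is left at 1.0, so the result is in relative (arbitrary) units."""
    mean_surface = 0.5 * (villous_surface_m2 + capillary_surface_m2)
    return krogh_coefficient * mean_surface / (harmonic_thickness_um * 1e-6)

intercepts = [3.1, 4.8, 2.7, 6.0, 3.9, 5.2, 4.4]   # um, hypothetical intercept lengths
tau_h = harmonic_mean_thickness(intercepts)
print(f"harmonic mean thickness: {tau_h:.2f} um")
print(f"relative diffusive conductance: {diffusive_conductance(12.0, 11.0, tau_h):.3g}")
```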

  4. Sorption to soil, biochar and compost: is prediction to multicomponent mixtures possible based on single sorbent measurements?

    Directory of Open Access Journals (Sweden)

    Melanie Kah

    2018-06-01

    Full Text Available Amendment with biochar and/or compost has been proposed as a strategy to remediate soil contaminated with low levels of polycyclic aromatic hydrocarbons. The strong sorption potential of biochar can help sequester contaminants while the compost may promote their degradation. An improved understanding of how sorption evolves upon soil amendment is an essential step towards the implementation of the approach. The present study reports on the sorption of pyrene to two soils, four biochars and one compost. Detailed isotherm analyses across a wide range of concentrations confirmed that soil amendments can significantly increase the sorption of pyrene. Comparisons of data obtained by a classical batch and a passive sampling method suggest that dissolved organic matter did not play a significant role in the sorption of pyrene. The addition of 10% compost to soil led to a moderate increase in sorption (<2-fold), which could be well predicted based on measurements of sorption to the individual components. Hence, our results suggest that the sorption of pyrene to soil and compost can be relatively well approximated by an additive process. The addition of 5% biochar to soil (with or without compost) led to a major increase in the sorption of pyrene (2.5–4.7-fold), which was, however, much smaller than that suggested based on the sorption measured on the three individual components. Results suggest that the strong sorption to the biochar was attenuated by up to 80% in the presence of soil and compost, most likely due to surface and pore blockage. Results were very similar in the two soils considered, and collectively suggest that combined amendments with compost and biochar may be a useful approach to remediate soils with low levels of contamination. Further studies carried out in more realistic settings and over longer periods of time are the next step to evaluate the long-term viability of remediation approaches based on biochar amendments.

  5. A new measure-correlate-predict approach for resource assessment

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Landberg, L [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark); Madsen, H [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    In order to find reasonable candidate sites for wind farms, it is of great importance to be able to calculate the wind resource at potential sites. One way to solve this problem is to measure wind speed and direction at the site, and use these measurements to predict the resource. If the measurements at the potential site cover less than e.g. one year, which most likely will be the case, it is not possible to get a reliable estimate of the long-term resource using this approach. If long-term measurements from e.g. some nearby meteorological station are available, however, then statistical methods can be used to find a relation between the measurements at the site and at the meteorological station. This relation can then be used to transform the long-term measurements to the potential site, and the resource can be calculated using the transformed measurements. Here, a varying-coefficient model, estimated using local regression, is applied in order to establish a relation between the measurements. The approach is evaluated using measurements from two sites, located approximately two kilometres apart, and the results show that the resource in this case can be predicted accurately, although this approach has serious shortcomings. (au)
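    As a simplified stand-in for the varying-coefficient/local-regression model of the paper, the sketch below fits a sector-binned linear measure-correlate-predict relation on synthetic concurrent data and applies it to long-term reference data.

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic concurrent data: reference (met station) and target-site wind speed and direction
n = 5000
ref_dir = rng.uniform(0.0, 360.0, n)
ref_spd = rng.weibull(2.0, n) * 8.0
site_spd = ref_spd * (0.8 + 0.2 * np.cos(np.radians(ref_dir))) + rng.normal(0.0, 0.5, n)

def fit_mcp(ref_spd, ref_dir, site_spd, n_sectors=12):
    """Fit site = a*ref + b separately in each wind-direction sector (simple MCP)."""
    edges = np.linspace(0.0, 360.0, n_sectors + 1)
    coeffs = [np.polyfit(ref_spd[(ref_dir >= lo) & (ref_dir < hi)],
                         site_spd[(ref_dir >= lo) & (ref_dir < hi)], 1)
              for lo, hi in zip(edges[:-1], edges[1:])]
    table = np.array(coeffs)
    def predict(r_spd, r_dir):
        sector = np.minimum((np.asarray(r_dir) / (360.0 / n_sectors)).astype(int), n_sectors - 1)
        a, b = table[sector].T
        return a * r_spd + b
    return predict

predict = fit_mcp(ref_spd, ref_dir, site_spd)
# apply to "long-term" reference data to estimate the site resource (mean speed here)
lt_dir = rng.uniform(0.0, 360.0, 20000)
lt_spd = rng.weibull(2.0, 20000) * 8.0
print("estimated long-term mean site speed:", round(float(predict(lt_spd, lt_dir).mean()), 2), "m/s")
```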

  6. Pressure Prediction of Coal Slurry Transportation Pipeline Based on Particle Swarm Optimization Kernel Function Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Xue-cun Yang

    2015-01-01

    Full Text Available For the coal slurry pipeline blockage prediction problem, through analysis of the actual scene it is determined that pressure prediction at each measuring point is the premise of pipeline blockage prediction. The kernel function of the support vector machine is introduced into the extreme learning machine, the parameters are optimized by the particle swarm algorithm, and a blockage prediction method based on a particle swarm optimization kernel function extreme learning machine (PSOKELM) is put forward. The actual test data from the HuangLing coal gangue power plant are used for simulation experiments and compared with a support vector machine prediction model optimized by the particle swarm algorithm (PSOSVM) and a kernel function extreme learning machine prediction model (KELM). The results prove that the mean square error (MSE) for the prediction model based on PSOKELM is 0.0038 and the correlation coefficient is 0.9955, which is superior to the prediction model based on PSOSVM in speed and accuracy and superior to the KELM prediction model in accuracy.

  7. Prediction based on mean subset

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Brown, P. J.; Madsen, Henrik

    2002-01-01

    Shrinkage methods have traditionally been applied in prediction problems. In this article we develop a shrinkage method (mean subset) that forms an average of regression coefficients from individual subsets of the explanatory variables. A Bayesian approach is taken to derive an expression of how the coefficient vectors from each subset should be weighted. It is not computationally feasible to calculate the mean subset coefficient vector for larger problems, and thus we suggest an algorithm to find an approximation to the mean subset coefficient vector. In a comprehensive Monte Carlo simulation study, it is found that the proposed mean subset method has better prediction performance than prediction based on the best subset method, and in some settings also better than the ridge regression and lasso methods. The conclusions drawn from the Monte Carlo study are corroborated in an example in which prediction…
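    A brute-force sketch of the mean-subset idea, averaging OLS coefficient vectors over all small subsets with score-based weights; the BIC-type weighting below is a stand-in for the Bayesian expression derived in the article, and the data are synthetic.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

# synthetic regression problem with a few truly active predictors
n, p = 80, 6
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, 0.0, -2.0, 0.0, 0.8, 0.0])
y = X @ beta_true + rng.normal(0.0, 1.0, n)

def mean_subset_coefficients(X, y, max_size=3):
    """Average OLS coefficient vectors over all subsets up to max_size, weighted by a
    BIC-type score (a stand-in for the article's Bayesian weights and approximation)."""
    n, p = X.shape
    coefs, bics = [], []
    for k in range(1, max_size + 1):
        for subset in combinations(range(p), k):
            Xs = X[:, subset]
            b = np.linalg.lstsq(Xs, y, rcond=None)[0]
            rss = np.sum((y - Xs @ b) ** 2)
            bics.append(n * np.log(rss / n) + k * np.log(n))
            full = np.zeros(p)
            full[list(subset)] = b
            coefs.append(full)
    bics = np.asarray(bics)
    w = np.exp(-0.5 * (bics - bics.min()))
    return np.average(np.asarray(coefs), axis=0, weights=w / w.sum())

print("mean-subset coefficients:", np.round(mean_subset_coefficients(X, y), 2))
```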

  8. Predictive based monitoring of nuclear plant component degradation using support vector regression

    International Nuclear Information System (INIS)

    Agarwal, Vivek; Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.

    2015-01-01

    Nuclear power plants (NPPs) are large installations comprised of many active and passive assets. Degradation monitoring of all these assets is an expensive (labor cost) and highly demanding task. In this paper a framework based on Support Vector Regression (SVR) for online surveillance of critical parameter degradation of NPP components is proposed. In this case, on-time replacement or maintenance of components will prevent potential plant malfunctions and reduce the overall operational cost. In the current work, we apply SVR equipped with a Gaussian kernel function to monitor components. Monitoring includes the one-step-ahead prediction of the component's respective operational quantity using the SVR model, while the SVR model is trained using a set of previously recorded degradation histories of similar components. Predictive capability of the model is evaluated upon arrival of a sensor measurement, which is compared to the component failure threshold. A maintenance decision is based on a fuzzy inference system that utilizes three parameters: (i) the prediction evaluation in the previous steps, (ii) the predicted value of the current step, and (iii) the difference between the current predicted value and the component failure threshold. The proposed framework will be tested on turbine blade degradation data.
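    A minimal sketch of one-step-ahead degradation prediction with an RBF-kernel SVR compared against a failure threshold; the fuzzy inference step is omitted, and the degradation history, hyperparameters and threshold are invented.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)

# synthetic degradation history: a slowly growing parameter with measurement noise
t = np.arange(200, dtype=float)
signal = 0.5 + 0.004 * t + 0.00002 * t**2 + rng.normal(0.0, 0.01, t.size)

# lagged features for one-step-ahead prediction: predict x[k] from the previous 5 samples
lags = 5
X = np.array([signal[i - lags:i] for i in range(lags, len(signal))])
y = signal[lags:]

model = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.005).fit(X[:150], y[:150])

failure_threshold = 1.6                      # hypothetical failure limit
for k in range(150, 160):
    pred = model.predict(X[k].reshape(1, -1))[0]
    flag = "ALERT" if pred >= failure_threshold else "ok"
    print(f"step {k}: predicted {pred:.3f}, measured {y[k]:.3f} [{flag}]")
```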

  9. In reactor measurements, modeling and assessments to predict liquid injection shutdown system nozzle to Calandria tube time to contact

    International Nuclear Information System (INIS)

    Kirstein, K.; Kalenchuk, D.

    2011-01-01

    Over the past few years there has been an expanding effort to assess the potential for Calandria Tubes (CTs) coming into contact with Liquid Injection Shutdown System (LISS) Nozzles to ensure continued contact-free operation as required by CSA N285.4. LISS Nozzles (LINs), which run perpendicular to and between rows of fuel channels, sag at a slower rate than the fuel channels. As a result certain LINs may come in contact with CTs above them. The CT/LIN gaps can be predicted from calculated CT sag, LIN sag and a number of component and installation tolerances. This method however results in very conservative predictions when compared to measurements, confirmed with the in reactor measurements initiated in 2000, when gaps were successfully measured the first time using images obtained from a camera-assisted measurement tool inserted into the calandria. To reduce the conservatism of the CT/LIN gap predictions, statistical CT/LIN gap models are used instead. They are derived from a comparison between calculated gaps based on nominal dimensions and the visual image based measured gaps. These reactor specific (typically 95% confidence level) CT/LIN gap models account for all uncertainties and deviations from nominal values. Prediction error margins reduce as more in-reactor gap measurements become available. Each year more measurements are being made using this standardized visual CT/LIN proximity method. The subsequently prepared reactor-specific models have been used to provide time to contact for every channel above the LINs at these stations. In a number of cases it has been used to demonstrate that the reactor can be operated to its end of life before refurbishment with no predicted contact, or specific at-risk channels have been identified for which appropriate remedial actions could be implemented in a planned manner. (author)

  10. Validation of measured poleward TEC gradient using multi-station GPS with Artificial Neural Network based TEC model in low latitude region for developing predictive capability of ionospheric scintillation

    Science.gov (United States)

    Sur, D.; Paul, A.

    2017-12-01

    The equatorial ionosphere shows sharp diurnal and latitudinal Total Electron Content (TEC) variations over a major part of the day. Equatorial ionosphere also exhibits intense post-sunset ionospheric irregularities. Accurate prediction of TEC in these low latitudes is not possible from standard ionospheric models. An Artificial Neural Network (ANN) based Vertical TEC (VTEC) model has been designed using TEC data in low latitude Indian longitude sector for accurate prediction of VTEC. GPS TEC data from the stations Calcutta (22.58°N, 88.38°E geographic, magnetic dip 32°), Baharampore (24.09°N, 88.25°E geographic, magnetic dip 35°) and Siliguri (26.72°N, 88.39°E geographic; magnetic dip 40°) are used as training dataset for the duration of January 2007-September 2011. Poleward VTEC gradients from northern EIA crest to region beyond EIA crest have been calculated from measured VTEC and compared with that obtained from ANN based VTEC model. TEC data from Calcutta and Siliguri are used to compute VTEC gradients during April 2013 and August-September 2013. It has been observed that poleward VTEC gradient computed from ANN based TEC model has shown good correlation with measured values during vernal and autumnal equinoxes of high solar activity periods of 2013. Possible correlation between measured poleward TEC gradients and post-sunset scintillations (S4 ≥ 0.4) from northern crest of EIA has been observed in this paper. From the observation, a suitable threshold poleward VTEC gradient has been proposed for possible occurrence of post-sunset scintillations at northern crest of EIA along 88°E longitude. Poleward VTEC gradients obtained from ANN based VTEC model are used to forecast possible ionospheric scintillation after post-sunset period using the threshold value. It has been observed that these predicted VTEC gradients obtained from ANN based VTEC model can forecast post-sunset L-band scintillation with an accuracy of 67% to 82% in this dynamic low latitude

  11. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models and the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, the resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.

  12. Comparison of predicted versus measured dose rates for low-level radioactive waste cask shipments

    International Nuclear Information System (INIS)

    Macher, Martin S.

    1992-01-01

    Shippers of low-level radioactive waste must select casks which will provide sufficient shielding to keep dose rates below the federal limit of 10 mr/hr at 2 meters from the vehicle. Chem-Nuclear Systems, Inc. uses a cask selection methodology which is based on shielding analysis code predictions with an additional factor of safety applied to compensate for inhomogeneities in the waste, uncertainties in waste characterization, and inaccuracy in the calculational methods. This proven cask selection methodology is explained and suggested factors of safety are presented based on comparisons of predicted and measured dose rates. A safety factor of 2 is shown to be generally appropriate for relatively homogeneous waste and a safety factor of between 3 and 4 is shown to be generally appropriate for relatively inhomogeneous wastes. (author)

  13. The statistical prediction of offshore winds from land-based data for wind-energy applications

    DEFF Research Database (Denmark)

    Walmsley, J.L.; Barthelmie, R.J.; Burrows, W.R.

    2001-01-01

    Land-based meteorological measurements at two locations on the Danish coast are used to predict offshore wind speeds. Offshore wind-speed data are used only for developing the statistical prediction algorithms and for verification. As a first step, the two datasets were separated into nine percentile-based bins, with a minimum of 30 data records in each bin. Next, the records were randomly selected with approximately 70% of the data in each bin being used as a training set for development of the prediction algorithms, and the remaining 30% being reserved as a test set for evaluation purposes. The binning procedure ensured that both training and test sets fairly represented the overall data distribution. To base the conclusions on firmer ground, five permutations of these training and test sets were created. Thus, all calculations were based on five cases, each one representing a different random…

  14. Transcriptome dynamics-based operon prediction in prokaryotes.

    Science.gov (United States)

    Fortino, Vittorio; Smolander, Olli-Pekka; Auvinen, Petri; Tagliaferri, Roberto; Greco, Dario

    2014-05-16

    Inferring operon maps is crucial to understanding the regulatory networks of prokaryotic genomes. Recently, RNA-seq based transcriptome studies revealed that in many bacterial species the operon structure varies with changing environmental conditions. Therefore, new computational solutions that use both static and dynamic data are necessary to create condition-specific operon predictions. In this work, we propose a novel classification method that integrates RNA-seq based transcriptome profiles with genomic sequence features to accurately identify the operons that are expressed under a measured condition. The classifiers are trained on a small set of confirmed operons and then used to classify the remaining gene pairs of the organism studied. Finally, by linking consecutive gene pairs classified as operons, our computational approach produces condition-dependent operon maps. We evaluated our approach on various RNA-seq expression profiles of the bacteria Haemophilus somni, Porphyromonas gingivalis, Escherichia coli and Salmonella enterica. Our results demonstrate that, using features depending on both transcriptome dynamics and genome sequence characteristics, we can identify operon pairs with high accuracy. Moreover, the combination of DNA sequence and expression data results in more accurate predictions than each one alone. We present a computational strategy for the comprehensive analysis of condition-dependent operon maps in prokaryotes. Our method can be used to generate condition-specific operon maps of many bacterial organisms for which high-resolution transcriptome data is available.

  15. Adaptive Granulation-Based Prediction for Energy System of Steel Industry.

    Science.gov (United States)

    Wang, Tianyu; Han, Zhongyang; Zhao, Jun; Wang, Wei

    2018-01-01

The flow variation tendency of byproduct gas plays a crucial role for energy scheduling in steel industry. An accurate prediction of its future trends will be significantly beneficial for the economic profits of steel enterprise. In this paper, a long-term prediction model for the energy system is proposed by providing an adaptive granulation-based method that considers the production semantics involved in the fluctuation tendency of the energy data, and partitions them into a series of information granules. To fully reflect the corresponding data characteristics of the formed unequal-length temporal granules, a 3-D feature space consisting of the timespan, the amplitude and the linetype is designed as linguistic descriptors. In particular, a collaborative-conditional fuzzy clustering method is proposed to granularize the tendency-based feature descriptors and specifically measure the amplitude variation of industrial data which plays a dominant role in the feature space. To quantify the performance of the proposed method, a series of real-world industrial data coming from the energy data center of a steel plant is employed to conduct the comparative experiments. The experimental results demonstrate that the proposed method satisfies the requirements of practically viable prediction.

  16. Estimating Time-Varying PCB Exposures Using Person-Specific Predictions to Supplement Measured Values: A Comparison of Observed and Predicted Values in Two Cohorts of Norwegian Women.

    Science.gov (United States)

    Nøst, Therese Haugdahl; Breivik, Knut; Wania, Frank; Rylander, Charlotta; Odland, Jon Øyvind; Sandanger, Torkjel Manning

    2016-03-01

    Studies on the health effects of polychlorinated biphenyls (PCBs) call for an understanding of past and present human exposure. Time-resolved mechanistic models may supplement information on concentrations in individuals obtained from measurements and/or statistical approaches if they can be shown to reproduce empirical data. Here, we evaluated the capability of one such mechanistic model to reproduce measured PCB concentrations in individual Norwegian women. We also assessed individual life-course concentrations. Concentrations of four PCB congeners in pregnant (n = 310, sampled in 2007-2009) and postmenopausal (n = 244, 2005) women were compared with person-specific predictions obtained using CoZMoMAN, an emission-based environmental fate and human food-chain bioaccumulation model. Person-specific predictions were also made using statistical regression models including dietary and lifestyle variables and concentrations. CoZMoMAN accurately reproduced medians and ranges of measured concentrations in the two study groups. Furthermore, rank correlations between measurements and predictions from both CoZMoMAN and regression analyses were strong (Spearman's r > 0.67). Precision in quartile assignments from predictions was strong overall as evaluated by weighted Cohen's kappa (> 0.6). Simulations indicated large inter-individual differences in concentrations experienced in the past. The mechanistic model reproduced all measurements of PCB concentrations within a factor of 10, and subject ranking and quartile assignments were overall largely consistent, although they were weak within each study group. Contamination histories for individuals predicted by CoZMoMAN revealed variation between study subjects, particularly in the timing of peak concentrations. Mechanistic models can provide individual PCB exposure metrics that could serve as valuable supplements to measurements.
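
    As a small illustration of the two agreement statistics used above (Spearman rank correlation between measured and predicted concentrations, and weighted Cohen's kappa on quartile assignments), the following sketch uses SciPy and scikit-learn with synthetic values; the data, the quadratic weighting and the library choice are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
measured = rng.lognormal(mean=0.0, sigma=0.5, size=200)     # stand-in PCB concentrations
predicted = measured * rng.lognormal(0.0, 0.3, size=200)    # model with multiplicative error

rho, _ = spearmanr(measured, predicted)                     # rank agreement
q_meas = np.digitize(measured, np.quantile(measured, [0.25, 0.5, 0.75]))
q_pred = np.digitize(predicted, np.quantile(predicted, [0.25, 0.5, 0.75]))
kappa = cohen_kappa_score(q_meas, q_pred, weights="quadratic")  # quartile agreement
print(f"Spearman r = {rho:.2f}, weighted kappa = {kappa:.2f}")
```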

  17. Thermal deformation prediction in reticles for extreme ultraviolet lithography based on a measurement-dependent low-order model

    NARCIS (Netherlands)

    Bikcora, C.; Weiland, S.; Coene, W.M.J.

    2014-01-01

    In extreme ultraviolet lithography, imaging errors due to thermal deformation of reticles are becoming progressively intolerable as the source power increases. Despite this trend, such errors can be mitigated by adjusting the wafer and reticle stages based on a set of predicted deformation-induced

  18. Validity of impedance-based equations for the prediction of total body water as measured by deuterium dilution in African women

    International Nuclear Information System (INIS)

    Dioum, Aissatou S.; Cisse, Aita; Wade, Salimata; Gartner, Agnes; Delpeuch, Francis; Maire, Bernard; Schutz, Yves

    2005-01-01

Background: Little information is available on the validity of simple and indirect body-composition methods in non-Western populations. Equations for predicting body composition are population-specific, and body composition differs between blacks and whites. Objective: We tested the hypothesis that the validity of equations for predicting total body water (TBW) from bioelectrical impedance analysis measurements is likely to depend on the racial background of the group from which the equations were derived. Design: The hypothesis was tested by comparing, in 36 African women, TBW values measured by deuterium dilution with those predicted by 23 equations developed in white, African American, or African subjects. These cross-validations in our African sample were also compared, whenever possible, with results from other studies in black subjects. Results: Errors in predicting TBW showed acceptable values (1.3-1.9 kg) in all cases, whereas a large range of bias (0.2-6.1 kg) was observed independently of the ethnic origin of the sample from which the equations were derived. Three equations (2 from whites and 1 from blacks) showed nonsignificant bias and could be used in Africans. In all other cases, we observed either an overestimation or underestimation of TBW with variable bias values, regardless of racial background, yielding no clear trend for validity as a function of ethnic origin. Conclusions: The findings of this cross-validation study emphasize the need for further fundamental research to explore the causes of the poor validity of TBW prediction equations across populations rather than the need to develop new prediction equations for use in Africa. (Authors)

  19. Period, epoch, and prediction errors of ephemerides from continuous sets of timing measurements

    Science.gov (United States)

    Deeg, H. J.

    2015-06-01

Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of these time series is derived, σP = σT (12 / (N³ - N))^(1/2), where σP is the period error, σT the timing error of a single measurement, and N the number of measurements. Compared to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemerides, where epoch errors are quoted for the first time measurement, are prone to an overestimation of the error of that prediction. This may be avoided by a correction for the duration of the time series. An alternative is the derivation of ephemerides whose reference epoch and epoch error are given for the centre of the time series. For long continuous or near-continuous time series whose acquisition is completed, such central epochs should be the preferred way for the quotation of linear ephemerides. While this work was motivated by the analysis of eclipse timing measures in space-based light curves, it should be applicable to any other problem with an uninterrupted sequence of discrete timings for which the determination of a zero point, of a constant period and of the associated errors is needed.
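
    The period-error formula quoted above translates directly into code. The following sketch also propagates the errors to the timing of a future event under the standard linear-ephemeris assumption T(E) = T0 + E·P with independent epoch and period errors; the numbers in the example are arbitrary.

```python
import math

def period_error(sigma_T, N):
    """sigma_P = sigma_T * sqrt(12 / (N**3 - N)) for N uniformly weighted timings."""
    return sigma_T * math.sqrt(12.0 / (N**3 - N))

def prediction_error(sigma_T0, sigma_P, cycles_since_epoch):
    """1-sigma timing error of an event 'cycles_since_epoch' cycles after the
    reference epoch; smallest when the epoch is quoted at the series centre."""
    return math.hypot(sigma_T0, cycles_since_epoch * sigma_P)

# Example: 500 transits, each timed to 30 s; predict an event 1000 cycles later.
sigma_P = period_error(30.0, 500)
print(sigma_P, prediction_error(30.0 / math.sqrt(500), sigma_P, 1000))
```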

  20. Extending Theory-Based Quantitative Predictions to New Health Behaviors.

    Science.gov (United States)

    Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O

    2016-04-01

Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.

  1. Predicting Document Retrieval System Performance: An Expected Precision Measure.

    Science.gov (United States)

    Losee, Robert M., Jr.

    1987-01-01

    Describes an expected precision (EP) measure designed to predict document retrieval performance. Highlights include decision theoretic models; precision and recall as measures of system performance; EP graphs; relevance feedback; and computing the retrieval status value of a document for two models, the Binary Independent Model and the Two Poisson…

  2. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.

    Science.gov (United States)

    Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu

    2018-01-19

Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison with the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data using sensing
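
    The first two stages of the pipeline described above can be sketched as follows, assuming a simple scalar random-walk Kalman filter and the statsmodels ARIMA implementation; the deformation series is synthetic, and the GARCH refinement of the ARIMA residuals (for which a package such as arch could be used) is not shown.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def kalman_denoise(z, q=1e-4, r=1e-2):
    """1-D Kalman filter assuming a random-walk state x_k = x_{k-1} + w_k."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                    # predict
        gain = p / (p + r)           # update
        x = x + gain * (zk - x)
        p = (1.0 - gain) * p
        out[k] = x
    return out

rng = np.random.default_rng(0)
raw = np.cumsum(rng.normal(0, 0.1, 300)) + rng.normal(0, 0.5, 300)  # noisy deformation-like series

denoised = kalman_denoise(raw)                 # stage 1: denoising
fit = ARIMA(denoised, order=(2, 1, 1)).fit()   # stage 2: linear recursive model
print(fit.forecast(steps=5))                   # five-step-ahead prediction
```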

  3. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model

    Directory of Open Access Journals (Sweden)

    Jingzhou Xin

    2018-01-01

Full Text Available Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison with the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data

  4. Multi-Objective Predictive Balancing Control of Battery Packs Based on Predictive Current

    Directory of Open Access Journals (Sweden)

    Wenbiao Li

    2016-04-01

Full Text Available Various balancing topology and control methods have been proposed for the inconsistency problem of battery packs. However, these strategies only focus on a single objective, ignore the mutual interaction among various factors and are only based on the external performance of the battery pack inconsistency, such as voltage balancing and state of charge (SOC) balancing. To solve these problems, multi-objective predictive balancing control (MOPBC) based on predictive current is proposed in this paper: in the driving process of an electric vehicle, predictive control is used to predict the battery pack output current at the next time step. Based on this information, the impact on the battery pack temperature caused by the output current can be obtained. Then, this influence is added to the battery pack balancing control, which makes the present degradation, temperature, and SOC imbalances reach balance automatically due to the change of the output current at the next moment. According to MOPBC, the simulation model of the balancing circuit is built with four cells in Matlab/Simulink. The simulation results show that MOPBC is not only better than the other traditional balancing control strategies but also reduces the energy loss in the balancing process.

  5. Construction of Models for Nondestructive Prediction of Ingredient Contents in Blueberries by Near-infrared Spectroscopy Based on HPLC Measurements.

    Science.gov (United States)

    Bai, Wenming; Yoshimura, Norio; Takayanagi, Masao; Che, Jingai; Horiuchi, Naomi; Ogiwara, Isao

    2016-06-28

    Nondestructive prediction of ingredient contents of farm products is useful to ship and sell the products with guaranteed qualities. Here, near-infrared spectroscopy is used to predict nondestructively total sugar, total organic acid, and total anthocyanin content in each blueberry. The technique is expected to enable the selection of only delicious blueberries from all harvested ones. The near-infrared absorption spectra of blueberries are measured with the diffuse reflectance mode at the positions not on the calyx. The ingredient contents of a blueberry determined by high-performance liquid chromatography are used to construct models to predict the ingredient contents from observed spectra. Partial least squares regression is used for the construction of the models. It is necessary to properly select the pretreatments for the observed spectra and the wavelength regions of the spectra used for analyses. Validations are necessary for the constructed models to confirm that the ingredient contents are predicted with practical accuracies. Here we present a protocol to construct and validate the models for nondestructive prediction of ingredient contents in blueberries by near-infrared spectroscopy.
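
    As an illustration of the calibration step described above, the sketch below fits a partial least squares model to synthetic spectra and checks it by cross-validation; the data, the number of latent variables and the use of scikit-learn are assumptions, not the protocol's actual settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))                       # 60 berries x 200 NIR wavelengths (pretreated)
y = 2.0 * X[:, 50] + rng.normal(0, 0.1, size=60)     # stand-in for HPLC-measured total sugar

pls = PLSRegression(n_components=8)                  # number of latent variables must be tuned
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()    # validation by cross-prediction
print("RMSECV:", float(np.sqrt(np.mean((y - y_cv) ** 2))))

pls.fit(X, y)                                        # final model used to predict new berries
print(pls.predict(X[:3]).ravel())
```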

  6. Predicted and measured velocity distribution in a model heat exchanger

    International Nuclear Information System (INIS)

    Rhodes, D.B.; Carlucci, L.N.

    1984-01-01

    This paper presents a comparison between numerical predictions, using the porous media concept, and measurements of the two-dimensional isothermal shell-side velocity distributions in a model heat exchanger. Computations and measurements were done with and without tubes present in the model. The effect of tube-to-baffle leakage was also investigated. The comparison was made to validate certain porous media concepts used in a computer code being developed to predict the detailed shell-side flow in a wide range of shell-and-tube heat exchanger geometries

  7. Speech Intelligibility Prediction Based on Mutual Information

    DEFF Research Database (Denmark)

    Jensen, Jesper; Taal, Cees H.

    2014-01-01

    This paper deals with the problem of predicting the average intelligibility of noisy and potentially processed speech signals, as observed by a group of normal hearing listeners. We propose a model which performs this prediction based on the hypothesis that intelligibility is monotonically related...... to the mutual information between critical-band amplitude envelopes of the clean signal and the corresponding noisy/processed signal. The resulting intelligibility predictor turns out to be a simple function of the mean-square error (mse) that arises when estimating a clean critical-band amplitude using...... a minimum mean-square error (mmse) estimator based on the noisy/processed amplitude. The proposed model predicts that speech intelligibility cannot be improved by any processing of noisy critical-band amplitudes. Furthermore, the proposed intelligibility predictor performs well ( ρ > 0.95) in predicting...

  8. Unique contributions of dynamic versus global measures of parent-child interaction quality in predicting school adjustment.

    Science.gov (United States)

    Bardack, Sarah; Herbers, Janette E; Obradović, Jelena

    2017-09-01

    This study investigates the unique contribution of microsocial and global measures of parent-child positive coregulation (PCR) in predicting children's behavioral and social adjustment in school. Using a community sample of 102 children, ages 4-6, and their parents, we conducted nested path analytic models to identify the unique effects of 2 measures of PCR on school outcomes. Microsocial PCR independently predicted fewer externalizing and inattention/impulsive behaviors in school. Global PCR did not uniquely relate to children's behavioral and social adjustment outcomes. Household socioeconomic status was related to both microsocial and global measures of PCR, but not directly associated with school outcomes. Findings illustrate the importance of using dynamic measures of PCR based on microsocial coding to further understand how the quality of parent-child interaction is related to children's self-regulatory and social development during school transition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Earthquake prediction analysis based on empirical seismic rate: the M8 algorithm

    Science.gov (United States)

    Molchan, G.; Romashkova, L.

    2010-12-01

    The quality of space-time earthquake prediction is usually characterized by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local rate of alarm averaged in space. The most reasonable averaging measure for analysis of a prediction strategy is the normalized rate of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M >= 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw >= 5.5, 1977-2004, and the magnitude range of target events 8.0 <= M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm.

  10. Parathyroid Hormone Measurement in Prediction of Hypocalcaemia following Thyroidectomy

    International Nuclear Information System (INIS)

    Mehrvarz, S.; Mohebbi, H. A.; Motamedi, M. H. K.; Khatami, S. M.; Reazie, R.; Rasouli, H. R.

    2014-01-01

Objective: To determine the risk of postthyroidectomy hypocalcaemia by measuring parathyroid hormone (PTH) level after thyroidectomy. Study Design: Cross-sectional study. Place and Duration of Study: Baqiyatallah Hospital, Tehran, Iran, from March 2008 to July 2010. Methodology: All included patients were referred for total or near-total bilateral thyroidectomy. Serum calcium (Ca) and PTH levels were measured before and 24 hours after surgery. In cases of low Ca or development of hypocalcaemia symptoms, daily monitoring of Ca levels was continued. Data were analyzed using SPSS 20 software (SPSS, Chicago, IL, USA). A p-value less than 0.05 was considered statistically significant. To assess the standard value of useful predictive factors, we used receiver operating characteristic (ROC) curves. Results: Of the 99 patients who underwent bilateral thyroidectomy, 47 (47.5%) developed hypocalcaemia; of these, 12 (25.5%) became symptomatic, while 2 patients developed permanent hypoparathyroidism. After surgery, the mean rank of PTH level in the normocalcaemic and hypocalcaemic patients was 55.34 and 44.1, respectively (p = 0.052). Twenty-four hours after surgery, a 62% drop in PTH was associated with 83.3% of symptomatic hypocalcaemia cases. For the diagnosis of symptomatic hypocalcaemia, a 62% PTH drop had a sensitivity of 83.3% and a specificity of 90.8%. The areas under the ROC curve for postoperative PTH and PTH drop in diagnosing symptomatic hypocalcaemia were 0.835 and 0.873, respectively. Conclusion: Measuring PTH levels 24 hours after thyroidectomy is not, by itself, a reliable factor for predicting hypocalcaemia. For predicting the risk of hypocalcaemia after thyroidectomy, it is more reliable to measure the serum PTH level before and after the operation and to compare them using the percentage drop in PTH. (author)

  11. Comparison of predicted and measured variations of indoor radon concentration

    International Nuclear Information System (INIS)

    Arvela, H.; Voutilainen, A.; Maekelaeinen, I.; Castren, O.; Winqvist, K.

    1988-01-01

Predictions of the variations of indoor radon concentration were calculated using a model relating indoor radon concentration to radon entry rate, air infiltration and meteorological factors. These calculated variations have been compared with seasonal variations of 33 houses during 1-4 years, with winter-summer concentration ratios of 300 houses and the measured diurnal variation. In houses with a slab in ground contact the measured seasonal variations are quite often in agreement with variations predicted for nearly pure pressure difference driven flow. The contribution of a diffusion source is significant in houses with large porous concrete walls against the ground. Air flow due to seasonally variable thermal convection within eskers strongly affects the seasonal variations within houses located thereon. Measured and predicted winter-summer concentration ratios demonstrate that, on average, the ratio is a function of radon concentration. The ratio increases with increasing winter concentration. According to the model the diurnal maximum caused by a pressure difference driven flow occurs in the morning, a finding which is in agreement with the measurements. The model presented can be used for differentiating between factors affecting radon entry into houses. (author)

  12. Empirical comparison of web-based antimicrobial peptide prediction tools.

    Science.gov (United States)

    Gabere, Musa Nur; Noble, William Stafford

    2017-07-01

Antimicrobial peptides (AMPs) are innate immune molecules that exhibit activities against a range of microbes, including bacteria, fungi, viruses and protozoa. Recent increases in microbial resistance against current drugs have led to a concomitant increase in the need for novel antimicrobial agents. Over the last decade, a number of AMP prediction tools have been designed and made freely available online. These AMP prediction tools show potential to discriminate AMPs from non-AMPs, but the relative quality of the predictions produced by the various tools is difficult to quantify. We compiled two sets of AMP and non-AMP peptides, separated into three categories: antimicrobial, antibacterial and bacteriocins. Using these benchmark data sets, we carried out a systematic evaluation of ten publicly available AMP prediction methods. Among the six general AMP prediction tools (ADAM, CAMPR3(RF), CAMPR3(SVM), MLAMP, DBAASP and MLAMP), we find that CAMPR3(RF) provides a statistically significant improvement in performance, as measured by the area under the receiver operating characteristic (ROC) curve, relative to the other five methods. Surprisingly, for antibacterial prediction, the original AntiBP method significantly outperforms its successor, AntiBP2, on one benchmark dataset. The two bacteriocin prediction tools, BAGEL3 and BACTIBASE, both provide very good performance and BAGEL3 outperforms its predecessor, BACTIBASE, on the larger of the two benchmarks. gaberemu@ngha.med.sa or william-noble@uw.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  13. Consumer Neuroscience-Based Metrics Predict Recall, Liking and Viewing Rates in Online Advertising.

    Science.gov (United States)

    Guixeres, Jaime; Bigné, Enrique; Ausín Azofra, Jose M; Alcañiz Raya, Mariano; Colomer Granero, Adrián; Fuentes Hurtado, Félix; Naranjo Ornedo, Valery

    2017-01-01

The purpose of the present study is to investigate whether the effectiveness of a new ad on digital channels (YouTube) can be predicted by using neural networks and neuroscience-based metrics (brain response, heart rate variability and eye tracking). Neurophysiological records were obtained from 35 participants exposed to 8 relevant TV Super Bowl commercials. Correlations between the neurophysiological-based metrics, ad recall, ad liking, the ACE metrix score and the number of views on YouTube during a year were investigated. Our findings suggest a significant correlation between the neuroscience metrics and both self-reported measures of ad effectiveness and the direct number of views on the YouTube channel. In addition, using an artificial neural network based on the neuroscience metrics, the model classifies ads (82.9% average accuracy) and estimates the number of online views (mean error of 0.199). The results highlight the validity of neuromarketing-based techniques for predicting the success of advertising responses. Practitioners can consider the proposed methodology at the design stages of advertising content, thus enhancing advertising effectiveness. The study pioneers the use of neurophysiological methods in predicting advertising success in a digital context. This is the first article that has examined whether these measures could actually be used for predicting views for advertising on YouTube.
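
    A minimal sketch of the kind of classifier described above, mapping neuroscience-based summary metrics to an "effective ad" label with a small neural network; scikit-learn's MLPClassifier and the synthetic features and labels are assumptions for illustration, not the authors' model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(280, 6))      # 35 participants x 8 ads, 6 neurophysiological summaries each
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, size=280)) > 0   # hypothetical "high recall" label

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())      # average classification accuracy
```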

  14. Consumer Neuroscience-Based Metrics Predict Recall, Liking and Viewing Rates in Online Advertising

    Directory of Open Access Journals (Sweden)

    Jaime Guixeres

    2017-10-01

Full Text Available The purpose of the present study is to investigate whether the effectiveness of a new ad on digital channels (YouTube) can be predicted by using neural networks and neuroscience-based metrics (brain response, heart rate variability and eye tracking). Neurophysiological records from 35 participants were exposed to 8 relevant TV Super Bowl commercials. Correlations between neurophysiological-based metrics, ad recall, ad liking, the ACE metrix score and the number of views on YouTube during a year were investigated. Our findings suggest a significant correlation between neuroscience metrics and self-reported of ad effectiveness and the direct number of views on the YouTube channel. In addition, and using an artificial neural network based on neuroscience metrics, the model classifies (82.9% of average accuracy) and estimate the number of online views (mean error of 0.199). The results highlight the validity of neuromarketing-based techniques for predicting the success of advertising responses. Practitioners can consider the proposed methodology at the design stages of advertising content, thus enhancing advertising effectiveness. The study pioneers the use of neurophysiological methods in predicting advertising success in a digital context. This is the first article that has examined whether these measures could actually be used for predicting views for advertising on YouTube.

  15. Consumer Neuroscience-Based Metrics Predict Recall, Liking and Viewing Rates in Online Advertising

    Science.gov (United States)

    Guixeres, Jaime; Bigné, Enrique; Ausín Azofra, Jose M.; Alcañiz Raya, Mariano; Colomer Granero, Adrián; Fuentes Hurtado, Félix; Naranjo Ornedo, Valery

    2017-01-01

    The purpose of the present study is to investigate whether the effectiveness of a new ad on digital channels (YouTube) can be predicted by using neural networks and neuroscience-based metrics (brain response, heart rate variability and eye tracking). Neurophysiological records from 35 participants were exposed to 8 relevant TV Super Bowl commercials. Correlations between neurophysiological-based metrics, ad recall, ad liking, the ACE metrix score and the number of views on YouTube during a year were investigated. Our findings suggest a significant correlation between neuroscience metrics and self-reported of ad effectiveness and the direct number of views on the YouTube channel. In addition, and using an artificial neural network based on neuroscience metrics, the model classifies (82.9% of average accuracy) and estimate the number of online views (mean error of 0.199). The results highlight the validity of neuromarketing-based techniques for predicting the success of advertising responses. Practitioners can consider the proposed methodology at the design stages of advertising content, thus enhancing advertising effectiveness. The study pioneers the use of neurophysiological methods in predicting advertising success in a digital context. This is the first article that has examined whether these measures could actually be used for predicting views for advertising on YouTube. PMID:29163251

  16. The prediction of rotor rotational noise using measured fluctuating blade loads

    Science.gov (United States)

    Hosier, R. N.; Pegg, R. J.; Ramakrishnan, R.

    1974-01-01

    In tests conducted at the NASA Langley Research Center Helicopter Rotor Test Facility, simultaneous measurements of the high-frequency fluctuating aerodynamic blade loads and far-field radiated noise were made on a full-scale, nontranslating rotor system. After their characteristics were determined, the measured blade loads were used in an existing theory to predict the far-field rotational noise. A comparison of the calculated and measured rotational noise is presented with specific attention given to the effect of blade loading coefficients, chordwise loading distributions, blade loading phases, and observer azimuthal position on the predictions.

  17. Validation of Energy Expenditure Prediction Models Using Real-Time Shoe-Based Motion Detectors.

    Science.gov (United States)

    Lin, Shih-Yun; Lai, Ying-Chih; Hsia, Chi-Chun; Su, Pei-Fang; Chang, Chih-Han

    2017-09-01

This study aimed to verify and compare the accuracy of energy expenditure (EE) prediction models using shoe-based motion detectors with embedded accelerometers. Three physical activity (PA) datasets (unclassified, recognition, and intensity segmentation) were used to develop three prediction models. A multiple classification flow and these models were used to estimate EE. The "unclassified" dataset was defined as the data without PA recognition, the "recognition" as the data classified with PA recognition, and the "intensity segmentation" as the data with intensity segmentation. The three datasets contained accelerometer signals (quantified as signal magnitude area (SMA)) and net heart rate (HRnet). The accuracy of these models was assessed according to the deviation between physically measured EE and model-estimated EE. The variance between physically measured EE and model-estimated EE expressed by simple linear regressions was increased by 63% and 13% using SMA and HRnet, respectively. The accuracy of the EE predicted from accelerometer signals is influenced by the different activities that exhibit different count-EE relationships within the same prediction model. The recognition model provides a better estimation and lower variability of EE compared with the unclassified and intensity segmentation models. The proposed shoe-based motion detectors can improve the accuracy of EE estimation and have great potential to be used to manage everyday exercise in real time.

  18. Combined effects of form- and meaning-based predictability on perceived clarity of speech.

    Science.gov (United States)

    Signoret, Carine; Johnsrude, Ingrid; Classon, Elisabet; Rudner, Mary

    2018-02-01

    The perceptual clarity of speech is influenced by more than just the acoustic quality of the sound; it also depends on contextual support. For example, a degraded sentence is perceived to be clearer when the content of the speech signal is provided with matching text (i.e., form-based predictability) before hearing the degraded sentence. Here, we investigate whether sentence-level semantic coherence (i.e., meaning-based predictability), enhances perceptual clarity of degraded sentences, and if so, whether the mechanism is the same as that underlying enhancement by matching text. We also ask whether form- and meaning-based predictability are related to individual differences in cognitive abilities. Twenty participants listened to spoken sentences that were either clear or degraded by noise vocoding and rated the clarity of each item. The sentences had either high or low semantic coherence. Each spoken word was preceded by the homologous printed word (matching text), or by a meaningless letter string (nonmatching text). Cognitive abilities were measured with a working memory test. Results showed that perceptual clarity was significantly enhanced both by matching text and by semantic coherence. Importantly, high coherence enhanced the perceptual clarity of the degraded sentences even when they were preceded by matching text, suggesting that the effects of form- and meaning-based predictions on perceptual clarity are independent and additive. However, when working memory capacity indexed by the Size-Comparison Span Test was controlled for, only form-based predictions enhanced perceptual clarity, and then only at some sound quality levels, suggesting that prediction effects are to a certain extent dependent on cognitive abilities. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Feature-Based and String-Based Models for Predicting RNA-Protein Interaction

    Directory of Open Access Journals (Sweden)

    Donald Adjeroh

    2018-03-01

Full Text Available In this work, we study two approaches for the problem of RNA-Protein Interaction (RPI). In the first approach, we use a feature-based technique by combining extracted features from both sequences and secondary structures. The feature-based approach enhanced the prediction accuracy as it included much more available information about the RNA-protein pairs. In the second approach, we apply search algorithms and data structures to extract effective string patterns for prediction of RPI, using both sequence information (protein and RNA sequences) and structure information (protein and RNA secondary structures). This led to different string-based models for predicting interacting RNA-protein pairs. We show results that demonstrate the effectiveness of the proposed approaches, including comparative results against leading state-of-the-art methods.

  20. Laboratory-based and office-based risk scores and charts to predict 10-year risk of cardiovascular disease in 182 countries

    DEFF Research Database (Denmark)

    Ueda, Peter; Woodward, Mark; Lu, Yuan

    2017-01-01

    BACKGROUND: Worldwide implementation of risk-based cardiovascular disease (CVD) prevention requires risk prediction tools that are contemporarily recalibrated for the target country and can be used where laboratory measurements are unavailable. We present two cardiovascular risk scores, with and ...

  1. Electrochemical sensor for predicting transformer overload by phenol measurement

    Energy Technology Data Exchange (ETDEWEB)

    Bosworth, Timothy; Setford, Steven; Saini, Selwayan [Cranfield Centre for Analytical Science, Cranfield University, Silsoe, Beds MK45 4DT (United Kingdom); Heywood, Richard [National Grid Company Plc, Kelvin Avenue, Leatherhead, Surrey KT22 7ST (United Kingdom)

    2003-03-10

Transformer overload is a significant problem to the power transmission industry, with severe safety and cost implications. Overload may be predicted by measuring phenol levels in the transformer-insulating oil, arising from the thermolytic degradation of phenol-formaldehyde resins. The development of two polyphenol oxidase (PPO) sensors, based on monitoring the enzymatic consumption of oxygen using an oxygen electrode, or reduction of enzymatically generated o-quinone at a screen-printed electrode (SPE), for the measurement of phenol in transformer oil is reported. Ex-service oils were prepared either by extraction into aqueous electrolyte-buffer, or by direct dilution in propan-2-ol, the latter method being more amenable to simple at-line operation. The oxygen electrode, with a sensitivity of 2.87 nA µg⁻¹ ml⁻¹, RSD of 7.0-19.9% and accuracy of ±8.3% versus the industry standard International Electrotechnical Commission (IEC) method, proved superior to the SPE (sensitivity: 3.02 nA µg⁻¹ ml⁻¹; RSD: 8.9-18.3%; accuracy: ±7.9%) and was considerably more accurate at low phenol concentrations. However, the SPE approach is more amenable to field-based usage for reasons of device simplicity. The method has potential as a rapid and simple screening tool for the at-site monitoring of phenol in transformer oils, thereby reducing incidences of transformer failure.

  2. Measuring and predicting sooting tendencies of oxygenates, alkanes, alkenes, cycloalkanes, and aromatics on a unified scale

    Energy Technology Data Exchange (ETDEWEB)

    Das, Dhrubajyoti D.; St. John, Peter C.; McEnally, Charles S.; Kim, Seonah; Pfefferle, Lisa D.

    2018-04-01

Databases of sooting indices, based on measuring some aspect of sooting behavior in a standardized combustion environment, are useful in providing information on the comparative sooting tendencies of different fuels or pure compounds. However, newer biofuels have varied chemical structures including both aromatic and oxygenated functional groups, which expands the chemical space of relevant compounds. In this work, we propose a unified sooting tendency database for pure compounds, including both regular and oxygenated hydrocarbons, which is based on combining two disparate databases of yield-based sooting tendency measurements in the literature. Unification of the different databases was made possible by leveraging the greater dynamic range of the color ratio pyrometry soot diagnostic. This unified database contains a substantial number of pure compounds (≥ 400 total) from multiple categories of hydrocarbons important in modern fuels and establishes the sooting tendencies of aromatic and oxygenated hydrocarbons on the same numeric scale for the first time. Using this unified sooting tendency database, we have developed a predictive model for sooting behavior applicable to a broad range of hydrocarbons and oxygenated hydrocarbons. The model decomposes each compound into single-carbon fragments and assigns a sooting tendency contribution to each fragment based on regression against the unified database. The model's predictive accuracy (as demonstrated by leave-one-out cross-validation) is comparable to a previously developed, more detailed predictive model. The fitted model provides insight into the effects of chemical structure on soot formation, and cases where its predictions fail reveal the presence of more complicated kinetic sooting mechanisms. This work will therefore enable the rational design of low-sooting fuel blends from a wide range of feedstocks and chemical functionalities.
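
    The fragment-contribution idea described above amounts to a linear regression of sooting tendency on counts of single-carbon fragment types. The sketch below shows that regression on a tiny hypothetical dataset; the fragment categories and values are placeholders, not the published database.

```python
import numpy as np

# Rows: compounds, columns: counts of fragment types
# (e.g., aromatic C, aliphatic CH2, carbonyl C); y: yield-based sooting tendencies.
X = np.array([[6, 0, 0],     # benzene-like
              [0, 6, 0],     # n-hexane-like
              [0, 2, 1],     # small ester-like
              [4, 2, 0]])    # alkyl-aromatic-like
y = np.array([95.0, 12.0, 4.0, 70.0])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # per-fragment contributions
print("fragment contributions:", coef)
print("prediction for [2 aromatic C, 4 CH2, 0 C=O]:", float(np.array([2, 4, 0]) @ coef))
```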

  3. Hierarchical prediction of industrial water demand based on refined Laspeyres decomposition analysis.

    Science.gov (United States)

    Shang, Yizi; Lu, Shibao; Gong, Jiaguo; Shang, Ling; Li, Xiaofei; Wei, Yongping; Shi, Hongwang

    2017-12-01

A recent study decomposed the changes in industrial water use into three hierarchies (output, technology, and structure) using a refined Laspeyres decomposition model, and found monotonous and exclusive trends in the output and technology hierarchies. Based on that research, this study proposes a hierarchical prediction approach to forecast future industrial water demand. Three water demand scenarios (high, medium, and low) were then established based on potential future industrial structural adjustments, and used to predict water demand for the structural hierarchy. The predictive results of this approach were compared with results from a grey prediction model (GPM (1, 1)). The comparison shows that the results of the two approaches were basically identical, differing by less than 10%. Taking Tianjin, China, as a case, and using data from 2003-2012, this study predicts that industrial water demand will continuously increase, reaching 580 million m³, 776.4 million m³, and approximately 1.09 billion m³ by the years 2015, 2020 and 2025 respectively. It is concluded that Tianjin will soon face another water crisis if no immediate measures are taken. This study recommends that Tianjin adjust its industrial structure with water savings as the main objective, and actively seek new sources of water to increase its supply.

  4. Resting energy expenditure prediction in recreational athletes of 18-35 years: confirmation of Cunningham equation and an improved weight-based alternative.

    Science.gov (United States)

    ten Haaf, Twan; Weijs, Peter J M

    2014-01-01

Resting energy expenditure (REE) is expected to be higher in athletes because of their relatively high fat free mass (FFM). Therefore, REE predictive equation for recreational athletes may be required. The aim of this study was to validate existing REE predictive equations and to develop a new recreational athlete specific equation. 90 (53 M, 37 F) adult athletes, exercising on average 9.1 ± 5.0 hours a week and 5.0 ± 1.8 times a week, were included. REE was measured using indirect calorimetry (Vmax Encore n29), FFM and FM were measured using air displacement plethysmography. Multiple linear regression analysis was used to develop a new FFM-based and weight-based REE predictive equation. The percentage accurate predictions (within 10% of measured REE), percentage bias, root mean square error and limits of agreement were calculated. Results: The Cunningham equation and the new weight-based equation REE (kJ/d) = 49.940 × weight (kg) + 2459.053 × height (m) - 34.014 × age (y) + 799.257 × sex (M = 1, F = 0) + 122.502 and the new FFM-based equation REE (kJ/d) = 95.272 × FFM (kg) + 2026.161 performed equally well. De Lorenzo's equation predicted REE less accurately, but better than the other generally used REE predictive equations. Harris-Benedict, WHO, Schofield, Mifflin and Owen all showed less than 50% accuracy. For a population of (Dutch) recreational athletes, the REE can accurately be predicted with the existing Cunningham equation. Since body composition measurement is not always possible, and other generally used equations fail, the new weight-based equation is advised for use in sports nutrition.
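
    The two new equations reported above transcribe directly into helper functions (coefficients exactly as quoted in the abstract, REE in kJ/day); the example subject is invented.

```python
def ree_weight_based(weight_kg, height_m, age_y, male):
    """Weight-based equation quoted in the abstract."""
    return (49.940 * weight_kg + 2459.053 * height_m
            - 34.014 * age_y + 799.257 * (1 if male else 0) + 122.502)

def ree_ffm_based(ffm_kg):
    """FFM-based equation quoted in the abstract."""
    return 95.272 * ffm_kg + 2026.161

# Example: a 75 kg, 1.80 m, 28-year-old male recreational athlete with 63 kg fat-free mass.
print(round(ree_weight_based(75, 1.80, 28, True)), "kJ/d")
print(round(ree_ffm_based(63)), "kJ/d")
```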

  5. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Hujun He

    2017-01-01

Full Text Available The prediction and risk classification of collapse is an important issue in the process of highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the influence indexes of the collapse activity and extracted the nine main indexes affecting collapse activity as the discriminant factors of the distance discriminant analysis model (i.e., slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between weakness face and free face, vegetation cover rate, and degree of rock weathering). We employ postearthquake collapse data in relation to construction of the Yingxiu-Wolong highway, Hanchuan County, China, as training samples for analysis. The results were analyzed using the back substitution estimation method, showing high accuracy and no errors, and were the same as the prediction result of the uncertainty measure method. Results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
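
    A minimal sketch of the distance discriminant step described above: a new site, described by its nine-index feature vector, is assigned to the risk class whose training samples are nearest in Mahalanobis distance. The classes, data and per-class covariance choice are hypothetical.

```python
import numpy as np

def mahalanobis_sq(x, mean, cov_inv):
    d = x - mean
    return float(d @ cov_inv @ d)

def classify(x, class_samples):
    """class_samples: dict mapping risk class -> (n_samples, 9) array of index values."""
    best = None
    for label, samples in class_samples.items():
        cov_inv = np.linalg.pinv(np.cov(samples, rowvar=False))
        d2 = mahalanobis_sq(x, samples.mean(axis=0), cov_inv)
        if best is None or d2 < best[1]:
            best = (label, d2)
    return best[0]

rng = np.random.default_rng(3)
training = {"low risk": rng.normal(0, 1, (30, 9)), "high risk": rng.normal(2, 1, (30, 9))}
print(classify(rng.normal(2, 1, 9), training))   # expected: "high risk"
```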

  6. A storm-based CSLE incorporating the modified SCS-CN method for soil loss prediction on the Chinese Loess Plateau

    Science.gov (United States)

    Shi, Wenhai; Huang, Mingbin

    2017-04-01

The Chinese Loess Plateau is one of the most erodible areas in the world. In order to reduce soil and water losses, suitable conservation practices need to be designed. For this purpose, there is an increasing demand for an appropriate model that can accurately predict storm-based surface runoff and soil losses on the Loess Plateau. The Chinese Soil Loss Equation (CSLE) has been widely used in this region to assess soil losses from different land use types. However, the CSLE was intended only to predict the mean annual gross soil loss. In this study, a CSLE was proposed that would be storm-based and that introduced a new rainfall-runoff erosivity factor. A dataset was compiled that comprised measurements of soil losses during individual storms from three runoff-erosion plots in each of three different watersheds in the gully region of the Plateau for 3-7 years in three different time periods (1956-1959; 1973-1980; 2010-13). The accuracy of the soil loss predictions made by the new storm-based CSLE was determined using the data for the six plots in two of the watersheds measured during 165 storm-runoff events. The performance of the storm-based CSLE was further compared with the performance of the storm-based Revised Universal Soil Loss Equation (RUSLE) for the same six plots. During the calibration (83 storms) and validation (82 storms) of the storm-based CSLE, the model efficiency, E, was 87.7% and 88.9%, respectively, while the root mean square error (RMSE) was 2.7 and 2.3 t ha⁻¹, indicating a high degree of accuracy. Furthermore, the storm-based CSLE performed better than the storm-based RUSLE (E: 75.8% and 70.3%; RMSE: 3.8 and 3.7 t ha⁻¹, for the calibration and validation storms, respectively). The storm-based CSLE was then used to predict the soil losses from the three experimental plots in the third watershed. For these predictions, the model parameter values, previously determined by the calibration based on the data from the initial six plots, were used in
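
    The record above quotes a model efficiency E and an RMSE for the calibration and validation storms. As a small illustration, and assuming E is the usual Nash-Sutcliffe-type efficiency, both statistics can be computed as follows; the soil-loss values are placeholders.

```python
import numpy as np

def model_efficiency(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

obs = [1.2, 5.4, 0.8, 12.3, 3.1]    # soil loss per storm, t/ha (hypothetical)
pred = [1.0, 6.1, 1.1, 10.9, 3.5]
print(model_efficiency(obs, pred), rmse(obs, pred))
```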

  7. Methods of developing core collections based on the predicted genotypic value of rice ( Oryza sativa L.).

    Science.gov (United States)

    Li, C T; Shi, C H; Wu, J G; Xu, H M; Zhang, H Z; Ren, Y L

    2004-04-01

The selection of an appropriate sampling strategy and a clustering method is important in the construction of core collections based on predicted genotypic values in order to retain the greatest degree of genetic diversity of the initial collection. In this study, methods of developing rice core collections were evaluated based on the predicted genotypic values for 992 rice varieties with 13 quantitative traits. The genotypic values of the traits were predicted by the adjusted unbiased prediction (AUP) method. Based on the predicted genotypic values, Mahalanobis distances were calculated and employed to measure the genetic similarities among the rice varieties. Six hierarchical clustering methods, including the single linkage, median linkage, centroid, unweighted pair-group average, weighted pair-group average and flexible-beta methods, were combined with random, preferred and deviation sampling to develop 18 core collections of rice germplasm. The results show that the deviation sampling strategy in combination with the unweighted pair-group average method of hierarchical clustering retains the greatest degree of genetic diversity of the initial collection. The core collections sampled using predicted genotypic values had more genetic diversity than those based on phenotypic values.
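
    The clustering step described above (unweighted pair-group average, i.e. UPGMA, on Mahalanobis distances) can be sketched with SciPy as follows; the distance matrix is a random placeholder, and the library choice and group count are assumptions.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
n = 20
upper = np.triu(rng.uniform(1, 10, (n, n)), 1)
dist = upper + upper.T                       # symmetric stand-in for Mahalanobis distances

Z = linkage(squareform(dist, checks=False), method="average")   # UPGMA
groups = fcluster(Z, t=5, criterion="maxclust")                 # e.g., sample one accession per group
print(groups)
```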

  8. Prediction of human core body temperature using non-invasive measurement methods.

    Science.gov (United States)

    Niedermann, Reto; Wyss, Eva; Annaheim, Simon; Psikuta, Agnes; Davey, Sarah; Rossi, René Michel

    2014-01-01

The measurement of core body temperature is an efficient method for monitoring heat stress amongst workers in hot conditions. However, invasive measurement of core body temperature (e.g. rectal, intestinal, oesophageal temperature) is impractical for such applications. Therefore, the aim of this study was to define relevant non-invasive measures to predict core body temperature under various conditions. We conducted two human subject studies with different experimental protocols, different environmental temperatures (10 °C, 30 °C) and different subjects. In both studies the same non-invasive measurement methods (skin temperature, skin heat flux, heart rate) were applied. A principal component analysis was conducted to extract independent factors, which were then used in a linear regression model. We identified six parameters (three skin temperatures, two skin heat fluxes and heart rate), which were included for the calculation of two factors. The predictive value of these factors for core body temperature was evaluated by a multiple regression analysis. The calculated root mean square deviation (rmsd) was in the range from 0.28 °C to 0.34 °C for all environmental conditions. These errors are similar to previous models using non-invasive measures to predict core body temperature. The results from this study illustrate that multiple physiological parameters (e.g. skin temperature and skin heat fluxes) are needed to predict core body temperature. In addition, the physiological measurements chosen in this study and the algorithm defined in this work are potentially applicable as real-time core body temperature monitoring to assess health risk in broad range of working conditions.
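
    A minimal sketch of the modelling chain described above: principal component analysis of the six non-invasive signals followed by a linear regression of core temperature on the extracted factors. scikit-learn and the synthetic physiological data are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 6))        # 3 skin temperatures, 2 skin heat fluxes, heart rate
t_core = 37.0 + 0.3 * X[:, 0] + 0.2 * X[:, 4] + rng.normal(0, 0.2, size=500)

model = make_pipeline(PCA(n_components=2), LinearRegression())
model.fit(X, t_core)
pred = model.predict(X)
print("rmsd:", float(np.sqrt(np.mean((pred - t_core) ** 2))), "deg C")
```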

  9. The prediction of BRDFs from surface profile measurements

    International Nuclear Information System (INIS)

    Church, E.L.; Takacs, P.Z.; Leonard, T.A.

    1989-01-01

    This paper discusses methods of predicting the BRDF of smooth surfaces from profile measurements of their surface finish. The conversion of optical profile data to the BRDF at the same wavelength is essentially independent of scattering models, while the conversion of mechanical measurements, and wavelength scaling in general, are model dependent. Procedures are illustrated for several surfaces, including two from the recent HeNe BRDF round robin, and results are compared with measured data. Reasonable agreement is found except for surfaces which involve significant scattering from isolated surface defects which are poorly sampled in the profile data

  10. Customizing Countermeasure Prescriptions using Predictive Measures of Sensorimotor Adaptability

    Science.gov (United States)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Miller, C. A.; Batson, C. D.; Wood, S. J.; Guined, J. R.; Cohen, H. S.; Buccello-Stout, R.; DeDios, Y. E.; hide

    2014-01-01

Astronauts experience sensorimotor disturbances during the initial exposure to microgravity and during the readaptation phase following a return to a gravitational environment. These alterations may lead to disruption in the ability to perform mission critical functional tasks during and after these gravitational transitions. Astronauts show significant inter-subject variation in adaptive capability following gravitational transitions. The ability to predict the manner and degree to which each individual astronaut will be affected would improve the effectiveness of a countermeasure comprised of a training program designed to enhance sensorimotor adaptability. Due to this inherent individual variability we need to develop predictive measures of sensorimotor adaptability that will allow us to predict, before actual space flight, which crewmember will experience challenges in adaptive capacity. Thus, obtaining this information will allow us to design and implement better sensorimotor adaptability training countermeasures that will be customized for each crewmember's unique adaptive capabilities. Therefore the goals of this project are to: 1) develop a set of predictive measures capable of identifying individual differences in sensorimotor adaptability, and 2) use this information to design sensorimotor adaptability training countermeasures that are customized for each crewmember's individual sensorimotor adaptive characteristics. To achieve these goals we are currently pursuing the following specific aims: Aim 1: Determine whether behavioral metrics of individual sensory bias predict sensorimotor adaptability. For this aim, subjects perform tests that delineate individual sensory biases in tests of visual, vestibular, and proprioceptive function. Aim 2: Determine if individual capability for strategic and plastic-adaptive responses predicts sensorimotor adaptability. For this aim, each subject's strategic and plastic-adaptive motor learning abilities are assessed using

  11. Accuracy of depolarization and delay spread predictions using advanced ray-based modeling in indoor scenarios

    Directory of Open Access Journals (Sweden)

    Mani Francesco

    2011-01-01

    Full Text Available Abstract This article investigates the prediction accuracy of an advanced deterministic propagation model in terms of channel depolarization and frequency selectivity for indoor wireless propagation. In addition to specular reflection and diffraction, the developed ray tracing tool considers penetration through dielectric blocks and/or diffuse scattering mechanisms. The sensitivity and prediction accuracy analysis is based on two measurement campaigns carried out in a warehouse and an office building. It is shown that the implementation of diffuse scattering into RT significantly increases the accuracy of the cross-polar discrimination prediction, whereas the delay-spread prediction is only marginally improved.

  12. Effect of Urtica dioica on morphometric indices of kidney in streptozotocin diabetic rats--a stereological study.

    Science.gov (United States)

    Golalipour, Mohammad Jafar; Gharravi, Anneh Mohammad; Ghafari, Sorya; Afshar, Mohammad

    2007-11-01

    The aim of the present study was to investigate the effect of Urtica dioica on morphometric indices of the kidney in diabetic rats. Thirty male adult albino Wistar rats weighing 125-175 g were divided into control, diabetic and Urtica dioica treatment groups. In the treatment group, diabetic rats received 100 mg kg(-1) daily of a hydroalcoholic extract of U. dioica intraperitoneally for 4 weeks. After the animals had been sacrificed, the kidneys were removed, fixed in formaldehyde, cut horizontally into 1 mm slices, processed and stained with H and E. The stereological study was performed using a light microscope, with the image projected onto a table using Olysa software. The Cavalieri principle was used to estimate the volume of the cortex, medulla and whole kidney. All grouped data were statistically evaluated using Student's t-test and expressed as the mean +/- SE. The ratio of kidney weight/body weight in the diabetes (0.51) and diabetes-extract (0.67) groups was higher than in the control group (0.42). The ratio of kidney volume/body weight in the diabetes (350) and diabetes-extract (348) groups was higher than in the control group (323). The volume ratio of cortex/medulla in the diabetes-extract group (1.65) was higher than in the control (1.34) and diabetes (1.33) groups. Glomerular area and diameter and proximal tubule diameter in the diabetes-extract group were higher than in the control and diabetes groups. This study revealed that Urtica dioica has no effect on renal morphometric indices in induced diabetic rats.
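
    The Cavalieri estimator used above combines the section spacing, the area represented by one grid point, and the point counts summed over all sections. A minimal sketch, assuming hypothetical counts and grid settings (not the values used in the study):

    ```python
    # Cavalieri principle: V ~= t * a(p) * sum(P_i), where t is the distance
    # between sections, a(p) the area associated with one grid point, and
    # P_i the number of test points hitting the structure on section i.
    def cavalieri_volume(point_counts, section_spacing_mm, area_per_point_mm2):
        return section_spacing_mm * area_per_point_mm2 * sum(point_counts)

    # Hypothetical point counts from 1 mm kidney slices and a 0.5 mm^2 grid.
    counts = [12, 18, 25, 27, 22, 15, 9]
    print(cavalieri_volume(counts, 1.0, 0.5), "mm^3")
    ```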

  13. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  14. Security Measurement for Unknown Threats Based on Attack Preferences

    Directory of Open Access Journals (Sweden)

    Lihua Yin

    2018-01-01

    Full Text Available Security measurement matters to every stakeholder in network security. It provides security practitioners with exact security awareness. However, most existing work is not applicable to unknown threats. What is more, existing efforts on security metrics mainly focus on the ease of certain attacks from a theoretical point of view, ignoring the “likelihood of exploitation.” To help administrators gain a better understanding, we analyze the behavior of attackers who exploit zero-day vulnerabilities and predict their attack timing. Based on the prediction, we propose a method of security measurement. In detail, we compute the optimal attack timing from the perspective of the attacker, using a long-term game to estimate the risk of being found, and then choose the optimal timing based on the risk and profit. We design a learning strategy to model the information sharing mechanism among multiple attackers and use a spatial structure to model the long-term process. After calculating the Nash equilibrium for each subgame, we consider the likelihood of being attacked for each node as the security metric result. The experimental results show the efficiency of our approach.

  15. Measuring and Predicting Tag Importance for Image Retrieval.

    Science.gov (United States)

    Li, Shangwen; Purushotham, Sanjay; Chen, Chen; Ren, Yuzhuo; Kuo, C-C Jay

    2017-12-01

    Textual data such as tags and sentence descriptions are combined with visual cues to reduce the semantic gap for image retrieval applications in today's Multimodal Image Retrieval (MIR) systems. However, all tags are treated as equally important in these systems, which may result in misalignment between visual and textual modalities during MIR training. This will further lead to degraded retrieval performance at query time. To address this issue, we investigate the problem of tag importance prediction, where the goal is to automatically predict the tag importance and use it in image retrieval. To achieve this, we first propose a method to measure the relative importance of object and scene tags from image sentence descriptions. Using this as the ground truth, we present a tag importance prediction model that jointly exploits visual, semantic and context cues. The Structural Support Vector Machine (SSVM) formulation is adopted to ensure efficient training of the prediction model. Then, Canonical Correlation Analysis (CCA) is employed to learn the relation between the image visual feature and tag importance to obtain robust retrieval performance. Experimental results on three real-world datasets show a significant performance improvement of the proposed MIR with Tag Importance Prediction (MIR/TIP) system over other MIR systems.
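
    The retrieval step of this pipeline relates image visual features to (predicted) tag importance through Canonical Correlation Analysis. A minimal sketch of that step, with random placeholder matrices standing in for the real visual descriptors and SSVM-predicted importance vectors:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(1)

    # Placeholders: per-image visual features and per-image tag-importance vectors.
    visual = rng.normal(size=(300, 64))          # e.g. global visual descriptors
    tag_importance = rng.normal(size=(300, 10))  # predicted importance per tag

    # CCA learns a shared latent space relating the two modalities; retrieval
    # can then be performed by comparing images in that space.
    cca = CCA(n_components=5).fit(visual, tag_importance)
    U, V = cca.transform(visual, tag_importance)
    print(U.shape, V.shape)  # (300, 5) (300, 5)
    ```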

  16. Misalignment Effect Function Measurement for Oblique Rotation Axes: Counterintuitive Predictions and Theoretical Extensions

    Science.gov (United States)

    Ellis, Stephen R.; Adelstein, Bernard D.; Yeom, Kiwon

    2013-01-01

    The Misalignment Effect Function (MEF) describes the decrement in manual performance associated with a rotation between operators' visual display frame of reference and that of their manual control. It has now been empirically determined for rotation axes oblique to canonical body axes and is compared with the MEF previously measured for rotations about canonical axes. A targeting rule, called the Secant Rule, based on these earlier measurements is derived from a hypothetical process and shown to describe some of the data from three previous experiments. It explains the motion trajectories determined for rotations less than 65° in purely kinematic terms, without the need to appeal to a mental rotation process. Further analysis of this rule in three dimensions, applied to oblique rotation axes, leads to the somewhat surprising expectation that the difficulty posed by rotational misalignment should increase as the required movement gets shorter. This prediction is confirmed. The geometry underlying this rule also suggests analytic extensions for predicting more generally the difficulty of making movements in arbitrary directions subject to arbitrary misalignments.

  17. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
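
    The bootstrap-plus-residual-noise scheme described here can be illustrated with a toy continuous-endpoint model. The linear dose-response form, the synthetic cohort and the plan value x_new below are assumptions for illustration only, not the paper's salivary-function model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic cohort: a dose metric x and a continuous outcome y.
    x = rng.uniform(10, 60, 80)
    y = 1.0 - 0.012 * x + rng.normal(0, 0.08, 80)

    def fit(xs, ys):
        # simple linear outcome model standing in for the dose-response model
        return np.polyfit(xs, ys, 1)

    resid_sd = np.std(y - np.polyval(fit(x, y), x))  # residual ("noise") estimate

    # Bootstrap the fit, then predict the outcome for one new treatment plan.
    x_new, preds = 45.0, []
    for _ in range(2000):
        idx = rng.integers(0, len(x), len(x))
        coef = fit(x[idx], y[idx])
        preds.append(np.polyval(coef, x_new) + rng.normal(0, resid_sd))

    # Histogram summary (2.5th, 50th, 97.5th percentiles) for this plan.
    print(np.percentile(preds, [2.5, 50, 97.5]))
    ```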

  18. Animal-based measures for welfare assessment

    Directory of Open Access Journals (Sweden)

    Agostino Sevi

    2010-01-01

    Full Text Available Animal welfare assessment cannot be carried out irrespective of measures taken on animals. Indeed, housing parameters related to structures, design and micro-environment, even if reliable and easier to take, can only identify conditions which could be detrimental to animal welfare, but cannot predict poor welfare in animals per se. Welfare assessment through animal-based measures is rather complex, given that animals' responses to stressful conditions largely depend on the nature, length and intensity of challenges and on the physiological status, age, genetic susceptibility and previous experience of the animals. Welfare assessment requires a multi-disciplinary approach and the monitoring of productive, ethological, endocrine, immunological and pathological parameters to be exhaustive and reliable. So many measures are needed because stresses can act only on some of the mentioned parameters, or on all of them but at different times and to different degrees. From this point of view, the main aim of research is to find feasible and most responsive indicators of poor animal welfare. In recent decades, studies have focused on the following parameters for animal welfare assessment: indexes of biological efficiency, responses to behavioral tests, cortisol secretion, neutrophil to lymphocyte ratio, lymphocyte proliferation, production of antigen-specific IgG and cytokine release, somatic cell count and acute phase proteins. Recently, many studies have addressed reducing the handling and constraint of animals when taking measures to be used in welfare assessment, since such procedures can induce stress in animals and undermine the reliability of the measures taken. The range of animal-based measures for welfare assessment is much wider under experimental conditions than at the on-farm level. In on-farm welfare monitoring the main aim is to find feasible measures of proved validity and reliability

  19. Soil pH Errors Propagation from Measurements to Spatial Predictions - Cost Benefit Analysis and Risk Assessment Implications for Practitioners and Modelers

    Science.gov (United States)

    Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.

    2017-12-01

    The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of the impacts of errors and uncertainty on cost-benefit analysis and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the error size from different sources and their implications with respect to management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition based on the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 of lime, which translates into 111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into a 555-1,111 investment that

  20. Comparing a medical records-based and a claims-based index for measuring comorbidity in patients with lung or colon cancer.

    Science.gov (United States)

    Kehl, Kenneth L; Lamont, Elizabeth B; McNeil, Barbara J; Bozeman, Samuel R; Kelley, Michael J; Keating, Nancy L

    2015-05-01

    Ascertaining comorbid conditions in cancer patients is important for research and clinical quality measurement, and is particularly important for understanding care and outcomes for older patients and those with multi-morbidity. We compared the medical records-based ACE-27 index and the claims-based Charlson index in predicting receipt of therapy and survival for lung and colon cancer patients. We calculated the Charlson index using administrative data and the ACE-27 score using medical records for Veterans Affairs patients diagnosed with stage I/II non-small cell lung or stage III colon cancer from January 2003 to December 2004. We compared the proportion of patients identified by each index as having any comorbidity. We used multivariable logistic regression to ascertain the predictive power of each index regarding delivery of guideline-recommended therapies and two-year survival, comparing the c-statistic and the Akaike information criterion (AIC). Overall, 97.2% of lung and 90.9% of colon cancer patients had any comorbidity according to the ACE-27 index, versus 59.5% and 49.7%, respectively, according to the Charlson. Multivariable models including the ACE-27 index outperformed Charlson-based models when assessing receipt of guideline-recommended therapies, with higher c-statistics and lower AICs. Neither index was clearly superior in prediction of two-year survival. The ACE-27 index measured using medical records captured more comorbidity and outperformed the Charlson index measured using administrative data for predicting receipt of guideline-recommended therapies, demonstrating the potential value of more detailed comorbidity data. However, the two indices had relatively similar performance when predicting survival. Copyright © 2015 Elsevier Inc. All rights reserved.
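
    The model comparison described here (logistic models scored by the c-statistic and the AIC) can be sketched as follows. The data are synthetic and the single-predictor models are a simplification of the study's multivariable models; statsmodels and scikit-learn are used purely for illustration.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n = 500
    ace27 = rng.integers(0, 4, n)      # stand-in for ACE-27 comorbidity grade
    charlson = rng.integers(0, 6, n)   # stand-in for Charlson score
    p = 1 / (1 + np.exp(-(1.0 - 0.6 * ace27)))
    treated = rng.binomial(1, p)       # synthetic "received guideline therapy"

    def fit_and_score(score):
        X = sm.add_constant(score.astype(float))
        fit = sm.Logit(treated, X).fit(disp=0)
        return roc_auc_score(treated, fit.predict(X)), fit.aic

    for name, score in [("ACE-27", ace27), ("Charlson", charlson)]:
        c, aic = fit_and_score(score)
        print(f"{name}: c-statistic={c:.3f}, AIC={aic:.1f}")
    ```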

  1. Predicting story goodness performance from cognitive measures following traumatic brain injury.

    Science.gov (United States)

    Lê, Karen; Coelho, Carl; Mozeiko, Jennifer; Krueger, Frank; Grafman, Jordan

    2012-05-01

    This study examined the prediction of performance on measures of the Story Goodness Index (SGI; Lê, Coelho, Mozeiko, & Grafman, 2011) from executive function (EF) and memory measures following traumatic brain injury (TBI). It was hypothesized that EF and memory measures would significantly predict SGI outcomes. One hundred sixty-seven individuals with TBI participated in the study. Story retellings were analyzed using the SGI protocol. Three cognitive measures--Delis-Kaplan Executive Function System (D-KEFS; Delis, Kaplan, & Kramer, 2001) Sorting Test, Wechsler Memory Scale--Third Edition (WMS-III; Wechsler, 1997) Working Memory Primary Index (WMI), and WMS-III Immediate Memory Primary Index (IMI)--were entered into a multiple linear regression model for each discourse measure. Two sets of regression analyses were performed, the first with the Sorting Test as the first predictor and the second with it as the last. The first set of regression analyses identified the Sorting Test and IMI as the only significant predictors of performance on measures of the SGI. The second set identified all measures as significant predictors when evaluating each step of the regression function. The cognitive variables predicted performance on the SGI measures, although there were differences in the amount of explained variance. The results (a) suggest that storytelling ability draws on a number of underlying skills and (b) underscore the importance of using discrete cognitive tasks rather than broad cognitive indices to investigate the cognitive substrates of discourse.

  2. RandomSpot: A web-based tool for systematic random sampling of virtual slides.

    Science.gov (United States)

    Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E

    2015-01-01

    This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
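
    The sampling step RandomSpot automates — an equidistant point grid with a single uniformly random offset placed over a region of interest — can be sketched as below. The rectangular ROI and the 500 µm spacing are illustrative assumptions; the tool itself works on arbitrary annotated regions of virtual slides.

    ```python
    import numpy as np

    def srs_points(x0, y0, x1, y1, spacing, rng=None):
        """Equidistant point grid over a rectangular ROI with one random
        offset per axis, so every location has equal sampling probability."""
        rng = rng or np.random.default_rng()
        dx, dy = rng.uniform(0, spacing, 2)   # random start within one grid cell
        xs = np.arange(x0 + dx, x1, spacing)
        ys = np.arange(y0 + dy, y1, spacing)
        return [(x, y) for x in xs for y in ys]

    # e.g. a point every 500 um over a 5000 x 3000 um region of a virtual slide
    pts = srs_points(0, 0, 5000, 3000, 500)
    print(len(pts), pts[:3])
    ```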

  3. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides...... reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques....

  4. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
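
    For reference, the model-free side of this account reduces to a value update driven by a reward prediction error. A minimal Rescorla-Wagner style sketch; the learning rate and reward sequence are arbitrary illustrations, not parameters fitted in the study.

    ```python
    # Model-free update: the value of the chosen option is nudged toward the
    # obtained reward by a learning rate times the reward prediction error (RPE).
    def model_free_update(value, reward, alpha=0.1):
        rpe = reward - value          # reward prediction error
        return value + alpha * rpe, rpe

    v = 0.0
    for r in [1, 1, 0, 1, 0, 0, 1]:   # illustrative reward sequence
        v, rpe = model_free_update(v, r)
        print(f"value={v:.3f}  RPE={rpe:+.3f}")
    ```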

  5. [Prediction of the molecular response to perturbations from single cell measurements].

    Science.gov (United States)

    Remacle, Françoise; Levine, Raphael D

    2014-12-01

    The response of protein signaling networks to perturbations is analysed from single cell measurements. This experimental approach allows the fluctuations in protein expression levels from cell to cell to be characterized. The analysis is based on an information theoretic approach grounded in thermodynamics, leading to a quantitative version of Le Chatelier's principle which allows the molecular response to be predicted. Two systems are investigated: human macrophages subjected to lipopolysaccharide challenge, analogous to the immune response against Gram-negative bacteria, and the response of the proteins involved in the mTOR signaling network of GBM cancer cells to changes in partial oxygen pressure. © 2014 médecine/sciences – Inserm.

  6. A comparison on radar range profiles between in-flight measurements and RCS-predictions

    NARCIS (Netherlands)

    Heiden, R. van der; Ewijk, L.J. van; Groen, F.C.A.

    1998-01-01

    The validation of Radar Cross Section (RCS) prediction techniques against real measurements is crucial to acquire confidence in predictions when measurements are not available. In this paper we present the results of a comparison on one-dimensional signatures, i.e. radar range profiles. The profiles

  7. Predictions and measurements of residual stress in repair welds in plates

    Energy Technology Data Exchange (ETDEWEB)

    Brown, T.B. [Mitsui Babcock Energy Limited, Technology and Engineering, Porterfield Road, Renfrew, PA4 8DJ, Scotland (United Kingdom)]. E-mail: bbrown@mitsuibabcock.com; Dauda, T.A. [Mitsui Babcock Energy Limited, Technology and Engineering, Porterfield Road, Renfrew, PA4 8DJ, Scotland (United Kingdom); Truman, C.E. [Department of Mechanical Engineering, University of Bristol, Bristol BS8 1TR, England (United Kingdom); Smith, D.J. [Department of Mechanical Engineering, University of Bristol, Bristol BS8 1TR (United Kingdom); Memhard, D. [Fraunhofer-Institut fuer Werkstoffmechanik, Freiburg (Germany); Pfeiffer, W. [Fraunhofer-Institut fuer Werkstoffmechanik, Freiburg (Germany)

    2006-11-15

    This paper presents the work, from the European Union FP-5 project ELIXIR, on a series of rectangular repair welds in P275 and S690 steels to validate the numerical modelling techniques used in the determination of the residual stresses generated during the repair process. The plates were 1,000 mm by 800 mm with thicknesses of 50 and 100 mm. The repair welds were 50%, 75% and 100% through the plate thickness. The repair welds were modelled using the finite element method to make predictions of the as-welded residual stress distributions. These predictions were compared with surface-strain measurements made on the parent plates during welding and found to be in good agreement. Through-thickness residual stress measurements were obtained from the test plates through, and local to, the weld repairs using the deep hole drilling technique. Comparisons between the measurements and the finite element predictions generally showed good agreement, thus providing confidence in the method.

  8. Predictions and measurements of residual stress in repair welds in plates

    International Nuclear Information System (INIS)

    Brown, T.B.; Dauda, T.A.; Truman, C.E.; Smith, D.J.; Memhard, D.; Pfeiffer, W.

    2006-01-01

    This paper presents the work, from the European Union FP-5 project ELIXIR, on a series of rectangular repair welds in P275 and S690 steels to validate the numerical modelling techniques used in the determination of the residual stresses generated during the repair process. The plates were 1,000 mm by 800 mm with thicknesses of 50 and 100 mm. The repair welds were 50%, 75% and 100% through the plate thickness. The repair welds were modelled using the finite element method to make predictions of the as-welded residual stress distributions. These predictions were compared with surface-strain measurements made on the parent plates during welding and found to be in good agreement. Through-thickness residual stress measurements were obtained from the test plates through, and local to, the weld repairs using the deep hole drilling technique. Comparisons between the measurements and the finite element predictions generally showed good agreement, thus providing confidence in the method

  9. Evaluation of Monticello Nuclear Power Plant, Environmental Impact Prediction, based on monitoring programs

    International Nuclear Information System (INIS)

    Gore, K.L.; Thomas, J.M.; Kannberg, L.D.; Watson, D.G.

    1976-11-01

    This report evaluates quantitatively the nonradiological environmental monitoring programs at Monticello Nuclear Generating Plant. The general objective of the study is to assess the effectiveness of monitoring programs in the measurement of environmental impacts. Specific objectives include the following: (1) Assess the validity of environmental impact predictions made in the Environmental Statement by analysis of nonradiological monitoring data; (2) evaluate the general adequacy of environmental monitoring programs for detecting impacts and their responsiveness to Technical Specifications objectives; (3) assess the adequacy of preoperational monitoring programs in providing a sufficient data base for evaluating operational impacts; (4) identify possible impacts that were not predicted in the environmental statement and identify monitoring activities that need to be added, modified or deleted; and (5) assist in identifying environmental impacts, monitoring methods, and measurement problems that need additional research before quantitative predictions can be attempted. Preoperational as well as operational monitoring data were examined to test the usefulness of baseline information in evaluating impacts. This included an examination of the analytical methods used to measure ecological and physical parameters, and an assessment of sampling periodicity and sensitivity where appropriate data were available

  10. Groundwater level prediction of landslide based on classification and regression tree

    Directory of Open Access Journals (Sweden)

    Yannan Zhao

    2016-09-01

    Full Text Available According to groundwater level monitoring data from the Shuping landslide in the Three Gorges Reservoir area, and based on the response relationship between influential factors such as rainfall and reservoir level and the change of groundwater level, the influential factors of groundwater level were selected. Then the classification and regression tree (CART) model was constructed using this subset and used to predict the groundwater level. Through verification, the predictive results for the test sample were consistent with the actually measured values, with a mean absolute error and relative error of 0.28 m and 1.15%, respectively. For comparison, a support vector machine (SVM) model constructed using the same set of factors gave a mean absolute error and relative error of 1.53 m and 6.11%, respectively. This indicates that the CART model not only has better fitting and generalization ability, but also strong advantages in the analysis of landslide groundwater dynamic characteristics and the screening of important variables. It is an effective method for the prediction of groundwater level in landslides.
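
    A minimal sketch of the CART step — a regression tree predicting groundwater level from rainfall and reservoir level — is given below. The synthetic data, tree depth and train/test split are assumptions for illustration; the Shuping monitoring data are not reproduced here.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(4)

    # Illustrative predictors: antecedent rainfall (mm) and reservoir level (m).
    X = np.column_stack([rng.uniform(0, 200, 300), rng.uniform(145, 175, 300)])
    gw_level = 0.02 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, 300)

    X_train, X_test = X[:250], X[250:]
    y_train, y_test = gw_level[:250], gw_level[250:]

    cart = DecisionTreeRegressor(max_depth=5).fit(X_train, y_train)
    pred = cart.predict(X_test)
    print("MAE:", mean_absolute_error(y_test, pred))
    print("feature importances:", cart.feature_importances_)
    ```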

  11. Some Comparisons of Measured and Predicted Primary Radiation Levels in the Aagesta Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Aalto, E; Sandlin, R; Krell, Aa

    1968-05-15

    Neutron fluxes and gamma exposure rates in the primary shields of the Aagesta nuclear plant have been measured and the results compared with values predicted during shield design, and with values obtained later by the NRN bulk shielding code. The input data for the problems are given. The radial predictions are conservative by a factor of not more than 2 close to the reactor and by an unknown, higher factor further out. The conservatism is explainable by the differences between the true local conditions and core power distributions and those assumed in the predictions. The axial flux levels based on streaming calculations are found to agree quite well with the estimated values. The conservatism here is not so large and it seems to be necessary to be very careful when handling streaming problems. The experience gained shows that a power plant is less suitable for studying the accuracy of the shield design codes as such, but the practical results from the combined application of massive shield codes and void streaming predictions to complicated problems give information about the true degree of conservatism present.

  12. Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.

    Science.gov (United States)

    Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A

    2017-05-01

    Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring the study design ensure spatial compatibility, that is, monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated date of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m3 difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.

  13. Hierarchical anatomical brain networks for MCI prediction: revisiting volumetric measures.

    Directory of Open Access Journals (Sweden)

    Luping Zhou

    Full Text Available Owing to its clinical accessibility, T1-weighted MRI (Magnetic Resonance Imaging) has been extensively studied in the past decades for prediction of Alzheimer's disease (AD) and mild cognitive impairment (MCI). The volumes of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) are the most commonly used measurements, resulting in many successful applications. It has been widely observed that disease-induced structural changes may not occur at isolated spots, but in several inter-related regions. Therefore, for better characterization of brain pathology, we propose in this paper a means to extract inter-regional correlation based features from local volumetric measurements. Specifically, our approach involves constructing an anatomical brain network for each subject, with each node representing a Region of Interest (ROI) and each edge representing the Pearson correlation of tissue volumetric measurements between ROI pairs. As second order volumetric measurements, network features are more descriptive but also more sensitive to noise. To overcome this limitation, a hierarchy of ROIs is used to suppress noise at different scales. Pairwise interactions are considered not only for ROIs with the same scale in the same layer of the hierarchy, but also for ROIs across different scales in different layers. To address the high dimensionality problem resulting from the large number of network features, a supervised dimensionality reduction method is further employed to embed a selected subset of features into a low dimensional feature space, while at the same time preserving discriminative information. We demonstrate with experimental results the efficacy of this embedding strategy in comparison with some other commonly used approaches. In addition, although the proposed method can be easily generalized to incorporate other metrics of regional similarities, the benefits of using Pearson correlation in our application are reinforced by the experimental
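
    The core feature construction — pairwise Pearson correlations of local volumetric measurements between ROIs, taken as edges of an anatomical network — can be sketched for a single subject as below. The ROI count and the three volumetric measures per ROI are assumptions; the hierarchical multi-scale extension and supervised embedding are not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # One subject: volumetric measurements per ROI (e.g. GM, WM, CSF volumes).
    n_rois, n_measures = 90, 3
    roi_volumes = rng.normal(size=(n_rois, n_measures))

    # Anatomical "network": edge weight = Pearson correlation of the volumetric
    # measurements between each ROI pair; the upper triangle is the feature set.
    corr = np.corrcoef(roi_volumes)        # n_rois x n_rois correlation matrix
    iu = np.triu_indices(n_rois, k=1)
    features = corr[iu]                    # second-order network features
    print(features.shape)                  # (4005,) for 90 ROIs
    ```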

  14. PubMed-supported clinical term weighting approach for improving inter-patient similarity measure in diagnosis prediction.

    Science.gov (United States)

    Chan, Lawrence Wc; Liu, Ying; Chan, Tao; Law, Helen Kw; Wong, S C Cesar; Yeung, Andy Ph; Lo, K F; Yeung, S W; Kwok, K Y; Chan, William Yl; Lau, Thomas Yh; Shyu, Chi-Ren

    2015-06-02

    Similarity-based retrieval of Electronic Health Records (EHRs) from large clinical information systems provides physicians with evidence support in making diagnoses or referring examinations for suspected cases. Clinical terms in EHRs represent high-level conceptual information, and a similarity measure established on these terms reflects the chance of inter-patient disease co-occurrence. The assumption that clinical terms are equally relevant to a disease is unrealistic and reduces prediction accuracy. Here we propose a term weighting approach supported by the PubMed search engine to address this issue. We collected and studied 112 abdominal computed tomography imaging examination reports from four hospitals in Hong Kong. Clinical terms, which are the image findings related to hepatocellular carcinoma (HCC), were extracted from the reports. Through two systematic PubMed search methods, generic and specific term weightings were established by estimating the conditional probabilities of clinical terms given HCC. Each report was characterized by an ontological feature vector, and there were a total of 6216 vector pairs. We optimized the modified direction cosine (mDC) with respect to a regularization constant embedded into the feature vector. Equal, generic and specific term weighting approaches were applied to measure the similarity of each pair, and their performances for predicting inter-patient co-occurrence of HCC diagnoses were compared using Receiver Operating Characteristic (ROC) analysis. The areas under the curves (AUROCs) of similarity scores based on equal, generic and specific term weighting approaches were 0.735, 0.728 and 0.743 respectively (p PubMed. Our findings suggest that the optimized similarity measure with specific term weighting applied to EHRs can significantly improve the accuracy of predicting inter-patient co-occurrence of diagnoses when compared with equal and generic term weighting approaches.
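
    The weighting idea — terms contribute to inter-patient similarity in proportion to their estimated relevance to the disease — can be sketched as a weighted cosine similarity between binary term vectors. This is a simplified stand-in, not the paper's exact modified direction cosine (which also involves a regularization constant); the terms and weights below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical clinical terms and weights approximating P(term | disease),
    # e.g. estimated from PubMed hit counts (values are illustrative only).
    terms = ["arterial enhancement", "washout", "cirrhosis", "portal vein thrombosis"]
    w = np.array([0.7, 0.6, 0.4, 0.2])

    def weighted_cosine(a, b, weights):
        """Cosine similarity of two binary term vectors under per-term weights."""
        aw, bw = a * weights, b * weights
        denom = np.linalg.norm(aw) * np.linalg.norm(bw)
        return 0.0 if denom == 0 else float(np.dot(aw, bw) / denom)

    report_a = np.array([1, 1, 0, 0])   # terms present in report A
    report_b = np.array([1, 0, 1, 0])   # terms present in report B
    print(weighted_cosine(report_a, report_b, w))
    ```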

  15. Prediction of DVH parameter changes due to setup errors for breast cancer treatment based on 2D portal dosimetry

    International Nuclear Information System (INIS)

    Nijsten, S. M. J. J. G.; Elmpt, W. J. C. van; Mijnheer, B. J.; Minken, A. W. H.; Persoon, L. C. G. G.; Lambin, P.; Dekker, A. L. A. J.

    2009-01-01

    Electronic portal imaging devices (EPIDs) are increasingly used for portal dosimetry applications. In our department, EPIDs are clinically used for two-dimensional (2D) transit dosimetry. Predicted and measured portal dose images are compared to detect dose delivery errors caused for instance by setup errors or organ motion. The aim of this work is to develop a model to predict dose-volume histogram (DVH) changes due to setup errors during breast cancer treatment using 2D transit dosimetry. First, correlations between DVH parameter changes and 2D gamma parameters are investigated for different simulated setup errors, which are described by a binomial logistic regression model. The model calculates the probability that a DVH parameter changes more than a specific tolerance level and uses several gamma evaluation parameters for the planning target volume (PTV) projection in the EPID plane as input. Second, the predictive model is applied to clinically measured portal images. Predicted DVH parameter changes are compared to calculated DVH parameter changes using the measured setup error resulting from a dosimetric registration procedure. Statistical accuracy is investigated by using receiver operating characteristic (ROC) curves and values for the area under the curve (AUC), sensitivity, specificity, positive and negative predictive values. Changes in the mean PTV dose larger than 5%, and changes in V90 and V95 larger than 10% are accurately predicted based on a set of 2D gamma parameters. Most pronounced changes in the three DVH parameters are found for setup errors in the lateral-medial direction. AUC, sensitivity, specificity, and negative predictive values were between 85% and 100% while the positive predictive values were lower but still higher than 54%. Clinical predictive value is decreased due to the occurrence of patient rotations or breast deformations during treatment, but the overall reliability of the predictive model remains high. Based on our

  16. x-y-recording in transmission electron microscopy. A versatile and inexpensive interface to personal computers with application to stereology.

    Science.gov (United States)

    Rickmann, M; Siklós, L; Joó, F; Wolff, J R

    1990-09-01

    An interface for IBM XT/AT-compatible computers is described which has been designed to read the actual specimen stage position of electron microscopes. The complete system consists of (i) optical incremental encoders attached to the x- and y-stage drivers of the microscope, (ii) two keypads for operator input, (iii) an interface card fitted to the bus of the personal computer, (iv) a standard configuration IBM XT (or compatible) personal computer optionally equipped with a (v) HP Graphic Language controllable colour plotter. The small size of the encoders and their connection to the stage drivers by simple ribbed belts allows an easy adaptation of the system to most electron microscopes. Operation of the interface card itself is supported by any high-level language available for personal computers. By the modular concept of these languages, the system can be customized to various applications, and no computer expertise is needed for actual operation. The present configuration offers an inexpensive attachment, which covers a wide range of applications from a simple notebook to high-resolution (200-nm) mapping of tissue. Since section coordinates can be processed in real-time, stereological estimations can be derived directly "on microscope". This is exemplified by an application in which particle numbers were determined by the disector method.

  17. Immunocytochemical and stereological analysis of GABA(B) receptor subunit expression in the rat vestibular nucleus following unilateral vestibular deafferentation.

    Science.gov (United States)

    Zhang, Rong; Ashton, John; Horii, Arata; Darlington, Cynthia L; Smith, Paul F

    2005-03-10

    The process of behavioral recovery that occurs following damage to one vestibular labyrinth, vestibular compensation, has been attributed in part to a down-regulation of GABA(B) receptors in the vestibular nucleus complex (VNC) ipsilateral to the lesion, which could potentially reduce commissural inhibition from the contralateral VNC. In this study, we tested the possibility that this occurs through a decrease in the expression of either the GABA(B1) or GABA(B2) subunits of the GABA(B) receptor. We used Western blotting to quantify the expression of these subunits in the VNC at 10 h and 50 h following unilateral vestibular deafferentation (UVD) or sham surgery in rats. We then used immunocytochemistry and stereological counting methods to estimate the number of neurons expressing these subunits in the MVN at 10 h and 2 weeks following UVD or sham surgery. Compared to sham controls, we found no significant changes in either the expression of the two GABA(B) receptor subunits in the VNC or in the number of MVN neurons expressing these GABA(B) receptor subunits post-UVD. These results suggest that GABA(B) receptor expression does not change substantially in the VNC during the process of vestibular compensation.

  18. Comparison of predicted and measured pulsed-column profiles and inventories

    International Nuclear Information System (INIS)

    Ostenak, C.A.; Cermak, A.F.

    1983-01-01

    Nuclear materials accounting and process control in fuels reprocessing plants can be improved by near-real-time estimation of the in-process inventory in solvent-extraction contactors. Experimental studies were conducted on pilot- and plant-scale pulsed columns by Allied-General Nuclear Service (AGNS), and the extensive uranium concentration-profile and inventory data were analyzed by Los Alamos and AGNS to develop and evaluate different predictive inventory techniques. Preliminary comparisons of predicted and measured pulsed-column profiles and inventories show promise for using these predictive techniques to improve nuclear materials accounting and process control in fuels reprocessing plants

  19. Comparing direct image and wavelet transform-based approaches to analysing remote sensing imagery for predicting wildlife distribution

    NARCIS (Netherlands)

    Murwira, A.; Skidmore, A.K.

    2010-01-01

    In this study we tested the ability to predict the probability of elephant (Loxodonta africana) presence in an agricultural landscape of Zimbabwe based on three methods of measuring the spatial heterogeneity in vegetation cover, where vegetation cover was measured using the Landsat Thematic Mapper

  20. Mining key elements for severe convection prediction based on CNN

    Science.gov (United States)

    Liu, Ming; Pan, Ning; Zhang, Changan; Sha, Hongzhou; Zhang, Bolei; Liu, Liang; Zhang, Meng

    2017-04-01

    Severe convective weather is a kind of weather disaster accompanied by heavy rainfall, gusty wind, hail, etc. Along with recent developments in remote sensing and numerical modeling, high-volume and long-term observational and modeling data have accumulated, capturing a massive number of severe convective events over particular areas and time periods. With those high-volume and high-variety weather data, most existing studies and methods address the dynamical laws, cause analysis, potential rules and prediction enhancement by utilizing the governing equations from fluid dynamics and thermodynamics. In this study, a key-element mining method is proposed for severe convection prediction based on a convolutional neural network (CNN). It aims to identify the key areas and key elements from huge amounts of historical weather data, including conventional measurements, weather radar and satellite observations, as well as numerical modeling and/or reanalysis data. In this manner, the machine-learning based method could help human forecasters in their decision-making on operational forecasts of severe convective weather by extracting key information from real-time and historical weather big data. First, computer vision techniques are used to complete the preprocessing of the meteorological variables. Then, information such as radar maps and expert knowledge is used to annotate all images automatically. Finally, using the CNN model, each weather element (e.g., particular variables, patterns, features, etc.) can be analyzed and evaluated, key areas of those critical weather elements identified, and forecasters helped to quickly screen out the key elements from huge amounts of observation data given current weather conditions. Based on the rich weather measurement and model data (up to 10 years) over Fujian province in China, where severe convective weather is very active during the summer months, experimental tests are conducted with
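
    A minimal CNN of the kind referred to above — a small convolutional classifier over multi-channel meteorological patches — might look as follows. The channel choice, layer sizes and two-class output are assumptions for illustration, not the architecture used in the study.

    ```python
    import torch
    import torch.nn as nn

    class ConvectionCNN(nn.Module):
        """Toy patch classifier; the input channels might be, e.g., radar
        reflectivity, CAPE and wind shear fields (a hypothetical choice)."""
        def __init__(self, in_channels=3, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = ConvectionCNN()
    dummy = torch.randn(4, 3, 64, 64)   # batch of 4 multi-channel patches
    print(model(dummy).shape)           # torch.Size([4, 2])
    ```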

  1. Repeated Blood Pressure Measurements in Childhood in Prediction of Hypertension in Adulthood.

    Science.gov (United States)

    Oikonen, Mervi; Nuotio, Joel; Magnussen, Costan G; Viikari, Jorma S A; Taittonen, Leena; Laitinen, Tomi; Hutri-Kähönen, Nina; Jokinen, Eero; Jula, Antti; Cheung, Michael; Sabin, Matthew A; Daniels, Stephen R; Raitakari, Olli T; Juonala, Markus

    2016-01-01

    Hypertension may be predicted from childhood risk factors. Repeated observations of abnormal blood pressure in childhood may enhance prediction of hypertension and subclinical atherosclerosis in adulthood compared with a single observation. Participants (1927, 54% women) from the Cardiovascular Risk in Young Finns Study had systolic and diastolic blood pressure measurements performed when aged 3 to 24 years. Childhood/youth abnormal blood pressure was defined as above 90th or 95th percentile. After a 21- to 31-year follow-up, at the age of 30 to 45 years, hypertension (>140/90 mm Hg or antihypertensive medication) prevalence was found to be 19%. Carotid intima-media thickness was examined, and high-risk intima-media was defined as intima-media thickness >90th percentile or carotid plaques. Prediction of adulthood hypertension and high-risk intima-media was compared between one observation of abnormal blood pressure in childhood/youth and multiple observations by improved Pearson correlation coefficients and area under the receiver operating curve. When compared with a single measurement, 2 childhood/youth observations improved the correlation for adult systolic (r=0.44 versus 0.35, Phypertension in adulthood (0.63 for 2 versus 0.60 for 1 observation, P=0.003). When compared with 2 measurements, third observation did not provide any significant improvement for correlation or prediction (P always >0.05). A higher number of childhood/youth observations of abnormal blood pressure did not enhance prediction of adult high-risk intima-media thickness. Compared with a single measurement, the prediction of adult hypertension was enhanced by 2 observations of abnormal blood pressure in childhood/youth. © 2015 American Heart Association, Inc.

  2. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
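
    A rough sketch of the bagged decision-tree regression described here, with predicted Kt:p values appended to the molecular descriptors and accuracy reported as a mean fold error, is given below. The synthetic descriptors and coefficients are assumptions; scikit-learn's BaggingRegressor (whose default base learner is a decision tree) stands in for the paper's Bagging decision tree.

    ```python
    import numpy as np
    from sklearn.ensemble import BaggingRegressor

    rng = np.random.default_rng(6)

    # Illustrative feature matrix: molecular descriptors plus predicted
    # tissue:plasma partition coefficients (e.g. adipose Kt:p) as extra columns.
    X = rng.normal(size=(150, 12))
    log_vss = 0.4 * X[:, 0] - 0.3 * X[:, 11] + rng.normal(0, 0.2, 150)  # log10 Vss

    model = BaggingRegressor(n_estimators=50, random_state=0)  # bagged decision trees
    model.fit(X[:120], log_vss[:120])
    pred = model.predict(X[120:])

    # Mean fold error (the accuracy measure quoted above), computed on log10 scale.
    mfe = 10 ** np.mean(np.abs(pred - log_vss[120:]))
    print("mean fold error:", round(mfe, 2))
    ```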

  3. Prediction of Coronal Mass Ejections from Vector Magnetograms: Quantitative Measures as Predictors

    Science.gov (United States)

    Falconer, D. A.; Moore, R. L.; Gary, G. A.

    2001-05-01

    In a pilot study of 4 active regions (Falconer, D.A. 2001, JGR, in press), we derived two quantitative measures of an active region's global nonpotentiality from the region's vector magnetogram, 1) the net current (IN), and 2) the length of the strong-shear, strong-field main neutral line (LSS), and used these two measures as gauges of the CME productivity of the active regions. We compared the global nonpotentiality measures to the active regions' CME productivity determined from GOES and Yohkoh/SXT observations. We found that two of the active regions were highly globally nonpotential and were CME productive, while the other two active regions had little global nonpotentiality and produced no CMEs. At the Fall 2000 AGU (Falconer, Moore, & Gary, 2000, EOS 81, 48 F998), we reported on an expanded study (12 active regions and 17 magnetograms) in which we evaluated four quantitative global measures of an active region's magnetic field and compared these measures with the CME productivity. The four global measures (all derived from MSFC vector magnetograms) included our two previous measures (IN and LSS) as well as two new ones, the total magnetic flux (Φ) (a measure of an active region's size), and the normalized twist (α = μIN/Φ). We found that the three measures of global nonpotentiality (IN, LSS, α) were all well correlated (>99% confidence level) with an active region's CME productivity within ±2 days of the day of the magnetogram. We will now report on our findings of how good our quantitative measures are as predictors of active-region CME productivity, using only CMEs that occurred after the magnetogram. We report the preliminary skill test of these quantitative measures as predictors. We compare the CME prediction success of our quantitative measures to the CME prediction success based on an active region's past CME productivity. We examine the cases of the handful of false positives and false negatives to look for improvements to our predictors. This work is

  4. Ground Motion Prediction Equations Empowered by Stress Drop Measurement

    Science.gov (United States)

    Miyake, H.; Oth, A.

    2015-12-01

    Significant variation of stress drop is a crucial issue for ground motion prediction equations and probabilistic seismic hazard assessment, since only a few ground motion prediction equations take stress drop into account. In addition to average and sigma studies of stress drop and ground motion prediction equations (e.g., Cotton et al., 2013; Baltay and Hanks, 2014), we explore the 1-to-1 relationship, for each earthquake, between stress drop and the between-event residual of a ground motion prediction equation. We used the stress drop dataset of Oth (2013) for Japanese crustal earthquakes, ranging from 0.1 to 100 MPa, and the K-NET/KiK-net ground motion dataset against several ground motion prediction equations with volcanic front treatment. Between-event residuals for ground accelerations and velocities are generally coincident with stress drop, as investigated with seismic intensity measures by Oth et al. (2015). Moreover, we found faster attenuation of ground accelerations and velocities for large stress drop events over similar fault distance ranges and focal depths. This may suggest an alternative parameterization in which stress drop controls the distance attenuation rate in ground motion prediction equations. We also investigate the 1-to-1 relationship and sigma for regional/national-scale stress drop variation and current national-scale ground motion prediction equations.

  5. Measurement error and timing of predictor values for multivariable risk prediction models are poorly reported.

    Science.gov (United States)

    Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D

    2018-05-18

    Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error and the intended moment of model use were extracted. Susceptibility to measurement error for each predictor was classified as low or high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as at high risk of error; however, this was not accounted for in the model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and the intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions. Copyright © 2018. Published by Elsevier Inc.

  6. Impedance computations and beam-based measurements: A problem of discrepancy

    Science.gov (United States)

    Smaluk, Victor

    2018-04-01

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. Three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.

  7. Effects of hypertension and ovariectomy on rat hepatocytes. Are amlodipine and lacidipine protective? (A stereological and histological study).

    Science.gov (United States)

    Dursun, Hakan; Albayrak, Fatih; Uyanik, Abdullah; Keleş, Nuri Osman; Beyzagül, Polat; Bayram, Ednan; Halici, Zekai; Altunkaynak, Zuhal Berrin; Süleyman, Halis; Okçu, Nihat; Ünal, Bünyamin

    2010-12-01

    Calcium channel blockers are increasingly used for the treatment of hypertension. Menopause and hypertension are both important risk factors for liver damage and several other circulatory abnormalities. The aim of this study was to determine the effects of amlodipine and lacidipine in an ovariectomy-induced postmenopausal period model and a deoxycorticosterone acetate-salt-induced hypertensive model in rats. In this study, animals were divided into six groups as follows: control (Group 1), hypertension (Group 2), ovariectomy (Group 3), ovariectomy and hypertension (Group 4), ovariectomy, hypertension and amlodipine-treated (Group 5), and ovariectomy, hypertension and lacidipine-treated (Group 6). At the end of the experiment, the livers were removed and tissue samples were histologically and stereologically examined. The numerical densities of the hepatocytes in these groups were 0.000422, 0.00329, 0.000272, 0.00259, 0.00374 and 0.000346 per μm³, respectively. Significant differences were found between the values of all groups, and the hypertension was ameliorated in Groups 5 and 6. Our experimental results show that both hypertension and the postmenopausal period have negative effects on the number of hepatocytes and the histological structure of the liver. Both amlodipine and lacidipine appear to ameliorate the hypertension and/or postmenopausal period-related decrease in hepatocyte number. We thus suggest that lacidipine and particularly amlodipine have important protective and recovering effects on the liver.

  8. Importance weighting of local flux measurements to improve reactivity predictions in nuclear systems

    Energy Technology Data Exchange (ETDEWEB)

    Dulla, Sandra; Hoh, Siew Sin; Nervo, Marta; Ravetto, Piero [Politecnico di Torino, Dipt. Energia (Italy)

    2015-07-15

    Reactivity monitoring is a key aspect of the safe operation of nuclear reactors, especially for subcritical source-driven systems. Various methods are available for both off-line and on-line reactivity determination from direct measurements carried out on the reactor. Usually the methods are based on the inverse point kinetic model applied to signals from neutron detectors, and the results may be severely affected by spatial and spectral effects. Such effects need to be compensated and correction procedures have to be applied. In this work, a new approach is proposed that uses the full information from different local measurements to generate a global signal through a proper weighting of the signals provided by individual neutron detectors. A weighting technique based on the use of the adjoint flux proves to be efficient in improving the prediction capability of inverse techniques. The idea is applied to the recently developed algorithm, named MAρTA, that can be used in both off-line and online modes.
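
    A minimal sketch of the weighting idea, assuming hypothetical detector count rates and adjoint-flux importances at the detector locations (the MAρTA algorithm itself and its inverse point-kinetics step are not reproduced here):

      import numpy as np

      # hypothetical detector count rates and adjoint-flux importance at each detector location
      detector_signals = np.array([1.20e4, 0.85e4, 1.05e4, 0.40e4])   # counts/s
      adjoint_importance = np.array([0.9, 0.6, 0.8, 0.3])             # normalized adjoint flux

      # global signal: importance-weighted combination of the local measurements
      weights = adjoint_importance / adjoint_importance.sum()
      global_signal = float(weights @ detector_signals)

      # this global signal would then be fed to an inverse point-kinetics routine (not shown)
      print("importance-weighted global signal:", global_signal)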

  9. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current work

  10. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together

  11. Meta-path based heterogeneous combat network link prediction

    Science.gov (United States)

    Li, Jichao; Ge, Bingfeng; Yang, Kewei; Chen, Yingwu; Tan, Yuejin

    2017-09-01

    The combat system-of-systems in high-tech informative warfare, composed of many interconnected combat systems of different types, can be regarded as a type of complex heterogeneous network. Link prediction for heterogeneous combat networks (HCNs) is of significant military value, as it facilitates reconfiguring combat networks to represent the complex real-world network topology as appropriate using observed information. This paper proposes a novel integrated methodology framework called HCNMP (HCN link prediction based on meta-path) to predict multiple types of links simultaneously for an HCN. More specifically, the concept of HCN meta-paths is introduced, through which the HCNMP can accumulate information by extracting different features of HCN links for all six defined types. Next, an HCN link prediction model, based on meta-path features, is built to predict all types of links of the HCN simultaneously. Then, the solution algorithm for the HCN link prediction model is proposed, in which the prediction results are iteratively updated with the newly predicted results until they converge or a maximum number of iterations is reached. Finally, numerical experiments on the dataset of a real HCN are conducted to demonstrate the feasibility and effectiveness of the proposed HCNMP, in comparison with 30 baseline methods. The results show that the performance of the HCNMP is superior to those of the baseline methods.

  12. The predictive validity of prospect theory versus expected utility in health utility measurement.

    Science.gov (United States)

    Abellan-Perpiñan, Jose Maria; Bleichrodt, Han; Pinto-Prades, Jose Luis

    2009-12-01

    Most health care evaluations today still assume expected utility even though the descriptive deficiencies of expected utility are well known. Prospect theory is the dominant descriptive alternative to expected utility. This paper tests whether prospect theory leads to better health evaluations than expected utility. The approach is purely descriptive: we explore how simple measurements together with prospect theory and expected utility predict choices and rankings between more complex stimuli. For decisions involving risk, prospect theory is significantly more consistent with rankings and choices than expected utility. This conclusion no longer holds when we use prospect theory utilities and expected utilities to predict intertemporal decisions. The latter finding cautions against the common assumption in health economics that health state utilities are transferable across decision contexts. Our results suggest that the standard gamble, and algorithms based on it, should not be used to value health.

  13. Predicting in-patient falls in a geriatric clinic: a clinical study combining assessment data and simple sensory gait measurements.

    Science.gov (United States)

    Marschollek, M; Nemitz, G; Gietzelt, M; Wolf, K H; Meyer Zu Schwabedissen, H; Haux, R

    2009-08-01

    Falls are among the predominant causes of morbidity and mortality in elderly persons and occur most often in geriatric clinics. Despite several studies that have identified parameters associated with elderly patients' fall risk, prediction models -- e.g., based on geriatric assessment data -- are currently not used on a regular basis. Furthermore, technical aids to objectively assess mobility-associated parameters are currently not used. To assess group differences in clinical as well as common geriatric assessment data and sensory gait measurements between fallers and non-fallers in a geriatric sample, and to derive and compare two prediction models based on assessment data alone (model #1) and added sensory measurement data (model #2). For a sample of n=110 geriatric in-patients (81 women, 29 men) the following fall risk-associated assessments were performed: Timed 'Up & Go' (TUG) test, STRATIFY score and Barthel index. During the TUG test the subjects wore a triaxial accelerometer, and sensory gait parameters were extracted from the data recorded. Group differences between fallers (n=26) and non-fallers (n=84) were compared using Student's t-test. Two classification tree prediction models were computed and compared. Significant differences between the two groups were found for the following parameters: time to complete the TUG test, transfer item (Barthel), recent falls (STRATIFY), pelvic sway while walking and step length. Prediction model #1 (using common assessment data only) showed a sensitivity of 38.5% and a specificity of 97.6%; prediction model #2 (assessment data plus sensory gait parameters) performed with 57.7% and 100%, respectively. Significant differences between fallers and non-fallers among geriatric in-patients can be detected for several assessment subscores as well as parameters recorded by simple accelerometric measurements during a common mobility test. Existing geriatric assessment data may be used for falls prediction on a regular basis.
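
    The classification-tree approach can be sketched with scikit-learn as follows, using synthetic stand-ins for the TUG, Barthel, STRATIFY and accelerometer variables named in the abstract (this is not the authors' model; the data and the rule used to label fallers are invented for illustration):

      import numpy as np
      import pandas as pd
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.metrics import recall_score

      rng = np.random.default_rng(6)
      n = 110
      df = pd.DataFrame({
          "tug_seconds": rng.normal(20, 6, n),       # Timed 'Up & Go' duration
          "barthel_transfer": rng.integers(0, 4, n),
          "recent_fall": rng.integers(0, 2, n),      # STRATIFY item
          "pelvic_sway": rng.normal(0.6, 0.2, n),    # accelerometer-derived
          "step_length_m": rng.normal(0.45, 0.08, n),
      })
      # invented labelling rule, only to give the toy data some structure
      faller = ((df.tug_seconds > 24) | ((df.recent_fall == 1) & (df.pelvic_sway > 0.7))).astype(int)

      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(df, faller)
      pred = tree.predict(df)
      sens = recall_score(faller, pred)                   # sensitivity
      spec = recall_score(faller, pred, pos_label=0)      # specificity
      print(f"sensitivity={sens:.2f}  specificity={spec:.2f}")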

  14. Measurements and prediction of inhaled air quality with personalized ventilation

    DEFF Research Database (Denmark)

    Cermak, Radim; Majer, M.; Melikov, Arsen Krikor

    2002-01-01

    This paper examines the performance of five different air terminal devices for personalized ventilation in relation to the quality of air inhaled by a breathing thermal manikin in a climate chamber. The personalized air was supplied either isothermally or non-isothermally (6 deg.C cooler than the room air) at flow rates ranging from less than 5 L/s up to 23 L/s. The air quality assessment was based on temperature measurements of the inhaled air and on the portion of the personalized air inhaled. The percentage of dissatisfied with the air quality was predicted. The results suggest that regardless of the temperature combinations, personalized ventilation may decrease significantly the number of occupants dissatisfied with the air quality. Under non-isothermal conditions the percentage of dissatisfied may decrease up to 4 times.

  15. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Directory of Open Access Journals (Sweden)

    Michael F Sloma

    2017-11-01

    Full Text Available Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  16. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Science.gov (United States)

    Sloma, Michael F; Mathews, David H

    2017-11-01

    Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  17. Slope Deformation Prediction Based on Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Lei JIA

    2013-07-01

    Full Text Available This paper principally studies the prediction of slope deformation based on Support Vector Machine (SVM). In the prediction process, we explore how to reconstruct the phase space. The geological body's displacement data, obtained as a chaotic time series, are used as the SVM's training samples. Slope displacement caused by multivariable coupling is predicted by means of a single variable. Results show that this model has high fitting accuracy and good generalization, and it provides a reference for deformation prediction in slope engineering.
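
    A minimal sketch of the approach, assuming a time-delay embedding as the phase-space reconstruction and scikit-learn's SVR as the regressor (the displacement series is synthetic; the paper's actual embedding parameters and kernel settings are not given here):

      import numpy as np
      from sklearn.svm import SVR

      def embed(series, dim=3, tau=1):
          """Reconstruct a phase space from a scalar displacement series:
          each sample is (x[t-(dim-1)*tau], ..., x[t-tau], x[t]) -> predict x[t+1]."""
          series = np.asarray(series, dtype=float)
          X, y = [], []
          for t in range((dim - 1) * tau, len(series) - 1):
              X.append(series[t - (dim - 1) * tau : t + 1 : tau])
              y.append(series[t + 1])
          return np.array(X), np.array(y)

      # synthetic monitoring series standing in for slope displacement records
      displacement = np.cumsum(np.random.default_rng(0).normal(0.2, 0.05, 200))
      X, y = embed(displacement, dim=4, tau=2)
      model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:-20], y[:-20])
      print("hold-out R^2:", model.score(X[-20:], y[-20:]))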

  18. Are performance-based measures predictive of work participation in patients with musculoskeletal disorders? A systematic review

    NARCIS (Netherlands)

    Kuijer, P. P. F. M.; Gouttebarge, V.; Brouwer, S.; Reneman, M. F.; Frings-Dresen, M. H. W.

    Assessments of whether patients with musculoskeletal disorders (MSDs) can participate in work mainly consist of case history, physical examinations, and self-reports. Performance-based measures might add value in these assessments. This study answers the question: how well do performance-based measures predict work participation in patients with MSDs?

  19. Evaluation of FOCUS surface water pesticide concentration predictions and risk assessment of field-measured pesticide mixtures-a crop-based approach under Mediterranean conditions.

    Science.gov (United States)

    Pereira, Ana Santos; Daam, Michiel A; Cerejeira, Maria José

    2017-07-01

    FOCUS models are used in the European regulatory risk assessment (RA) to predict individual pesticide concentrations in edge-of-field surface waters. The scenarios used in higher tier FOCUS simulations were mainly based on Central/North European conditions, and work is needed to underpin the validity of simulated exposure profiles for Mediterranean agroecosystems. In addition, the RA of chemicals is traditionally evaluated on the basis of single substances, although freshwater life is generally exposed to a multitude of pesticides. In the present study, we monitored 19 pesticides in surface waters of five locations in the Portuguese 'Lezíria do Tejo' agricultural area. FOCUS step 3 simulations were performed for the South European scenarios to estimate predicted environmental concentrations (PECs). We verified that 44% of the PECs underestimated the measured environmental concentrations (MECs) of the pesticides, showing non-compliance with the field data. Risk was assessed by comparing the environmental quality standards (EQS) and regulatory acceptable concentrations with their respective MECs. Risk of mixtures was demonstrated in 100% of the samples, with insecticides accounting for 60% of the total risk identified. The overall link between the RA and the actual situation in the field must be considerably strengthened, and field studies on pesticide exposure and effects should be carried out to assist the improvement of predictive approaches used for regulatory purposes.

  20. Can foot anthropometric measurements predict dynamic plantar surface contact area?

    Directory of Open Access Journals (Sweden)

    Collins Natalie

    2009-10-01

    Full Text Available Abstract Background Previous studies have suggested that increased plantar surface area, associated with pes planus, is a risk factor for the development of lower extremity overuse injuries. The intent of this study was to determine if a single foot anthropometric measure, or a combination of measures, could be used to predict plantar surface area. Methods Six foot measurements were collected on 155 subjects (97 females, 58 males; mean age 24.5 ± 3.5 years). The measurements, as well as one ratio, were entered into a stepwise regression analysis to determine the optimal set of measurements associated with total plantar contact area either including or excluding the toe region. The predicted values were used to calculate plantar surface area and were compared to the actual values obtained dynamically using a pressure sensor platform. Results A three-variable model was found to describe the relationship between the foot measures/ratio and total plantar contact area including the toe region (R² = 0.77), and a similar model described the contact area excluding the toe region (R² = 0.76). Conclusion The results of this study indicate that the clinician can use a combination of simple, reliable, and time-efficient foot anthropometric measurements to explain over 75% of the plantar surface contact area, either including or excluding the toe region.

  1. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudorange and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs, respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An over-determination of IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations

  2. Predicting ambient aerosol thermal-optical reflectance measurements from infrared spectra: elemental carbon

    Science.gov (United States)

    Dillner, A. M.; Takahama, S.

    2015-10-01

    concentration value based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.03 μg m-3) and reasonable normalized error (21 %). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
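
    A rough sketch of calibrating a partial least squares regression of TOR EC on FT-IR spectra, in the spirit of the method described above (synthetic spectra and EC values stand in for the IMPROVE samples; the actual spectral pretreatment and calibration/test split by OC/EC ratio are omitted):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      # stand-ins: rows are FT-IR absorbance spectra, targets are collocated TOR EC values
      spectra = rng.normal(size=(300, 800))                 # 300 filters x 800 wavenumber points
      ec_tor = 2.0 + 0.3 * spectra[:, :5].sum(axis=1) + rng.normal(0, 0.1, 300)

      X_cal, X_test, y_cal, y_test = train_test_split(spectra, ec_tor, test_size=0.3, random_state=0)
      pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
      pred = pls.predict(X_test).ravel()

      bias = np.mean(pred - y_test)
      rmse = np.sqrt(np.mean((pred - y_test) ** 2))
      norm_err = np.mean(np.abs(pred - y_test) / np.maximum(np.abs(y_test), 0.1))
      print(f"bias={bias:.3f}  error={rmse:.3f}  normalized error={norm_err:.1%}")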

  3. Measurement and prediction of residual stress in a bead-on-plate weld benchmark specimen

    International Nuclear Information System (INIS)

    Ficquet, X.; Smith, D.J.; Truman, C.E.; Kingston, E.J.; Dennis, R.J.

    2009-01-01

    This paper presents measurements and predictions of the residual stresses generated by laying a single weld bead on a flat, austenitic stainless steel plate. The residual stress field that is created is strongly three-dimensional and is considered representative of that found in a repair weld. Through-thickness measurements are made using the deep hole drilling technique, and near-surface measurements are made using incremental centre hole drilling. Measurements are compared to predictions at the same locations made using finite element analysis incorporating an advanced, non-linear kinematic hardening model. The work was conducted as part of a European round-robin exercise, coordinated as part of the NeT network. Overall, there was broad agreement between measurements and predictions, but there were notable differences.

  4. Machine learning-based methods for prediction of linear B-cell epitopes.

    Science.gov (United States)

    Wang, Hsin-Wei; Pai, Tun-Wen

    2014-01-01

    B-cell epitope prediction facilitates immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention, treatment, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still not satisfactory. Fortunately, due to increasingly available verified epitope databases, bioinformaticians could adopt machine learning-based algorithms on all curated data to design an improved prediction tool for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those for linear B-cell epitope prediction. It should be noted that a combination of selected propensity scales and statistics of epitope residues with machine learning-based tools formulates a general way for constructing linear B-cell epitope prediction systems. It is also observed from most of the comparison results that the kernel method of the support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell epitope prediction system based on physicochemical features and amino acid combinations is illustrated in detail.

  5. A new spirometry-based algorithm to predict occupational pulmonary restrictive impairment.

    Science.gov (United States)

    De Matteis, S; Iridoy-Zulet, A A; Aaron, S; Swann, A; Cullinan, P

    2016-01-01

    Spirometry is often included in workplace-based respiratory surveillance programmes, but its performance in the identification of restrictive lung disease is poor, especially when the prevalence of this condition is low in the tested population. To improve the specificity (Sp) and positive predictive value (PPV) of current spirometry-based algorithms in the diagnosis of restrictive pulmonary impairment in the workplace, and to reduce the proportion of false-positive findings and, as a result, unnecessary referrals for lung volume measurements. We re-analysed two studies of hospital patients, respectively used to derive and validate a recommended spirometry-based algorithm [forced vital capacity (FVC) 55%] for the recognition of restrictive pulmonary impairment. We used true restrictive lung cases as a reference standard in 2×2 contingency tables to estimate sensitivity (Sn), Sp, PPV and negative predictive value for each diagnostic cut-off. We simulated a working population aged spirometry-based algorithm may be adopted to accurately exclude pulmonary restriction and to possibly reduce unnecessary lung volume testing in an occupational health setting. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Improved prediction of residue flexibility by embedding optimized amino acid grouping into RSA-based linear models.

    Science.gov (United States)

    Zhang, Hua; Kurgan, Lukasz

    2014-12-01

    Knowledge of protein flexibility is vital for deciphering the corresponding functional mechanisms. This knowledge would help, for instance, in improving computational drug design and refinement in homology-based modeling. We propose a new predictor of residue flexibility, which is expressed by B-factors, from protein chains; it uses local (in the chain) predicted (or native) relative solvent accessibility (RSA) and custom-derived amino acid (AA) alphabets. Our predictor is implemented as a two-stage linear regression model that uses an RSA-based space in a local sequence window in the first stage and a reduced AA-pair-based space in the second stage as the inputs. The method has an easy-to-comprehend explicit linear form in both stages. Particle swarm optimization was used to find an optimal reduced AA alphabet to simplify the input space and improve the prediction performance. The average correlation coefficients between the native and predicted B-factors measured on a large benchmark dataset are improved from 0.65 to 0.67 when using the native RSA values and from 0.55 to 0.57 when using the predicted RSA values. Blind tests that were performed on two independent datasets show consistent improvements in the average correlation coefficients by a modest value of 0.02 for both native and predicted RSA-based predictions.
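
    A schematic two-stage linear-regression sketch in Python, assuming a window of RSA values as the stage-one input and a reduced-alphabet AA-pair encoding as the stage-two addition (the alphabet here is a random grouping, not the particle-swarm-optimized one from the paper; all data are synthetic):

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(5)
      L, win = 300, 5                               # chain length and window half-width

      rsa = rng.uniform(0, 1, L)                    # predicted (or native) relative solvent accessibility
      b_true = 0.6 * rsa + 0.2 * np.roll(rsa, 1) + rng.normal(0, 0.1, L)   # toy B-factors

      # stage 1: RSA values in a local sequence window -> first-pass B-factor estimate
      # (np.roll wraps around at the chain ends; acceptable for this sketch)
      X1 = np.column_stack([np.roll(rsa, s) for s in range(-win, win + 1)])
      stage1 = LinearRegression().fit(X1, b_true)
      b_stage1 = stage1.predict(X1)

      # stage 2: add reduced-alphabet AA-pair indicators (random 4-letter grouping as a stand-in)
      groups = rng.integers(0, 4, L)
      pair_idx = groups * 4 + np.roll(groups, 1)    # encode the (previous, current) group pair
      X2 = np.column_stack([b_stage1, np.eye(16)[pair_idx]])
      stage2 = LinearRegression().fit(X2, b_true)

      print("stage-2 correlation:", np.corrcoef(stage2.predict(X2), b_true)[0, 1].round(2))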

  7. Assessing Long-Term Wind Conditions by Combining Different Measure-Correlate-Predict Algorithms: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, J.; Chowdhury, S.; Messac, A.; Hodge, B. M.

    2013-08-01

    This paper significantly advances the hybrid measure-correlate-predict (MCP) methodology, enabling it to account for variations of both wind speed and direction. The advanced hybrid MCP method uses the recorded data of multiple reference stations to estimate the long-term wind condition at a target wind plant site. The results show that the accuracy of the hybrid MCP method is highly sensitive to the combination of the individual MCP algorithms and reference stations. It was also found that the best combination of MCP algorithms varies based on the length of the correlation period.
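
    The core measure-correlate-predict step can be sketched as a regression of concurrent target-site wind speeds on several reference stations, which is then applied to the long-term reference record (this is only the simplest linear MCP with synthetic data; the hybrid method's weighting of multiple MCP algorithms and its treatment of wind direction are not reproduced):

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      # concurrent short-term record: reference-station wind speeds and target-site speeds
      ref_concurrent = rng.weibull(2.0, size=(8760, 3)) * 7.0        # one year, 3 reference stations
      target_concurrent = (0.8 * ref_concurrent[:, 0] + 0.3 * ref_concurrent[:, 1]
                           + rng.normal(0, 0.5, 8760))

      mcp = LinearRegression().fit(ref_concurrent, target_concurrent)

      # the long-term reference record (e.g., 10 years) is then mapped to the target site
      ref_long_term = rng.weibull(2.0, size=(87600, 3)) * 7.0
      target_long_term = mcp.predict(ref_long_term)
      print("estimated long-term mean wind speed at target:", target_long_term.mean())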

  8. Predicting Forearm Physical Exposures During Computer Work Using Self-Reports, Software-Recorded Computer Usage Patterns, and Anthropometric and Workstation Measurements.

    Science.gov (United States)

    Huysmans, Maaike A; Eijckelhof, Belinda H W; Garza, Jennifer L Bruno; Coenen, Pieter; Blatter, Birgitte M; Johnson, Peter W; van Dieën, Jaap H; van der Beek, Allard J; Dennerlein, Jack T

    2017-12-15

    physical exposures the full models performed better. Relative RMS errors ranged between 5% and 19% for the full models, and between 10% and 19% for the practical model. When the predicted physical exposures were classified into low, medium, and high, classification agreement ranged from 26% to 71%. The full prediction models, based on self-reported factors, software-recorded computer usage patterns, and additional measurements of anthropometrics and workstation set-up, show a better predictive quality as compared to the practical models based on self-reported factors and recorded computer usage patterns only. However, predictive quality varied largely across different arm-wrist-hand exposure parameters. Future exploration of the relation between predicted physical exposure and symptoms is therefore only recommended for physical exposures that can be reasonably well predicted. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  9. Predicting memory performance in normal ageing using different measures of hippocampal size

    International Nuclear Information System (INIS)

    Lye, T.C.; Creasey, H.; Kril, J.J.; Grayson, D.A.; Piguet, O.; Bennett, H.P.; Ridley, L.J.; Broe, G.A.

    2006-01-01

    A number of different methods have been employed to correct hippocampal volumes for individual variation in head size. Researchers have previously used qualitative visual inspection to gauge hippocampal atrophy. The purpose of this study was to determine the best measure(s) of hippocampal size for predicting memory functioning in 102 community-dwelling individuals over 80 years of age. Hippocampal size was estimated using magnetic resonance imaging (MRI) volumetry and qualitative visual assessment. Right and left hippocampal volumes were adjusted by three different estimates of head size: total intracranial volume (TICV), whole-brain volume including ventricles (WB+V) and a more refined measure of whole-brain volume with ventricles extracted (WB). We compared the relative efficacy of these three volumetric adjustment methods and visual ratings of hippocampal size in predicting memory performance using linear regression. All four measures of hippocampal size were significant predictors of memory performance. TICV-adjusted volumes performed most poorly in accounting for variance in memory scores. Hippocampal volumes adjusted by either measure of whole-brain volume performed equally well, although qualitative visual ratings of the hippocampus were at least as effective as the volumetric measures in predicting memory performance in community-dwelling individuals in the ninth or tenth decade of life. (orig.)

  10. Predicting long-term average concentrations of traffic-related air pollutants using GIS-based information

    Science.gov (United States)

    Hochadel, Matthias; Heinrich, Joachim; Gehring, Ulrike; Morgenstern, Verena; Kuhlbusch, Thomas; Link, Elke; Wichmann, H.-Erich; Krämer, Ursula

    Global regression models were developed to estimate individual levels of long-term exposure to traffic-related air pollutants. The models are based on data from a one-year measurement programme combined with geographic data on traffic and population densities. This investigation is part of a cohort study on the impact of traffic-related air pollution on respiratory health, conducted at the westerly end of the Ruhr area in North Rhine-Westphalia, Germany. Concentrations of NO2, fine particle mass (PM2.5) and filter absorbance of PM2.5 as a marker for soot were measured at 40 sites spread throughout the study region. Fourteen-day samples were taken between March 2002 and March 2003 for each season and site. Annual average concentrations for the sites were determined after adjustment for temporal variation. Information on traffic counts on major roads, building densities and community population figures was collected in a geographical information system (GIS). This information was used to calculate different potential traffic-based predictors: (a) daily traffic flow and maximum traffic intensity within buffers with radii from 50 to 10 000 m and (b) distances to main roads and highways. NO2 concentration and PM2.5 absorbance were strongly correlated with the traffic-based variables. Linear regression prediction models, which involved predictors with radii of 50 to 1000 m, were developed for the Wesel region where most of the cohort members lived. They reached a model fit (R²) of 0.81 and 0.65 for NO2 and PM2.5 absorbance, respectively. Regression models for the whole area required larger spatial scales and reached R² = 0.90 and 0.82. Comparison of predicted values with NO2 measurements at independent public monitoring stations showed a satisfactory association (r = 0.66). PM2.5 concentration, however, was only weakly correlated with, and thus poorly predicted by, the traffic-based variables. GIS-based regression models offer a promising approach to assess individual levels of long-term exposure to traffic-related air pollutants.

  11. The effect of using genealogy-based haplotypes for genomic prediction.

    Science.gov (United States)

    Edriss, Vahid; Fernando, Rohan L; Su, Guosheng; Lund, Mogens S; Guldbrandtsen, Bernt

    2013-03-06

    Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method, and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, the accuracy of prediction was less sensitive to the parameter π when fitting haplotypes compared to fitting markers. Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy.

  12. Measured and predicted electron density at 600 km over Tucuman and Huancayo

    International Nuclear Information System (INIS)

    Ezquer, R.G.; Cabrera, M.A.; Araoz, L.; Mosert, M.; Radicella, S.M.

    2002-01-01

    The electron density at 600 km altitude (N600) predicted by IRI is compared with measurements for a given particular time and place (not averages) obtained with the Japanese Hinotori satellite. The results showed that the best agreement between predictions and measurements was obtained near the magnetic equator. Disagreements of about 50% were observed near the southern peak of the equatorial anomaly (EA) when the model uses the CCIR and URSI options to obtain the peak characteristics. (author)

  13. A prediction method based on grey system theory in equipment condition based maintenance

    International Nuclear Information System (INIS)

    Yan, Shengyuan; Yan, Shengyuan; Zhang, Hongguo; Zhang, Zhijian; Peng, Minjun; Yang, Ming

    2007-01-01

    Grey prediction is a modelling method based on historical or present, known or uncertain information; it can be used to forecast the development of the eigenvalues of a targeted equipment system and to set up a model using limited information. In this paper, the postulates of grey system theory, including grey generation, the types of grey generation and the grey forecasting model, are introduced first. The concrete application process, which includes grey prediction modelling, grey prediction, error calculation and the equal-dimension-and-new-information approach, is introduced second. The so-called 'Equal Dimension and New Information' (EDNI) technology of grey system theory is adopted in an application case, aiming at improving the accuracy of prediction without increasing the amount of calculation by replacing old data with new ones. The proposed method can provide a new way to effectively solve the problem of exploding eigenvalue data in equal-distance, short-time-interval and real-time prediction. The method, which is based on historical or present, known or uncertain information, was verified by the vibration prediction of an induced-draft fan of a boiler at the Yantai Power Station in China, and the results show that the proposed method based on grey system theory is simple and provides high prediction accuracy. It is therefore useful and significant for control and management in safe production. (authors)
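
    A compact sketch of a GM(1,1) grey forecasting model with an 'equal dimension and new information' rolling window, assuming a short hypothetical eigenvalue (vibration) series (parameter choices and data are illustrative, not those of the paper):

      import numpy as np

      def gm11_fit_predict(x0, steps_ahead=1):
          """Fit a GM(1,1) grey model to the 1-D series x0 and forecast steps_ahead values."""
          x0 = np.asarray(x0, dtype=float)
          n = len(x0)
          x1 = np.cumsum(x0)                         # accumulated generating operation (AGO)
          z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
          B = np.column_stack((-z1, np.ones(n - 1)))
          Y = x0[1:]
          a, b = np.linalg.lstsq(B, Y, rcond=None)[0]    # develop coefficient a, grey input b
          def x1_hat(k):                             # time response of the AGO series
              return (x0[0] - b / a) * np.exp(-a * k) + b / a
          ks = np.arange(n, n + steps_ahead)
          return x1_hat(ks) - x1_hat(ks - 1)         # inverse AGO gives the forecast of x0

      def edni_forecast(history, n_new):
          """Equal-dimension-and-new-information rolling forecast: keep the window length
          fixed, drop the oldest sample and append the newest before each one-step forecast."""
          window = list(history)
          preds = []
          for _ in range(n_new):
              p = gm11_fit_predict(window, 1)[0]
              preds.append(p)
              window = window[1:] + [p]              # replace old data with the new value
          return preds

      vibration = [2.87, 2.94, 3.02, 3.11, 3.25]     # hypothetical eigenvalue series
      print(edni_forecast(vibration, 3))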

  14. Augment clinical measurement using a constraint-based esophageal model

    Science.gov (United States)

    Kou, Wenjun; Acharya, Shashank; Kahrilas, Peter; Patankar, Neelesh; Pandolfino, John

    2017-11-01

    Quantifying the mechanical properties of the esophageal wall is crucial to understanding impairments of trans-esophageal flow characteristic of several esophageal diseases. However, these data are unavailable owing to technological limitations of current clinical diagnostic instruments, which instead display esophageal luminal cross-sectional area based on intraluminal impedance change. In this work, we developed an esophageal model to predict bolus flow and the wall properties based on clinical measurements. The model used the constraint-based immersed-boundary method developed previously by our group. Specifically, we first approximate the time-dependent wall geometry based on impedance planimetry data on luminal cross-sectional area. We then feed these, along with pressure data, into the model and compute wall tension based on the simulated pressure and flow fields, and the material property based on the stress-strain relationship. As examples, we applied this model to augment FLIP (Functional Luminal Imaging Probe) measurements in three clinical cases: a normal subject, achalasia, and eosinophilic esophagitis (EoE). Our findings suggest that the wall stiffness was greatest in the EoE case, followed by the achalasia case, and then the normal subject. This work is supported by NIH Grants R01 DK56033 and R01 DK079902.

  15. GIS-BASED PREDICTION OF HURRICANE FLOOD INUNDATION

    Energy Technology Data Exchange (ETDEWEB)

    JUDI, DAVID [Los Alamos National Laboratory; KALYANAPU, ALFRED [Los Alamos National Laboratory; MCPHERSON, TIMOTHY [Los Alamos National Laboratory; BERSCHEID, ALAN [Los Alamos National Laboratory

    2007-01-17

    A simulation environment is being developed for the prediction and analysis of the inundation consequences for infrastructure systems from extreme flood events. This decision support architecture includes a GIS-based environment for model input development, simulation integration tools for meteorological, hydrologic, and infrastructure system models and damage assessment tools for infrastructure systems. The GIS-based environment processes digital elevation models (30-m from the USGS), land use/cover (30-m NLCD), stream networks from the National Hydrography Dataset (NHD) and soils data from the NRCS (STATSGO) to create stream network, subbasins, and cross-section shapefiles for drainage basins selected for analysis. Rainfall predictions are made by a numerical weather model and ingested in gridded format into the simulation environment. Runoff hydrographs are estimated using Green-Ampt infiltration excess runoff prediction and a 1D diffusive wave overland flow routing approach. The hydrographs are fed into the stream network and integrated in a dynamic wave routing module using the EPA's Storm Water Management Model (SWMM) to predict flood depth. The flood depths are then transformed into inundation maps and exported for damage assessment. Hydrologic/hydraulic results are presented for Tropical Storm Allison.
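
    The infiltration-excess step can be sketched with a simplified explicit Green-Ampt scheme as below (the soil parameters and the storm are hypothetical; the SWMM dynamic-wave routing, GIS preprocessing and hydrograph routing are not reproduced):

      import numpy as np

      def green_ampt_runoff(rain_mm_hr, dt_hr=0.1, K=10.0, psi=110.0, d_theta=0.3):
          """Very simplified explicit Green-Ampt infiltration-excess runoff.
          rain_mm_hr: rainfall intensity per time step [mm/h]; K: saturated hydraulic
          conductivity [mm/h]; psi: wetting-front suction head [mm]; d_theta: moisture deficit."""
          F = 1e-3                      # cumulative infiltration [mm], small non-zero start
          runoff = []
          for i_rain in rain_mm_hr:
              fp = K * (1.0 + psi * d_theta / F)          # infiltration capacity [mm/h]
              f = min(i_rain, fp)                         # actual infiltration rate
              F += f * dt_hr
              runoff.append(max(i_rain - f, 0.0) * dt_hr) # infiltration-excess depth [mm]
          return np.array(runoff)

      # hypothetical 6-hour storm sampled every 0.1 h
      storm = np.concatenate([np.full(20, 5.0), np.full(20, 40.0), np.full(20, 15.0)])
      excess = green_ampt_runoff(storm)
      print("total runoff depth [mm]:", round(excess.sum(), 1))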

  16. A review on real time physical measurement techniques and their attempt to predict wear-out status of IGBT

    DEFF Research Database (Denmark)

    Ghimire, Pramod; Beczkowski, Szymon; Munk-Nielsen, Stig

    2013-01-01

    Insulated Gate Bipolar Transistors (IGBTs) are key component in power converters. Reliability of power converters depend on wear-out process of power modules. A physical parameter such as the on-state collector-emitter voltage (Vce) shows the status of degradation of the IGBT after a certain cycles...... of difficulties in the measurement, the offline and online Vce measurement topologies are implemented to study the reliability of the power converters. This paper presents a review in wear-out prediction methods of IGBT power modules and freewheeling diodes based on the real time Vce measurement. The measurement...

  17. NAPR: a Cloud-Based Framework for Neuroanatomical Age Prediction.

    Science.gov (United States)

    Pardoe, Heath R; Kuzniecky, Ruben

    2018-01-01

    The availability of cloud computing services has enabled the widespread adoption of the "software as a service" (SaaS) approach for software distribution, which utilizes network-based access to applications running on centralized servers. In this paper we apply the SaaS approach to neuroimaging-based age prediction. Our system, named "NAPR" (Neuroanatomical Age Prediction using R), provides access to predictive modeling software running on a persistent cloud-based Amazon Web Services (AWS) compute instance. The NAPR framework allows external users to estimate the age of individual subjects using cortical thickness maps derived from their own locally processed T1-weighted whole brain MRI scans. As a demonstration of the NAPR approach, we have developed two age prediction models that were trained using healthy control data from the ABIDE, CoRR, DLBS and NKI Rockland neuroimaging datasets (total N = 2367, age range 6-89 years). The provided age prediction models were trained using (i) relevance vector machines and (ii) Gaussian processes machine learning methods applied to cortical thickness surfaces obtained using Freesurfer v5.3. We believe that this transparent approach to out-of-sample evaluation and comparison of neuroimaging age prediction models will facilitate the development of improved age prediction models and allow for robust evaluation of the clinical utility of these methods.
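
    A minimal sketch of Gaussian-process age prediction from cortical thickness features using scikit-learn (all data are synthetic; NAPR's actual Freesurfer-derived inputs, kernel choices, cloud deployment and relevance vector machine alternative are not reproduced):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(3)
      # stand-ins: rows are region-averaged cortical thickness features per subject
      thickness = rng.normal(2.5, 0.3, size=(400, 68))     # 400 training subjects x 68 regions
      age = 50 - 60 * (thickness.mean(axis=1) - 2.5) + rng.normal(0, 5, 400)   # toy relation

      kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(thickness, age)

      new_subject = rng.normal(2.4, 0.3, size=(1, 68))     # one locally processed scan
      pred, std = gpr.predict(new_subject, return_std=True)
      print(f"predicted age: {pred[0]:.1f} +/- {std[0]:.1f} years")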

  18. Prediction of highly expressed genes in microbes based on chromatin accessibility

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2007-02-01

    Full Text Available Abstract Background It is well known that gene expression is dependent on chromatin structure in eukaryotes and it is likely that chromatin can play a role in bacterial gene expression as well. Here, we use a nucleosomal position preference measure of anisotropic DNA flexibility to predict highly expressed genes in microbial genomes. We compare these predictions with those based on codon adaptation index (CAI values, and also with experimental data for 6 different microbial genomes, with a particular interest in experimental data from Escherichia coli. Moreover, position preference is examined further in 328 sequenced microbial genomes. Results We find that absolute gene expression levels are correlated with the position preference in many microbial genomes. It is postulated that in these regions, the DNA may be more accessible to the transcriptional machinery. Moreover, ribosomal proteins and ribosomal RNA are encoded by DNA having significantly lower position preference values than other genes in fast-replicating microbes. Conclusion This insight into DNA structure-dependent gene expression in microbes may be exploited for predicting the expression of non-translated genes such as non-coding RNAs that may not be predicted by any of the conventional codon usage bias approaches.

  19. Measuring Personality in Context: Improving Predictive Accuracy in Selection Decision Making

    OpenAIRE

    Hoffner, Rebecca Ann

    2009-01-01

    This study examines the accuracy of a context-sensitive (i.e., goal dimensions) measure of personality compared to a traditional measure of personality (NEO-PI-R) and generalized self-efficacy (GSE) to predict variance in task performance. The goal dimensions measure takes a unique perspective in the conceptualization of personality. While traditional measures differentiate within person and collapse across context (e.g., Big Five), the goal dimensions measure employs a hierarchical structure...

  20. Can pre-season fitness measures predict time to injury in varsity athletes?: a retrospective case control study

    Directory of Open Access Journals (Sweden)

    Kennedy Michael D

    2012-07-01

    Full Text Available Abstract Background The ability to determine athletic performance in varsity athletes using preseason measures has been established. The ability of pre-season performance measures and athletes' exposure to predict the incidence of injuries is unclear. Thus our purpose was to determine the ability of pre-season measures of athletic performance to predict time to injury in varsity athletes. Methods Male and female varsity athletes competing in basketball, volleyball and ice hockey participated in this study. The main outcome measures were injury prevalence, time to injury (based on calculated exposure) and pre-season fitness measures as predictors of time to injury. Fitness measures were Apley's range of motion, push-ups, curl-ups, vertical jump, modified Illinois agility and sit-and-reach. Cox regression models were used to identify which baseline fitness measures were predictors of time to injury. Results Seventy-six percent of the athletes reported one or more injuries. Mean times to initial injury were significantly different for females and males (40.6% and 66.1% of the total season, respectively). A significant univariate correlation was observed between push-up performance and time to injury (Pearson's r = 0.332). No preseason fitness measure affected the hazard of injury. Regardless of sport, female athletes had a significantly shorter time to injury than males (hazard ratio = 2.2). Athletes playing volleyball had a significantly shorter time to injury (hazard ratio = 4.2) compared to those playing hockey or basketball. Conclusions When accounting for exposure, gender, sport and fitness measures, prediction of time to injury was influenced most heavily by gender and sport.
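
    A small sketch of a Cox proportional-hazards fit for exposure-based time to injury, using the lifelines package and invented athlete records (covariate names follow the abstract; the penalizer is added only to stabilize the toy fit and is not part of the study's analysis):

      import pandas as pd
      from lifelines import CoxPHFitter

      # hypothetical athlete records: time to first injury as a fraction of the season,
      # with injured = 0 marking athletes censored at season end
      df = pd.DataFrame({
          "time_to_injury": [0.25, 0.41, 0.66, 1.00, 0.35, 0.80, 0.52, 1.00, 0.30, 0.75],
          "injured":        [1,    1,    1,    0,    1,    0,    1,    0,    1,    1],
          "female":         [1,    1,    0,    0,    1,    0,    1,    0,    1,    0],
          "pushups":        [18,   25,   40,   35,   22,   38,   20,   42,   17,   33],
          "volleyball":     [1,    0,    0,    1,    1,    0,    1,    0,    1,    0],
      })

      cph = CoxPHFitter(penalizer=0.1)
      cph.fit(df, duration_col="time_to_injury", event_col="injured")
      cph.print_summary()          # hazard ratios for gender, sport and fitness covariates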

  1. Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases

    Directory of Open Access Journals (Sweden)

    Peng Lu

    2018-01-01

    Full Text Available Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and the variance of predictions from shallow neural networks is large. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined independently, and unsupervised training and supervised optimization are combined. This ensures the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively.

  2. GIMDA: Graphlet interaction-based MiRNA-disease association prediction.

    Science.gov (United States)

    Chen, Xing; Guan, Na-Na; Li, Jian-Qiang; Yan, Gui-Ying

    2018-03-01

    MicroRNAs (miRNAs) have been confirmed to be closely related to various human complex diseases by many experimental studies. It is necessary and valuable to develop powerful and effective computational models to predict potential associations between miRNAs and diseases. In this work, we presented a prediction model of Graphlet Interaction for MiRNA-Disease Association prediction (GIMDA) by integrating the disease semantic similarity, miRNA functional similarity, Gaussian interaction profile kernel similarity and the experimentally confirmed miRNA-disease associations. The relatedness score of a miRNA to a disease was calculated by measuring the graphlet interactions between two miRNAs or two diseases. The novelty of GIMDA lies in the use of graphlet interactions to analyse the complex relationships between two nodes in a graph. The AUCs of GIMDA in global and local leave-one-out cross-validation (LOOCV) turned out to be 0.9006 and 0.8455, respectively. The average result of five-fold cross-validation reached 0.8927 ± 0.0012. In case studies for colon neoplasms, kidney neoplasms and prostate neoplasms based on the HMDD V2.0 database, 45, 45 and 41 of the top 50 potential miRNAs predicted by GIMDA were validated by dbDEMC and miR2Disease. Additionally, in the case study of new diseases without any known associated miRNAs and the case study of predicting potential miRNA-disease associations using HMDD V1.0, high percentages of the top 50 miRNAs were also verified by the experimental literature. © 2017 The Authors. Journal of Cellular and Molecular Medicine published by John Wiley & Sons Ltd and Foundation for Cellular and Molecular Medicine.

  3. Spatial Variability of Soil-Water Storage in the Southern Sierra Critical Zone Observatory: Measurement and Prediction

    Science.gov (United States)

    Oroza, C.; Bales, R. C.; Zheng, Z.; Glaser, S. D.

    2017-12-01

    Predicting the spatial distribution of soil moisture in mountain environments is confounded by multiple factors, including complex topography, spatial variability of soil texture, sub-surface flow paths, and snow-soil interactions. While remote-sensing tools such as passive-microwave monitoring can measure the spatial variability of soil moisture, they only capture near-surface soil layers. Large-scale sensor networks are increasingly providing soil-moisture measurements at high temporal resolution across a broader range of depths than are accessible from remote sensing. It may be possible to combine these in-situ measurements with high-resolution LIDAR topography and canopy cover to estimate the spatial distribution of soil moisture at high spatial resolution at multiple depths. We study the feasibility of this approach using six years (2009-2014) of daily volumetric water content measurements at 10-, 30-, and 60-cm depths from the Southern Sierra Critical Zone Observatory. A non-parametric, multivariate regression algorithm, Random Forest, was used to predict the spatial distribution of depth-integrated soil-water storage, based on the in-situ measurements and a combination of node attributes (topographic wetness, northness, elevation, soil texture, and location with respect to canopy cover). We observe predictable patterns of predictor accuracy and independent-variable ranking during the six-year study period. Predictor accuracy is highest during the snow-cover and early recession periods but declines during the dry period. Soil texture has consistently high feature importance. Other landscape attributes exhibit seasonal trends: northness peaks during the wet-up period, and elevation and topographic-wetness index peak during the recession and dry periods, respectively.
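
    A minimal Random Forest sketch in the spirit of the approach described above, with invented node attributes and storage values standing in for the Critical Zone Observatory data:

      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(4)
      n = 60
      # hypothetical node attributes and depth-integrated soil-water storage for one day
      df = pd.DataFrame({
          "twi": rng.uniform(2, 12, n),             # topographic wetness index
          "northness": rng.uniform(-1, 1, n),
          "elevation_m": rng.uniform(1700, 2100, n),
          "clay_frac": rng.uniform(0.05, 0.35, n),  # soil-texture surrogate
          "under_canopy": rng.integers(0, 2, n),
      })
      storage_mm = (5 * df.twi + 100 * df.clay_frac + 10 * df.under_canopy
                    + rng.normal(0, 5, n))

      rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
      rf.fit(df, storage_mm)
      print("out-of-bag R^2:", round(rf.oob_score_, 2))
      print(dict(zip(df.columns, rf.feature_importances_.round(2))))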

  4. Accumulating Data to Optimally Predict Obesity Treatment (ADOPT) Core Measures: Behavioral Domain.

    Science.gov (United States)

    Lytle, Leslie A; Nicastro, Holly L; Roberts, Susan B; Evans, Mary; Jakicic, John M; Laposky, Aaron D; Loria, Catherine M

    2018-04-01

    The ability to identify and measure behaviors that are related to weight loss and the prevention of weight regain is crucial to understanding the variability in response to obesity treatment and the development of tailored treatments. The overarching goal of the Accumulating Data to Optimally Predict obesity Treatment (ADOPT) Core Measures Project is to provide obesity researchers with guidance on a set of constructs and measures that are related to weight control and that span and integrate obesity-related behavioral, biological, environmental, and psychosocial domains. This article describes how the behavioral domain subgroup identified the initial list of high-priority constructs and measures to be included, and it describes practical considerations for assessing the following four behavioral areas: eating, activity, sleep, and self-monitoring of weight. Challenges and considerations for advancing the science related to weight loss and maintenance behaviors are also discussed. Assessing a set of core behavioral measures in combination with those from other ADOPT domains is critical to improve our understanding of individual variability in response to adult obesity treatment. The selection of behavioral measures is based on the current science, although there continues to be much work needed in this field. © 2018 The Obesity Society.

  5. Moving Object Tracking and Avoidance Algorithm for Differential Driving AGV Based on Laser Measurement Technology

    Directory of Open Access Journals (Sweden)

    Pandu Sandi Pratama

    2012-12-01

    Full Text Available This paper proposes an algorithm to track obstacle positions and avoid moving objects for a differential-drive Automatic Guided Vehicle (AGV) system in an industrial environment. The algorithm has several abilities: detecting moving objects, predicting the velocity and direction of moving objects, predicting the collision possibility, and planning the avoidance maneuver. For sensing the local environment and positioning, the laser measurement system LMS-151 and laser navigation system NAV-200 are applied. Based on the measurement results of the sensors, the stationary and moving obstacles are detected and the collision possibility is calculated. The velocity and direction of the obstacle are predicted using a Kalman filter algorithm. Collision possibility, time, and position can be calculated by comparing the AGV movement with the obstacle prediction obtained by the Kalman filter. Finally, the avoidance maneuver is decided using the well-known tangent Bug algorithm based on these calculations. The effectiveness of the proposed algorithm is verified using simulation and experiment. Several experimental conditions with stationary and moving obstacles are presented. The simulation and experiment results show that the AGV can detect and avoid the obstacles successfully in all experimental conditions. [Keywords— Obstacle avoidance, AGV, differential drive, laser measurement system, laser navigation system].
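
    The obstacle-tracking step can be illustrated with a generic constant-velocity Kalman filter, as in the minimal sketch below. This is only an illustration of the filtering idea, not the paper's LMS-151-based implementation; the scan period and noise covariances are assumed tuning values.

      # Constant-velocity Kalman filter sketch for estimating an obstacle's
      # position and velocity from 2-D range measurements (synthetic data).
      import numpy as np

      dt = 0.1                                    # scan period [s], assumed
      F = np.array([[1., 0., dt, 0.],             # state: [x, y, vx, vy]
                    [0., 1., 0., dt],
                    [0., 0., 1., 0.],
                    [0., 0., 0., 1.]])
      H = np.array([[1., 0., 0., 0.],             # the laser yields position only
                    [0., 1., 0., 0.]])
      Q = 0.01 * np.eye(4)                        # process noise (tuning guess)
      R = 0.05 * np.eye(2)                        # measurement noise (tuning guess)

      def kf_step(x, P, z):
          # predict
          x = F @ x
          P = F @ P @ F.T + Q
          # update with measurement z = [x_obs, y_obs]
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(4) - K @ H) @ P
          return x, P

      x, P = np.zeros(4), np.eye(4)
      for k in range(50):                         # synthetic obstacle moving at (1.0, 0.5) m/s
          z = np.array([1.0, 0.5]) * k * dt + np.random.normal(0, 0.05, 2)
          x, P = kf_step(x, P, z)
      print("estimated velocity:", x[2:].round(2))   # should approach [1.0, 0.5]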

  6. Prediction based active ramp metering control strategy with mobility and safety assessment

    Science.gov (United States)

    Fang, Jie; Tu, Lili

    2018-04-01

    Ramp metering (RM) is one of the most direct and efficient motorway traffic-flow management measures for improving traffic conditions. However, owing to the lack of traffic-condition prediction in earlier studies, the impact of the applied RM control on traffic-flow dynamics was not quantitatively evaluated. In this study, an RM control algorithm adopting a Model Predictive Control (MPC) framework to predict and assess future traffic conditions is presented, taking both the current traffic conditions and the RM-controlled future traffic states into consideration. The designed RM control algorithm targets the optimization of network mobility and safety performance. The algorithm is evaluated in a field-data-based simulation. Comparison of the controlled and uncontrolled scenarios shows that the proposed RM control algorithm can effectively relieve network congestion with no significant compromise in safety.

  7. Outcome prediction in home- and community-based brain injury rehabilitation using the Mayo-Portland Adaptability Inventory.

    Science.gov (United States)

    Malec, James F; Parrot, Devan; Altman, Irwin M; Swick, Shannon

    2015-01-01

    The objective of the study was to develop statistical formulas to predict levels of community participation on discharge from post-hospital brain injury rehabilitation using retrospective data analysis. Data were collected from seven geographically distinct programmes in a home- and community-based brain injury rehabilitation provider network. Participants were 642 individuals with post-traumatic brain injury. Interventions consisted of home- and community-based brain injury rehabilitation. The main outcome measure was the Mayo-Portland Adaptability Inventory (MPAI-4) Participation Index. Linear discriminant models using admission MPAI-4 Participation Index score and log chronicity correctly predicted excellent (no to minimal participation limitations), very good (very mild participation limitations), good (mild participation limitations), and limited (significant participation limitations) outcome levels at discharge. Predicting broad outcome categories for post-hospital rehabilitation programmes based on admission assessment data appears feasible and valid. Equations to provide patients and families with probability statements on admission about expected levels of outcome are provided. It is unknown to what degree these prediction equations can be reliably and validly applied in other settings.

  8. Morphology-based prediction of osteogenic differentiation potential of human mesenchymal stem cells.

    Directory of Open Access Journals (Sweden)

    Fumiko Matsuoka

    Full Text Available Human bone marrow mesenchymal stem cells (hBMSCs) are a widely used cell source for clinical bone regeneration. Achieving the greatest therapeutic effect is dependent on the osteogenic differentiation potential of the stem cells to be implanted. However, there are still no practical methods to characterize such potential non-invasively in advance. Monitoring cellular morphology is a practical and non-invasive approach for evaluating osteogenic potential. Unfortunately, such image-based approaches have historically been qualitative and required experienced interpretation. By combining the non-invasive attributes of microscopy with the latest technology allowing higher throughput and quantitative imaging metrics, we studied the applicability of morphometric features to quantitatively predict cellular osteogenic potential. We applied computational machine learning, combining cell morphology features with their corresponding biochemical osteogenic assay results, to develop a prediction model of osteogenic differentiation. Using a dataset of 9,990 images automatically acquired by BioStation CT during osteogenic differentiation culture of hBMSCs, 666 morphometric features were extracted as parameters. Two commonly used osteogenic markers, alkaline phosphatase (ALP) activity and calcium deposition, were measured experimentally and used as the true biological differentiation status to validate the prediction accuracy. Using time-course morphological features throughout differentiation culture, the prediction results correlated highly with the experimentally defined differentiation marker values (R>0.89) for both marker predictions. The clinical applicability of our morphology-based prediction was further examined with two scenarios: one using only historical cell images and the other using both historical images together with the patient's own cell images to predict a new patient's cellular potential. The prediction accuracy was found to be greatly enhanced
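
    The core idea, regressing a biochemical marker on morphometric features and checking the predicted-versus-measured correlation, can be sketched as follows. The feature matrix and ALP values below are random stand-ins, not the BioStation CT dataset, and the regressor choice is an assumption.

      # Sketch: regress a marker value on morphometric features and report the
      # cross-validated Pearson correlation between predicted and measured values.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_predict
      from scipy.stats import pearsonr

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 666))                     # stand-in morphometric features
      alp = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=200)   # stand-in marker

      model = RandomForestRegressor(n_estimators=200, random_state=0)
      pred = cross_val_predict(model, X, alp, cv=5)
      r, _ = pearsonr(pred, alp)
      print(f"cross-validated Pearson R = {r:.2f}")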

  9. Reducing overlay sampling for APC-based correction per exposure by replacing measured data with computational prediction

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Oh, Jong Hun; Kim, Hyun Sik; Sung, Jun Ha; Kea, Marc

    2016-03-01

    One of the keys to successful mass production of sub-20nm nodes in the semiconductor industry is the development of an overlay correction strategy that can meet specifications, reduce the number of layers that require dedicated chuck overlay, and minimize measurement time. Three important aspects of this strategy are: correction per exposure (CPE), integrated metrology (IM), and the prioritization of automated correction over manual subrecipes. The first and third aspects are accomplished through an APC system that uses measurements from production lots to generate CPE corrections that are dynamically applied to future lots. The drawback of this method is that production overlay sampling must be extremely high in order to provide the system with enough data to generate CPE. That drawback makes IM particularly difficult because of the throughput impact that can be created on expensive bottleneck photolithography process tools. The goal is to realize the cycle time and feedback benefits of IM coupled with the enhanced overlay correction capability of automated CPE without impacting process tool throughput. This paper will discuss the development of a system that sends measured data with reduced sampling via an optimized layout to the exposure tool's computational modelling platform to predict and create "upsampled" overlay data in a customizable output layout that is compatible with the fab user CPE APC system. The result is dynamic CPE without the burden of extensive measurement time, which leads to increased utilization of IM.

  10. Micromechanics-based damage model for failure prediction in cold forming

    Energy Technology Data Exchange (ETDEWEB)

    Lu, X.Z.; Chan, L.C., E-mail: lc.chan@polyu.edu.hk

    2017-04-06

    The purpose of this study was to develop a micromechanics-based damage (micro-damage) model that was concerned with the evolution of micro-voids for failure prediction in cold forming. Typical stainless steel SS316L was selected as the specimen material, and the nonlinear isotropic hardening rule was extended to describe the large deformation of the specimen undergoing cold forming. A micro-focus high-resolution X-ray computed tomography (CT) system was employed to trace and measure the micro-voids inside the specimen directly. Three-dimensional (3D) representative volume element (RVE) models with different sizes and spatial locations were reconstructed from the processed CT images of the specimen, and the average size and volume fraction of micro-voids (VFMV) for the specimen were determined via statistical analysis. Subsequently, the micro-damage model was compiled as a user-defined material subroutine into the finite element (FE) package ABAQUS. The stress-strain responses and damage evolutions of SS316L specimens under tensile and compressive deformations at different strain rates were predicted and further verified experimentally. It was concluded that the proposed micro-damage model is convincing for failure prediction in cold forming of the SS316L material.

  11. Results on three predictions for July 2012 federal elections in Mexico based on past regularities.

    Directory of Open Access Journals (Sweden)

    H Hernández-Saldaña

    Full Text Available The Presidential Election in Mexico of July 2012 has been the third time that PREP, the Previous Electoral Results Program, has been in operation. PREP gives voting outcomes based on electoral certificates of each polling station that arrive at capture centers. In previous elections, some statistical regularities had been observed; three of them were selected to make predictions and were published in arXiv:1207.0078 [physics.soc-ph]. Using the database made public in July 2012, two of the predictions were completely fulfilled, while the third one was measured and confirmed using the database obtained upon request to the electoral authorities. The first two predictions confirmed by actual measures are: (ii) The Partido Revolucionario Institucional, PRI, is a sprinter and has a better performance in polling stations arriving late to capture centers during the process. (iii) Distribution of vote of this party is well described by a smooth function named a Daisy model. A Gamma distribution, but compatible with a Daisy model, fits the distribution as well. The third prediction confirms that errare humanum est, since the error distributions of all the self-consistency variables appeared as a central power law with lateral lobes as in 2000 and 2006 electoral processes. The three measured regularities appeared no matter the political environment.

  12. Results on three predictions for July 2012 federal elections in Mexico based on past regularities.

    Science.gov (United States)

    Hernández-Saldaña, H

    2013-01-01

    The Presidential Election in Mexico of July 2012 has been the third time that PREP, the Previous Electoral Results Program, has been in operation. PREP gives voting outcomes based on electoral certificates of each polling station that arrive at capture centers. In previous elections, some statistical regularities had been observed; three of them were selected to make predictions and were published in arXiv:1207.0078 [physics.soc-ph]. Using the database made public in July 2012, two of the predictions were completely fulfilled, while the third one was measured and confirmed using the database obtained upon request to the electoral authorities. The first two predictions confirmed by actual measures are: (ii) The Partido Revolucionario Institucional, PRI, is a sprinter and has a better performance in polling stations arriving late to capture centers during the process. (iii) Distribution of vote of this party is well described by a smooth function named a Daisy model. A Gamma distribution, but compatible with a Daisy model, fits the distribution as well. The third prediction confirms that errare humanum est, since the error distributions of all the self-consistency variables appeared as a central power law with lateral lobes as in 2000 and 2006 electoral processes. The three measured regularities appeared no matter the political environment.

  13. Implementation of neural network based non-linear predictive

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1998-01-01

    The paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum-phase systems, but its extension to the control of non-linear systems has also been proposed. GPC is model-based, and in this paper we propose the use of a neural network for modeling the system. Based on the neural network model, a controller with an extended control horizon is developed and the implementation issues are discussed, with particular emphasis on an efficient Quasi-Newton optimization algorithm. The performance is demonstrated on a pneumatic servo system.

  14. Measurement and ANN prediction of pH-dependent solubility of nitrogen-heterocyclic compounds.

    Science.gov (United States)

    Sun, Feifei; Yu, Qingni; Zhu, Jingke; Lei, Lecheng; Li, Zhongjian; Zhang, Xingwang

    2015-09-01

    Based on the solubility of 25 nitrogen-heterocyclic compounds (NHCs) measured by the saturation shake-flask method, an artificial neural network (ANN) was employed to study the quantitative relationship between the structure and pH-dependent solubility of NHCs. With a genetic algorithm-multivariate linear regression (GA-MLR) approach, five out of the 1497 molecular descriptors computed by Dragon software were selected to describe the molecular structures of NHCs. Using the five selected molecular descriptors as well as pH and the partial charge on the nitrogen atom of NHCs (QN) as inputs to the ANN, a quantitative structure-property relationship (QSPR) model without using the Henderson-Hasselbalch (HH) equation was successfully developed to predict the aqueous solubility of NHCs in different pH water solutions. The prediction model performed well on the 25 model NHCs with an absolute average relative deviation (AARD) of 5.9%, while the HH approach gave an AARD of 36.9% for the same model NHCs. It was found that QN played a very important role in the description of NHCs and, with QN, ANN became a potential tool for the prediction of pH-dependent solubility of NHCs. Copyright © 2015 Elsevier Ltd. All rights reserved.
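
    The structure of such a QSPR model, a small neural network mapping five descriptors plus pH and QN to solubility, can be sketched as below. The data are synthetic stand-ins, not the measured NHC solubilities, and the network size is an assumed choice.

      # Sketch of a QSPR-style ANN: inputs = [5 descriptors, pH, QN], output = solubility.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(1)
      X = rng.normal(size=(150, 7))                     # stand-in [descriptors, pH, QN]
      y = np.exp(0.3 * (X @ rng.normal(size=7)))        # positive stand-in solubilities

      ann = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                       random_state=1))
      ann.fit(X, y)
      aard = np.mean(np.abs((ann.predict(X) - y) / y)) * 100
      print(f"AARD on this synthetic set: {aard:.1f}%")  # the paper reports 5.9% on its data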

  15. Deep-Learning-Based Approach for Prediction of Algal Blooms

    Directory of Open Access Journals (Sweden)

    Feng Zhang

    2016-10-01

    Full Text Available Algal blooms have recently become a critical global environmental concern which might put economic development and sustainability at risk. However, the accurate prediction of algal blooms remains a challenging scientific problem. In this study, a novel prediction approach for algal blooms based on deep learning is presented—a powerful tool to represent and predict highly dynamic and complex phenomena. The proposed approach constructs a five-layered model to extract detailed relationships between the density of phytoplankton cells and various environmental parameters. The algal blooms can be predicted by the phytoplankton density obtained from the output layer. A case study is conducted in coastal waters of East China using both our model and a traditional back-propagation neural network for comparison. The results show that the deep-learning-based model yields better generalization and greater accuracy in predicting algal blooms than a traditional shallow neural network does.

  16. Measurement and prediction of indoor air quality using a breathing thermal manikin.

    Science.gov (United States)

    Melikov, A; Kaczmarczyk, J

    2007-02-01

    The analyses performed in this paper reveal that a breathing thermal manikin with realistic simulation of respiration including breathing cycle, pulmonary ventilation rate, frequency and breathing mode, gas concentration, humidity and temperature of exhaled air and human body shape and surface temperature is sensitive enough to perform reliable measurement of characteristics of air as inhaled by occupants. The temperature, humidity, and pollution concentration in the inhaled air can be measured accurately with a thermal manikin without breathing simulation, provided the measurements are taken at the upper lip. Proper simulation of breathing, especially of exhalation, is needed for studying the transport of exhaled air between occupants. A method for predicting air acceptability based on inhaled air parameters and known exposure-response relationships established in experiments with human subjects is suggested. Recommendations for optimal simulation of human breathing by means of a breathing thermal manikin when studying pollution concentration, temperature and humidity of the inhaled air as well as the transport of exhaled air (which may carry infectious agents) between occupants are outlined. In order to compare results obtained with breathing thermal manikins, their nose and mouth geometry should be standardized.

  17. Lightning Prediction using Electric Field Measurements Associated with Convective Events at a Tropical Location

    Science.gov (United States)

    Jana, S.; Chakraborty, R.; Maitra, A.

    2017-12-01

    Nowcasting of lightning activities during intense convective events using a single electric field monitor (EFM) has been carried out at a tropical location, Kolkata (22.65°N, 88.45°E). Before and at the onset of heavy lightning, certain changes of electric field (EF) can be related to high liquid water content (LWC) and low cloud base height (CBH). The present study discusses the utility of EF observation for characterizing convective events. Large convective clouds, indicated by high LWC and low CBH, can be detected from EF variations, which can serve as a precursor of upcoming convective events. Suitable values of EF gradient can be used as an indicator of impending lightning events. An EF variation of 0.195 kV/m/min can predict lightning within a 17.5 km radius with a probability of detection (POD) of 91% and a false alarm rate (FAR) of 8%, with a lead time of 45 min. The total number of predicted lightning strikes is nearly nine times lower than that measured by the lightning detector. This prediction technique can, therefore, give an estimate of cloud-to-ground (CG) and intra-cloud (IC) lightning occurrences within the surrounding area. This prediction technique, involving POD, FAR and lead-time information, shows a better prediction capability compared to the techniques reported earlier. Thus an EFM can be effectively used for prediction of lightning events at a tropical location.
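
    For readers unfamiliar with the skill scores quoted above, the sketch below computes POD and FAR (false-alarm ratio convention) from contingency counts. The warning/event arrays are toy placeholders, so the printed values will not match the paper's 91% and 8%.

      # Sketch: POD and FAR for threshold-based lightning warnings (toy data).
      import numpy as np

      warned   = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1], dtype=bool)  # EF-gradient alarm issued
      observed = np.array([1, 1, 0, 1, 0, 0, 0, 1, 1, 1], dtype=bool)  # lightning actually occurred

      hits         = np.sum(warned & observed)
      misses       = np.sum(~warned & observed)
      false_alarms = np.sum(warned & ~observed)

      pod = hits / (hits + misses)                  # probability of detection
      far = false_alarms / (hits + false_alarms)    # false-alarm ratio
      print(f"POD = {pod:.0%}, FAR = {far:.0%}")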

  18. A Method for Driving Route Predictions Based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Ning Ye

    2015-01-01

    Full Text Available We present a driving route prediction method that is based on a Hidden Markov Model (HMM). This method can accurately predict a vehicle’s entire route as early in a trip’s lifetime as possible without inputting origins and destinations beforehand. Firstly, we propose the route recommendation system architecture, in which route predictions play an important role. Secondly, we define a road network model, normalize each driving route in the rectangular coordinate system, and build the HMM in preparation for route prediction, using a method of training-set extension based on K-means++ and the add-one (Laplace) smoothing technique. Thirdly, we present the route prediction algorithm. Finally, experimental results demonstrating the effectiveness of the HMM-based route predictions are shown.
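
    The add-one (Laplace) smoothing step mentioned above can be illustrated with a minimal transition model over road segments: unseen transitions keep a small nonzero probability, and the most probable continuation is ranked from smoothed counts. The segment IDs and routes below are hypothetical.

      # Sketch: Laplace-smoothed segment-to-segment transition probabilities.
      from collections import defaultdict

      routes = [["s1", "s2", "s3", "s5"],
                ["s1", "s2", "s4"],
                ["s2", "s3", "s5"]]
      segments = sorted({s for r in routes for s in r})

      counts = defaultdict(lambda: defaultdict(int))
      for r in routes:
          for a, b in zip(r, r[1:]):
              counts[a][b] += 1

      def transition_prob(a, b):
          # add-one smoothing so unseen transitions keep nonzero probability
          total = sum(counts[a].values()) + len(segments)
          return (counts[a][b] + 1) / total

      nxt = max(segments, key=lambda s: transition_prob("s2", s))
      print("most likely segment after s2:", nxt)        # -> s3 for these toy routes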

  19. Star-sensor-based predictive Kalman filter for satellite attitude estimation

    Institute of Scientific and Technical Information of China (English)

    林玉荣; 邓正隆

    2002-01-01

    A real-time attitude estimation algorithm, namely the predictive Kalman filter, is presented. This algorithm can accurately estimate the three-axis attitude of a satellite using only star sensor measurements. The implementation of the filter includes two steps: first, predicting the torque modeling error, and then estimating the attitude. Simulation results indicate that the predictive Kalman filter provides robust performance in the presence of both significant errors in the assumed model and in the initial conditions.

  20. Predicting live weight using body measurements in Afar goats in ...

    African Journals Online (AJOL)

    Bheema

    Predicting live weight using body measurements in Afar goats in north eastern. Ethiopia ... farmers get value for their stock rather than the middlemen. However ..... of West African long-legged and West African dwarf sheep in Northern Ghana.

  1. Optimization of condition-based asset management using a predictive health model

    NARCIS (Netherlands)

    Bajracharya, G.; Koltunowicz, T.; Negenborn, R.R.; Papp, Z.; Djairam, D.; De Schutter, B.; Smit, J.J.

    2009-01-01

    In this paper, a model predictive framework is used to optimize the operation and maintenance actions of power system equipment based on the predicted health state of this equipment. In particular, this framework is used to predict the health state of transformers based on their usage. The health

  2. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    Science.gov (United States)

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, where a method for selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and then predictions were made for 37 and 33 single years in these periods, respectively. The performance of three different fixed prediction bases (last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected through the GoF-optimal method reached the highest COV and the lowest DR and (ii) the best alternative strategy to GoF-optimal was the one using a prediction base of 5 years. The GoF-optimal approach can be used as a selection criterion in order to find an adequate prediction base. Copyright © 2015 Elsevier Ltd. All rights reserved.
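
    A much-simplified stand-in for the idea of choosing a prediction base by goodness of fit is sketched below: for each candidate base length, fit a log-linear trend to the most recent counts, score the fit, keep the best-scoring base, and extrapolate 5 years ahead. This is not the GoF-optimal method itself (which uses Poisson models and formal criteria); the series and the residual-variance score are illustrative assumptions.

      # Sketch: select a prediction base by in-sample fit of a log-linear trend.
      import numpy as np

      years  = np.arange(1990, 2011)
      counts = np.round(1000 * np.exp(0.02 * (years - 1990)))   # synthetic case counts

      def gof_and_forecast(k):
          y, t = np.log(counts[-k:]), years[-k:]
          slope, intercept = np.polyfit(t, y, 1)                # log-linear fit on last k years
          resid = y - (intercept + slope * t)
          gof = np.sum(resid**2) / max(k - 2, 1)                # residual variance as GoF score
          forecast = np.exp(intercept + slope * (years[-1] + np.arange(1, 6)))
          return gof, forecast

      best_k = min(range(5, 21), key=lambda k: gof_and_forecast(k)[0])
      print("selected prediction base:", best_k, "years")
      print("5-year forecast:", gof_and_forecast(best_k)[1].round())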

  3. Theoretical bases analysis of scientific prediction on marketing principles

    OpenAIRE

    A.S. Rosohata

    2012-01-01

    The article presents an overview of the categorical apparatus of scientific prediction and of the theoretical foundations of scientific forecasting, which are an integral part of the effective management of economic activities. Approaches to prediction taken by scientists in different fields of social science are reviewed, and modifications of the categories of scientific prediction based on marketing principles are proposed.

  4. Brachial cuff measurements of blood pressure during passive leg raising for fluid responsiveness prediction.

    Science.gov (United States)

    Lakhal, K; Ehrmann, S; Benzekri-Lefèvre, D; Runge, I; Legras, A; Dequin, P-F; Mercier, E; Wolff, M; Régnier, B; Boulain, T

    2012-05-01

    The passive leg raising maneuver (PLR) for fluid responsiveness testing relies on cardiac output (CO) measurements or invasive measurements of arterial pressure (AP) whereas the initial hemodynamic management during shock is often based solely on brachial cuff measurements. We assessed PLR-induced changes in noninvasive oscillometric readings to predict fluid responsiveness. Multicentre interventional study. In ICU sedated patients with circulatory failure, AP (invasive and noninvasive readings) and CO measurements were performed before, during PLR (trunk supine, not modified) and after 500-mL volume expansion. Areas under the ROC curves (AUC) were determined for fluid responsiveness (>10% volume expansion-induced increase in CO) prediction. In 112 patients (19% with arrhythmia), changes in noninvasive systolic AP during PLR (noninvasiveΔ(PLR)SAP) only predicted fluid responsiveness (cutoff 17%, n=21, positive likelihood ratio [LR] of 26 [18-38]), not unresponsiveness. If PLR-induced change in central venous pressure (CVP) was at least of 2 mm Hg (n=60), suggesting that PLR succeeded in altering cardiac preload, noninvasiveΔ(PLR)SAP performance was good: AUC of 0.94 [0.85-0.98], positive and negative LRs of 5.7 [4.6-6.8] and 0.07 [0.009-0.5], respectively, for a cutoff of 9%. Of note, invasive AP-derived indices did not outperform noninvasiveΔ(PLR)SAP. Regardless of CVP (i.e., during "blind PLR"), noninvasiveΔ(PLR)SAP more than 17% reliably identified fluid responders. During "CVP-guided PLR", in case of sufficient change in CVP, noninvasiveΔ(PLR)SAP performed better (cutoff of 9%). These findings, in sedated patients who had already undergone volume expansion and/or catecholamines, have to be verified during the early phase of circulatory failure (before an arterial line and/or a CO measuring device is placed). Copyright © 2012 Société française d’anesthésie et de réanimation (Sfar). Published by Elsevier SAS. All rights reserved.

  5. Measuring body composition using the bioelectrical impedance method can predict the outcomes of gemcitabine-based chemotherapy in patients with pancreatobiliary tract cancer.

    Science.gov (United States)

    Muramatsu, Mami; Tsuchiya, Aya; Ohta, Seiko; Iijima, Yukie; Maruyama, Miyuki; Onodera, Yoshiko; Hagihara, Megumi; Nakaya, Naoki; Sato, Itaru; Omura, Kenji; Ueno, Soichiro; Nakajima, Hideo

    2015-12-01

    In order to examine the effect of anticancer drug treatments on body composition, the body composition rate in patients being treated with gemcitabine (GEM)-based chemotherapy was measured over time on an outpatient basis with a simple body composition monitor using the bioelectrical impedance (BI) method. The results revealed a significant reduction in the body fat rate (P=0.01) over the course of treatment in patients with pancreatobiliary tract cancer who became unable to continue GEM-based chemotherapy due to progressive disease or a decreased performance status. Meanwhile, no changes were observed in the body composition of control patients with urothelial carcinoma receiving GEM-based chemotherapy. In association with the adverse reactions to GEM and the hematotoxicity profile, a decreased white blood cell count was more likely to occur in body fat-dominant patients (mean fat rate, 25.8%; mean muscle rate, 26.2%), whereas a decreased blood platelet count was more likely to occur in skeletal muscle-dominant patients (mean fat rate, 23.3%; mean muscle rate, 28.7%). The correlation between body composition parameters and the relative dose intensity (RDI) associated with GEM administration was also analyzed. The results revealed a positive correlation between the RDI and basal metabolism amount (P=0.03); however, the RDI did not correlate with the body fat rate, skeletal muscle rate or body mass index (P=0.61, P=0.14 and P=0.20, respectively). In conclusion, the body composition rate measurement using the BI method over time may be useful for predicting the outcome of GEM-based chemotherapy and adverse events in patients with pancreatobiliary tract cancer. In particular, the present findings indicate that the changes in body fat rate may be helpful as an adjunct index for assessing potential continuation of chemotherapy and changes in physical conditions.

  6. Predicting Future Reading Problems Based on Pre-reading Auditory Measures: A Longitudinal Study of Children with a Familial Risk of Dyslexia.

    Science.gov (United States)

    Law, Jeremy M; Vandermosten, Maaike; Ghesquière, Pol; Wouters, Jan

    2017-01-01

    Purpose: This longitudinal study examines measures of temporal auditory processing in pre-reading children with a family risk of dyslexia. Specifically, it attempts to ascertain whether pre-reading auditory processing, speech perception, and phonological awareness (PA) reliably predict later literacy achievement. Additionally, this study retrospectively examines the presence of pre-reading auditory processing, speech perception, and PA impairments in children later found to be literacy impaired. Method: Forty-four pre-reading children with and without a family risk of dyslexia were assessed at three time points (kindergarten, first, and second grade). Auditory processing measures of rise time (RT) discrimination and frequency modulation (FM) along with speech perception, PA, and various literacy tasks were assessed. Results: Kindergarten RT uniquely contributed to growth in literacy in grades one and two, even after controlling for letter knowledge and PA. Highly significant concurrent and predictive correlations were observed with kindergarten RT significantly predicting first grade PA. Retrospective analysis demonstrated atypical performance in RT and PA at all three time points in children who later developed literacy impairments. Conclusions: Although significant, kindergarten auditory processing contributions to later literacy growth lack the power to be considered as a single-cause predictor; thus results support temporal processing deficits' contribution within a multiple deficit model of dyslexia.

  7. Capillary pressure and saturation relations for supercritical CO2 and brine in sand: High-pressure Pc(Sw) controller/meter measurements and capillary scaling predictions

    Science.gov (United States)

    Tokunaga, Tetsu K.; Wan, Jiamin; Jung, Jong-Won; Kim, Tae Wook; Kim, Yongman; Dong, Wenming

    2013-08-01

    In geologic carbon sequestration, reliable predictions of CO2 storage require understanding the capillary behavior of supercritical (sc) CO2. Given the limited availability of measurements of the capillary pressure (Pc) dependence on water saturation (Sw) with scCO2 as the displacing fluid, simulations of CO2 sequestration commonly rely on modifying more familiar air/H2O and oil/H2O Pc(Sw) relations, adjusted to account for differences in interfacial tensions. In order to test such capillary scaling-based predictions, we developed a high-pressure Pc(Sw) controller/meter, allowing accurate Pc and Sw measurements. Drainage and imbibition processes were measured on quartz sand with scCO2-brine at pressures of 8.5 and 12.0 MPa (45°C), and air-brine at 21°C and 0.1 MPa. Drainage and rewetting at intermediate Sw levels shifted to Pc values that were from 30% to 90% lower than predicted based on interfacial tension changes. Augmenting interfacial tension-based predictions with differences in independently measured contact angles from different sources led to more similar scaled Pc(Sw) relations but still did not converge onto universal drainage and imbibition curves. Equilibrium capillary trapping of the nonwetting phases was determined for Pc = 0 during rewetting. The capillary-trapped volumes for scCO2 were significantly greater than for air. Given that the experiments were all conducted on a system with well-defined pore geometry (homogeneous sand), and that scCO2-brine interfacial tensions are fairly well constrained, we conclude that the observed deviations from scaling predictions resulted from scCO2-induced decreased wettability. Wettability alteration by scCO2 makes predicting hydraulic behavior more challenging than for less reactive fluids.
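
    The capillary scaling baseline against which the measurements were compared is, in essence, a rescaling of a reference Pc(Sw) curve by the ratio of interfacial tensions (optionally including contact angles). The short sketch below illustrates that scaling; the property values are rough, assumed numbers for illustration, not the paper's measured properties.

      # Sketch: interfacial-tension/contact-angle scaling of a capillary pressure curve,
      # Pc_scCO2(Sw) ~ Pc_air(Sw) * (sigma_CO2*cos(theta_CO2)) / (sigma_air*cos(theta_air)).
      import numpy as np

      pc_air = np.array([8.0, 5.0, 3.5, 2.5])          # kPa, stand-in air-brine drainage points
      sigma_air, theta_air = 72e-3, np.radians(0)      # N/m, deg (assumed)
      sigma_co2, theta_co2 = 30e-3, np.radians(40)     # N/m, deg (assumed)

      pc_co2_pred = pc_air * (sigma_co2 * np.cos(theta_co2)) / (sigma_air * np.cos(theta_air))
      print(pc_co2_pred.round(2))   # measured scCO2 Pc fell 30-90% below such scaled predictions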

  8. Verification of the directivity index and other measures of directivity in predicting directional benefit

    Science.gov (United States)

    Dittberner, Andrew; Bentler, Ruth

    2005-09-01

    The relationship between various directivity measures and subject performance with directional microphone hearing aids was determined. Test devices included first- and second-order directional microphones. Recordings of sentences and noise (Hearing in Noise Test, HINT) were made through each test device in simple, complex, and anisotropic background noise conditions. Twenty-six subjects, with normal hearing, were administered the HINT test recordings, and directional benefit was computed. These measures were correlated with theoretical, free-field, and KEMAR DI values, as well as front-to-back ratios, in situ SNRs, and a newly proposed Db-SNR, wherein a predictive value of the SNR improvement is calculated as a function of the noise source incidence. The different predictive scores showed high correlation with the measured directional benefit scores in the complex (diffuse-like) background noise condition (r=0.89-0.97). The Db-SNR approach and the in situ SNR measures provided excellent prediction of subject performance in all background noise conditions (r=0.85-0.97), whereas the correlation between the predictive measures and the effects of reverberation on the speech signal was only low to moderate (r=0.35-0.40, p<0.05).

  9. Liver stiffness measurement-based scoring system for significant inflammation related to chronic hepatitis B.

    Directory of Open Access Journals (Sweden)

    Mei-Zhu Hong

    Full Text Available Liver biopsy is indispensable because liver stiffness measurement alone cannot provide information on intrahepatic inflammation. However, the presence of fibrosis highly correlates with inflammation. We constructed a noninvasive model to determine significant inflammation in chronic hepatitis B patients by using liver stiffness measurement and serum markers. The training set included chronic hepatitis B patients (n = 327), and the validation set included 106 patients; liver biopsies were performed, liver histology was scored, and serum markers were investigated. All patients underwent liver stiffness measurement. An inflammation activity scoring system for significant inflammation was constructed. In the training set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.964, 91.9%, and 90.8% in the HBeAg(+) patients and 0.978, 85.0%, and 94.0% in the HBeAg(-) patients, respectively. In the validation set, the area under the curve, sensitivity, and specificity of the fibrosis-based activity score were 0.971, 90.5%, and 92.5% in the HBeAg(+) patients and 0.977, 95.2%, and 95.8% in the HBeAg(-) patients. The liver stiffness measurement-based activity score was comparable to that of the fibrosis-based activity score in both HBeAg(+) and HBeAg(-) patients for recognizing significant inflammation (G ≥ 3). Significant inflammation can be accurately predicted by this novel method. The liver stiffness measurement-based scoring system can be used without the aid of computers and provides a noninvasive alternative for the prediction of chronic hepatitis B-related significant inflammation.

  10. Predicting Taxi-Out Time at Congested Airports with Optimization-Based Support Vector Regression Methods

    Directory of Open Access Journals (Sweden)

    Guan Lian

    2018-01-01

    Full Text Available Accurate prediction of taxi-out time is a significant precondition for improving the operational efficiency of the departure process at an airport, as well as for reducing long taxi-out times, congestion, and excessive greenhouse-gas emissions. Unfortunately, several of the traditional methods of predicting taxi-out time perform unsatisfactorily at congested airports. This paper describes and tests three such conventional methods, namely the Generalized Linear Model, the Softmax Regression Model, and the Artificial Neural Network method, and two improved Support Vector Regression (SVR) approaches based on swarm-intelligence algorithm optimization, namely Particle Swarm Optimization (PSO) and the Firefly Algorithm. In order to improve the global searching ability of the Firefly Algorithm, an adaptive step factor and Lévy flight are implemented simultaneously when updating the location function. Six factors are analysed, of which delay is identified as one significant factor in congested airports. Through a series of specific dynamic analyses, a case study of Beijing International Airport (PEK) is tested with historical data. The performance measures show that the two proposed SVR approaches, especially the Improved Firefly Algorithm (IFA) optimization-based SVR method, not only achieve the best model fit and accuracy compared with the representative forecast models, but also deliver better predictive performance when dealing with abnormal taxi-out time states.

  11. Stereological study of the effect of ginger's alcoholic extract on the testis in busulfan-induced infertility in rats

    Directory of Open Access Journals (Sweden)

    Hossein Bordbar

    2013-06-01

    Full Text Available Background: In traditional medicine, Zingiber officinale is used to regulate the female menstrual cycle and to treat male infertility. Recent studies have suggested a possible role of ginger extract in improving the testicular damage caused by busulfan. Objective: The aim of this study was to evaluate the effects of Zingiber officinale on sperm parameters, testosterone level, and the volume of the testes and seminiferous tubules by stereological methods. Materials and Methods: Fifty rats were divided into four groups. All the rats were given a single intraperitoneal injection of 5 mg/kg busulfan solution. The first group was kept as the busulfan control, while the other groups were orally administered ginger extract in graded doses of 50, 100 and 150 mg/kg b.wt. for 48 consecutive days. At the end, all animals were anesthetized and their testes and vasa deferentia were removed, fixed, embedded, and stained. The volumes of the testes and seminiferous tubules were estimated by the Cavalieri method. Results: The results showed that Zingiber officinale increased the volume of the seminiferous tubules in the 100 mg/kg treated group compared to the control group. Sperm count (706×10⁵ and 682×10⁵) and the level of testosterone (50.90 ng/mL and 54.10 ng/mL) were enhanced in the 100 mg/kg and 150 mg/kg treated groups compared to the control group (p=0.00). Conclusion: It seems that Zingiber officinale stimulates the male reproductive system in busulfan-induced infertility

  12. Combined prediction model for supply risk in nuclear power equipment manufacturing industry based on support vector machine and decision tree

    International Nuclear Information System (INIS)

    Shi Chunsheng; Meng Dapeng

    2011-01-01

    The prediction index for supply risk is developed based on factor identification in the nuclear equipment manufacturing industry. The supply risk prediction model is established with support vector machine and decision tree methods, based on an investigation of 3 important nuclear power equipment manufacturing enterprises and 60 suppliers. A final case study demonstrates that the combined model is better than either single prediction model and confirms the feasibility and reliability of this model, which provides a method to evaluate suppliers and measure supply risk. (authors)
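
    One generic way to combine an SVM and a decision tree into a single risk classifier is a soft-voting ensemble, sketched below. This is only an illustration of the combination idea under assumed synthetic data; the paper's actual combination scheme and its 60-supplier dataset are not reproduced here.

      # Sketch: soft-voting ensemble of an SVM and a decision tree (synthetic data).
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.ensemble import VotingClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      X = rng.normal(size=(60, 6))                 # stand-in risk indicators per supplier
      y = (X[:, 0] + X[:, 1] > 0).astype(int)      # stand-in high/low supply-risk label

      combo = VotingClassifier(
          estimators=[("svm", SVC(probability=True)),
                      ("tree", DecisionTreeClassifier(max_depth=3))],
          voting="soft")
      print("CV accuracy:", cross_val_score(combo, X, y, cv=5).mean().round(2))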

  13. Clustering gene expression data based on predicted differential effects of GV interaction.

    Science.gov (United States)

    Pan, Hai-Yan; Zhu, Jun; Han, Dan-Fu

    2005-02-01

    Microarrays have become a popular biotechnology in biological and medical research. However, systematic and stochastic variabilities in microarray data are expected and unavoidable, resulting in the problem that the raw measurements have inherent "noise" within microarray experiments. Currently, logarithmic ratios are usually analyzed by various clustering methods directly, which may introduce biased interpretations in identifying groups of genes or samples. In this paper, a statistical method based on mixed model approaches was proposed for microarray data cluster analysis. The underlying rationale of this method is to partition the observed total gene expression level into various variations caused by different factors using an ANOVA model, and to predict the differential effects of GV (gene by variety) interaction using the adjusted unbiased prediction (AUP) method. The predicted GV interaction effects can then be used as the inputs of cluster analysis. We illustrated the application of our method with a gene expression dataset and elucidated the utility of our approach using an external validation.
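
    The flow of the approach, remove main effects from a gene-by-variety table, then cluster genes on the remaining interaction effects, can be illustrated with the simplified numeric sketch below. It uses plain residual interaction effects in place of the paper's mixed-model/AUP machinery, and the expression table is a random stand-in.

      # Simplified sketch: cluster genes on gene-by-variety interaction residuals.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(3)
      expr = rng.normal(size=(50, 6))              # 50 genes x 6 varieties (stand-in)

      grand = expr.mean()
      gene_eff = expr.mean(axis=1, keepdims=True) - grand
      var_eff  = expr.mean(axis=0, keepdims=True) - grand
      gv = expr - grand - gene_eff - var_eff       # interaction (residual) effects

      Z = linkage(gv, method="average", metric="correlation")
      labels = fcluster(Z, t=4, criterion="maxclust")
      print("cluster sizes:", np.bincount(labels)[1:])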

  14. Direct measurements of liquid film roughness for the prediction of annular flow pressure drop

    International Nuclear Information System (INIS)

    Ashwood, Andrea C.; Schubring, DuWayne; Shedd, Timothy A.

    2009-01-01

    A vertical two-phase (air-water) test section has been constructed to allow for detailed visualization of flow phenomena in the annular regime. The total internal reflection (TIR) technique for film thickness estimation, originally developed by Shedd and Newell (1998), has been adapted for use in this test section. This technique uses the pattern of diffuse light reflected from the gas-liquid interface to estimate the base film thickness, i.e., the thickness between large liquid waves. Measurement of base film thickness separately from the average film thickness, which couples base film and wave behavior, allows for consideration of separate effects from each of the two zones. A modified Hurlburt-Newell (2000) correlation that separates the flow into these two zones has been generated. Data regarding the relationship between average base film thickness and wave height, along with verification of the base film thickness measured from the TIR technique, were provided by planar laser-induced fluorescence (PLIF). For the present vertical air-water up flows with liquid superficial velocities ranging from 4 to 34 cm s⁻¹ and gas superficial velocities from 35 to 85 m s⁻¹, the modified Hurlburt-Newell correlation predicts pressure loss to within 10%. (author)

  15. Direct measurements of liquid film roughness for the prediction of annular flow pressure drop

    Energy Technology Data Exchange (ETDEWEB)

    Ashwood, Andrea C; Schubring, DuWayne; Shedd, Timothy A. [University of Wisconsin, Madison, WI (United States)], e-mail: cashwood@wisc.edu, e-mail: dlschubring@wisc.edu, e-mail: shedd@engr.wisc.edu

    2009-07-01

    A vertical two-phase (air-water) test section has been constructed to allow for detailed visualization of flow phenomena in the annular regime. The total internal reflection (TIR) technique for film thickness estimation, originally developed by Shedd and Newell (1998), has been adapted for use in this test section. This technique uses the pattern of diffuse light reflected from the gas-liquid interface to estimate the base film thickness, i.e., the thickness between large liquid waves. Measurement of base film thickness separately from the average film thickness, which couples base film and wave behavior, allows for consideration of separate effects from each of the two zones. A modified Hurlburt-Newell (2000) correlation that separates the flow into these two zones has been generated. Data regarding the relationship between average base film thickness and wave height, along with verification of the base film thickness measured from the TIR technique, were provided by planar laser-induced fluorescence (PLIF). For the present vertical air-water up flows with liquid superficial velocities ranging from 4 to 34 cm s⁻¹ and gas superficial velocities from 35 to 85 m s⁻¹, the modified Hurlburt-Newell correlation predicts pressure loss to within 10%. (author)

  16. Novel prediction- and subblock-based algorithm for fractal image compression

    International Nuclear Information System (INIS)

    Chung, K.-L.; Hsu, C.-H.

    2006-01-01

    Fractal encoding is the most consuming part in fractal image compression. In this paper, a novel two-phase prediction- and subblock-based fractal encoding algorithm is presented. Initially the original gray image is partitioned into a set of variable-size blocks according to the S-tree- and interpolation-based decomposition principle. In the first phase, each current block of variable-size range block tries to find the best matched domain block based on the proposed prediction-based search strategy which utilizes the relevant neighboring variable-size domain blocks. The first phase leads to a significant computation-saving effect. If the domain block found within the predicted search space is unacceptable, in the second phase, a subblock strategy is employed to partition the current variable-size range block into smaller blocks to improve the image quality. Experimental results show that our proposed prediction- and subblock-based fractal encoding algorithm outperforms the conventional full search algorithm and the recently published spatial-correlation-based algorithm by Truong et al. in terms of encoding time and image quality. In addition, the performance comparison among our proposed algorithm and the other two algorithms, the no search-based algorithm and the quadtree-based algorithm, are also investigated

  17. Proliferation assessment in breast carcinomas using digital image analysis based on virtual Ki67/cytokeratin double staining.

    Science.gov (United States)

    Røge, Rasmus; Riber-Hansen, Rikke; Nielsen, Søren; Vyberg, Mogens

    2016-07-01

    Manual estimation of Ki67 Proliferation Index (PI) in breast carcinoma classification is labor intensive and prone to intra- and interobserver variation. Standard Digital Image Analysis (DIA) has limitations due to issues with tumor cell identification. Recently, a computer algorithm, DIA based on Virtual Double Staining (VDS), segmenting Ki67-positive and -negative tumor cells using digitally fused parallel cytokeratin (CK) and Ki67-stained slides has been introduced. In this study, we compare VDS with manual stereological counting of Ki67-positive and -negative cells and examine the impact of the physical distance of the parallel slides on the alignment of slides. TMAs, containing 140 cores of consecutively obtained breast carcinomas, were stained for CK and Ki67 using optimized staining protocols. By means of stereological principles, Ki67-positive and -negative cell profiles were counted in sampled areas and used for the estimation of PIs of the whole tissue core. The VDS principle was applied to both the same sampled areas and the whole tissue core. Additionally, five neighboring slides were stained for CK in order to examine the alignment algorithm. Correlation between manual counting and VDS in both sampled areas and whole core was almost perfect (correlation coefficients above 0.97). Bland-Altman plots did not reveal any skewness in any data ranges. There was a good agreement in alignment (>85 %) in neighboring slides, whereas agreement decreased in non-neighboring slides. VDS gave similar results compared with manual counting using stereological principles. Introduction of this method in clinical and research practice may improve accuracy and reproducibility of Ki67 PI.

  18. Model-based prediction of nephropathia epidemica outbreaks based on climatological and vegetation data and bank vole population dynamics.

    Science.gov (United States)

    Haredasht, S Amirpour; Taylor, C J; Maes, P; Verstraeten, W W; Clement, J; Barrios, M; Lagrou, K; Van Ranst, M; Coppin, P; Berckmans, D; Aerts, J-M

    2013-11-01

    could be predicted 3 months ahead with a 34% mean relative prediction error (MRPE). This took into account solely the population dynamics of the carrier species (bank voles). The time series analysis also revealed that climate change, as represented by the vegetation index, changes in forest phenology derived from satellite images and directly measured air temperature, may affect the mechanics of NE transmission. NE outbreaks in Belgium were predicted 3 months ahead with a 40% MRPE, based only on the climatological and vegetation data, in this case, without any knowledge of the bank vole's population dynamics. In this research, we demonstrated that NE outbreaks can be predicted using climate and vegetation data or the bank vole's population dynamics, by using dynamic data-based models with time-varying parameters. Such a predictive modelling approach might be used as a step towards the development of new tools for the prevention of future NE outbreaks. © 2012 Blackwell Verlag GmbH.

  19. Threshold-based prediction of the coagulation zone in sequential temperature mapping in MR-guided radiofrequency ablation of liver tumours

    Energy Technology Data Exchange (ETDEWEB)

    Rempp, Hansjoerg; Hoffmann, Ruediger; Buck, Alexandra; Claussen, Claus D.; Schick, Fritz; Clasen, Stephan [Eberhard Karls University of Tuebingen, Department on Diagnostic and Interventional Radiology, Tuebingen (Germany); Roland, Joerg; Kickhefel, Antje [Siemens Healthcare, Erlangen (Germany); Pereira, Philippe L. [Clinic for radiology, Nuclear Medicine and Minimal Invasive Therapies, SLK-Clinics, Heilbronn (Germany)

    2012-05-15

    To evaluate different cut-off temperature levels for a threshold-based prediction of the coagulation zone in magnetic resonance (MR)-guided radiofrequency (RF) ablation of liver tumours. Temperature-sensitive measurements were acquired during RF ablation of 24 patients with primary (6) and secondary liver lesions (18) using a wide-bore 1.5 T MR system and compared with the post-interventional coagulation zone. Temperature measurements using the proton resonance frequency shift method were performed directly subsequent to energy application. The temperature maps were registered on the contrast-enhanced follow-up MR images acquired 4 weeks after treatment. Areas with temperatures above 50, 55 and 60 °C were segmented and compared with the coagulation zones. Sensitivity and positive predictive value were calculated. No major complications occurred and all tumours were completely treated. No tumour recurrence was observed at the follow-up examination after 4 weeks. Two patients with secondary liver lesions showed local tumour recurrence after 4 and 7 months. The 60 °C threshold level achieved the highest positive predictive value (87.7 ± 9.9) and the best prediction of the coagulation zone. For a threshold-based prediction of the coagulation zone, the 60 °C cut-off level achieved the best prediction of the coagulation zone among the tested levels. (orig.)

  20. Threshold-based prediction of the coagulation zone in sequential temperature mapping in MR-guided radiofrequency ablation of liver tumours

    International Nuclear Information System (INIS)

    Rempp, Hansjoerg; Hoffmann, Ruediger; Buck, Alexandra; Claussen, Claus D.; Schick, Fritz; Clasen, Stephan; Roland, Joerg; Kickhefel, Antje; Pereira, Philippe L.

    2012-01-01

    To evaluate different cut-off temperature levels for a threshold-based prediction of the coagulation zone in magnetic resonance (MR)-guided radiofrequency (RF) ablation of liver tumours. Temperature-sensitive measurements were acquired during RF ablation of 24 patients with primary (6) and secondary liver lesions (18) using a wide-bore 1.5 T MR system and compared with the post-interventional coagulation zone. Temperature measurements using the proton resonance frequency shift method were performed directly subsequent to energy application. The temperature maps were registered on the contrast-enhanced follow-up MR images acquired 4 weeks after treatment. Areas with temperatures above 50, 55 and 60 °C were segmented and compared with the coagulation zones. Sensitivity and positive predictive value were calculated. No major complications occurred and all tumours were completely treated. No tumour recurrence was observed at the follow-up examination after 4 weeks. Two patients with secondary liver lesions showed local tumour recurrence after 4 and 7 months. The 60 °C threshold level achieved the highest positive predictive value (87.7 ± 9.9) and the best prediction of the coagulation zone. For a threshold-based prediction of the coagulation zone, the 60 °C cut-off level achieved the best prediction of the coagulation zone among the tested levels. (orig.)
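
    The evaluation logic used above, threshold a temperature map, compare the predicted zone with the actual coagulation mask, and compute sensitivity and positive predictive value, can be sketched as follows. The arrays are synthetic placeholders, not patient data.

      # Sketch: threshold-based zone prediction scored by sensitivity and PPV (toy data).
      import numpy as np

      rng = np.random.default_rng(4)
      temp = rng.uniform(37, 90, size=(64, 64))                 # synthetic temperature map, deg C
      coag = temp > 58 + rng.normal(0, 3, size=temp.shape)      # synthetic "true" coagulation mask

      for cutoff in (50, 55, 60):
          pred = temp > cutoff                                  # predicted coagulation zone
          tp = np.sum(pred & coag)
          sens = tp / np.sum(coag)                              # sensitivity
          ppv  = tp / np.sum(pred)                              # positive predictive value
          print(f"{cutoff} C cut-off: sensitivity {sens:.2f}, PPV {ppv:.2f}")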

  1. Simulation-Based Performance Evaluation of Predictive-Hashing Based Multicast Authentication Protocol

    Directory of Open Access Journals (Sweden)

    Seonho Choi

    2012-12-01

    Full Text Available A predictive-hashing based Denial-of-Service (DoS) resistant multicast authentication protocol was proposed based upon predictive-hashing, one-way key chain, erasure codes, and distillation codes techniques [4, 5]. It was claimed that this new scheme should be more resistant to various types of DoS attacks, and its worst-case resource requirements were derived in terms of coarse-level system parameters including CPU times for signature verification and erasure/distillation decoding operations, attack levels, etc. To show the effectiveness of our approach and to analyze exact resource requirements in various attack scenarios with different parameter settings, we designed and implemented an attack simulator which is platform-independent. Various attack scenarios may be created with different attack types and parameters against a receiver equipped with the predictive-hashing based protocol. The design of the simulator is explained, and the simulation results are presented with detailed resource usage statistics. In addition, resistance level to various types of DoS attacks is formulated with a newly defined resistance metric. By comparing these results to those from another approach, PRABS [8], we show that the resistance level of our protocol is greatly enhanced even in the presence of many attack streams.

  2. Spectral BRDF-based determination of proper measurement geometries to characterize color shift of special effect coatings.

    Science.gov (United States)

    Ferrero, Alejandro; Rabal, Ana; Campos, Joaquín; Martínez-Verdú, Francisco; Chorro, Elísabet; Perales, Esther; Pons, Alicia; Hernanz, María Luisa

    2013-02-01

    A reduced set of measurement geometries allows the spectral reflectance of special effect coatings to be predicted for any other geometry. A physical model based on flake-related parameters has been used to determine nonredundant measurement geometries for the complete description of the spectral bidirectional reflectance distribution function (BRDF). The analysis of experimental spectral BRDF was carried out by means of principal component analysis. From this analysis, a set of nine measurement geometries was proposed to characterize special effect coatings. It was shown that, for two different special effect coatings, these geometries provide a good prediction of their complete color shift.

  3. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    Science.gov (United States)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.

  4. Drug-target interaction prediction from PSSM based evolutionary information.

    Science.gov (United States)

    Mousavian, Zaynab; Khakabimamaghani, Sahand; Kavousi, Kaveh; Masoudi-Nejad, Ali

    2016-01-01

    The labor-intensive and expensive experimental process of drug-target interaction prediction has motivated many researchers to focus on in silico prediction, which provides helpful information to support the experimental interaction data. Therefore, they have proposed several computational approaches for discovering new drug-target interactions. Several learning-based methods have been increasingly developed, which can be categorized into two main groups: similarity-based and feature-based. In this paper, we first use the bi-gram features extracted from the Position Specific Scoring Matrix (PSSM) of proteins in predicting drug-target interactions. Our results demonstrate the high-confidence prediction ability of the Bigram-PSSM model in terms of several performance indicators, specifically for enzymes and ion channels. Moreover, we investigate the impact of the negative selection strategy on the performance of the prediction, which is not widely taken into account in other relevant studies. This is important, as the number of non-interacting drug-target pairs is usually extremely large in comparison with the number of interacting ones in existing drug-target interaction data. An interesting observation is that different levels of performance reduction were attained for the four datasets when we changed the sampling method from random sampling to balanced sampling. Copyright © 2015 Elsevier Inc. All rights reserved.
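
    The bi-gram PSSM descriptor mentioned above has a compact standard form: for a PSSM with L rows (residues) and 20 columns (amino acids), the feature B[i, j] sums the products of consecutive-row scores. The sketch below follows that common formulation with a random placeholder PSSM; it is not the authors' code.

```python
# Sketch of bi-gram feature extraction from a PSSM (L rows x 20 columns),
# following the common formulation B[i, j] = sum_k P[k, i] * P[k+1, j].
import numpy as np

def pssm_bigram_features(pssm: np.ndarray) -> np.ndarray:
    """Return a flattened 400-dimensional bi-gram descriptor for one protein."""
    bigram = pssm[:-1].T @ pssm[1:]        # 20 x 20 matrix of consecutive-row products
    return bigram.ravel()

rng = np.random.default_rng(1)
pssm = rng.random((250, 20))               # hypothetical 250-residue protein PSSM
features = pssm_bigram_features(pssm)
print(features.shape)                      # (400,)
```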

  5. A heat transport benchmark problem for predicting the impact of measurements on experimental facility design

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2016-01-01

    Highlights: • Predictive Modeling of Coupled Multi-Physics Systems (PM_CMPS) methodology is used. • Impact of measurements for reducing predicted uncertainties is highlighted. • Presented thermal-hydraulics benchmark illustrates generally applicable concepts. - Abstract: This work presents the application of the “Predictive Modeling of Coupled Multi-Physics Systems” (PM_CMPS) methodology conceived by Cacuci (2014) to a “test-section benchmark” problem in order to quantify the impact of measurements for reducing the uncertainties in the conceptual design of a proposed experimental facility aimed at investigating the thermal-hydraulics characteristics expected in the conceptual design of the G4M reactor (GEN4ENERGY, 2012). This “test-section benchmark” simulates the conditions experienced by the hottest rod within the conceptual design of the facility's test section, modeling the steady-state conduction in a rod heated internally by a cosine-like heat source, as typically encountered in nuclear reactors, and cooled by forced convection to a surrounding coolant flowing along the rod. The PM_CMPS methodology constructs a prior distribution using all of the available computational and experimental information, by relying on the maximum entropy principle to maximize the impact of all available information and minimize the impact of ignorance. The PM_CMPS methodology then constructs the posterior distribution using Bayes’ theorem, and subsequently evaluates it via saddle-point methods to obtain explicit formulas for the predicted optimal temperature distributions and predicted optimal values for the thermal-hydraulics model parameters that characterize the test-section benchmark. In addition, the PM_CMPS methodology also yields reduced uncertainties for both the model parameters and responses. As a general rule, it is important to measure a quantity consistently with, and more accurately than, the information extant prior to the measurement. For
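
    The central point, that a measurement more accurate than the prior information substantially reduces the predicted uncertainty, can be shown with a minimal scalar example. This is ordinary precision-weighted (Bayesian/least-squares) combination, not the PM_CMPS formalism, and the temperatures and variances are invented.

```python
# Hypothetical prior (model prediction) and measurement for a hot-rod temperature.
prior_temp, prior_var = 620.0, 15.0 ** 2    # K, K^2
meas_temp, meas_var = 605.0, 5.0 ** 2       # measurement 3x more accurate than the prior

gain = prior_var / (prior_var + meas_var)              # weight given to the measurement
posterior_temp = prior_temp + gain * (meas_temp - prior_temp)
posterior_var = (1.0 - gain) * prior_var               # smaller than either input variance

print(f"posterior: {posterior_temp:.1f} K, std {posterior_var ** 0.5:.2f} K")
# A measurement less accurate than the prior would barely move the estimate,
# which is the qualitative rule stated at the end of the abstract.
```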

  6. Prediction of indoor radon concentration based on residence location and construction

    International Nuclear Information System (INIS)

    Maekelaeinen, I.; Voutilainen, A.; Castren, O.

    1992-01-01

    We have constructed a model for assessing indoor radon concentrations in houses where measurements cannot be performed. It has been used in an epidemiological study and to determine the radon potential of new building sites. The model is based on data from about 10,000 buildings. Integrated radon measurements were made during the cold season in all the houses; their geographic coordinates were also known. The 2-mo measurement results were corrected to annual average concentrations. Construction data were collected from questionnaires completed by residents; geological data were determined from geological maps. Data were classified according to geographical, geological, and construction factors. In order to describe different radon production levels, the country was divided into four zones. We assumed that the factors were multiplicative, and a linear concentration-prediction model was used. The most significant factor in determining radon concentration was the geographical region, followed by soil type, year of construction, and type of foundation. The predicted indoor radon concentrations given by the model varied from 50 to 440 Bq m⁻³. The lower figure represents a house with a basement, built in the 1950s on clay soil, in the region with the lowest radon concentration levels. The higher value represents a house with a concrete slab in contact with the ground, built in the 1980s, on gravel, in the region with the highest average radon concentration
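
    A hedged sketch of a multiplicative factor model of the kind described is given below. The factor values are invented purely so that the two example houses land near the 50 and 440 Bq m⁻³ figures quoted above; they are not the fitted coefficients of the Finnish model.

```python
# Illustrative multiplicative radon model: baseline times one factor per
# classified variable. All numbers are placeholders, not fitted values.
REGION = {"zone1": 1.0, "zone2": 1.4, "zone3": 1.8, "zone4": 2.2}
SOIL = {"clay": 1.0, "sand": 1.4, "gravel": 2.0}
YEAR = {"1950s": 1.0, "1970s": 1.2, "1980s": 1.4}
FOUNDATION = {"basement": 1.0, "slab_on_ground": 1.4}
BASELINE = 50.0   # Bq/m^3 for the reference house, hypothetical

def predict_radon(region, soil, year, foundation):
    """Predicted annual average concentration for an unmeasured house."""
    return BASELINE * REGION[region] * SOIL[soil] * YEAR[year] * FOUNDATION[foundation]

print(predict_radon("zone1", "clay", "1950s", "basement"))          # ~50, low-end house
print(predict_radon("zone4", "gravel", "1980s", "slab_on_ground"))  # ~430, high-end house
```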

  7. Predictive sensor based x-ray calibration using a physical model

    International Nuclear Information System (INIS)

    Fuente, Matias de la; Lutz, Peter; Wirtz, Dieter C.; Radermacher, Klaus

    2007-01-01

    Many computer assisted surgery systems are based on intraoperative x-ray images. To achieve reliable and accurate results, these images have to be calibrated for geometric distortions, which can be divided into constant distortions and distortions caused by magnetic fields. Instead of using an intraoperative calibration phantom that has to be visible within each image, resulting in overlaid markers, the presented approach directly takes advantage of the physical origin of the distortions. Based on a computed physical model of an image intensifier and a magnetic field sensor, online compensation of distortions can be achieved without the need for an intraoperative calibration phantom. The model has to be adapted once to each specific image intensifier through calibration, which is based on an optimization algorithm that systematically alters the physical model parameters until a minimal error is reached. Once calibrated, the model is able to predict the distortions caused by the measured magnetic field vector and build an appropriate dewarping function. The time needed for model calibration is not yet optimized and takes up to 4 h on a 3 GHz CPU. In contrast, the time needed for distortion correction is less than 1 s and therefore absolutely acceptable for intraoperative use. First evaluations showed that by using the model-based dewarping algorithm the distortions of an XRII with a 21 cm FOV could be significantly reduced. The model was able to predict and compensate distortions by approximately 80%, to a remaining error of 0.45 mm (max) and 0.19 mm (rms).
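
    The calibration step, fitting model parameters by minimizing the residual distortion over known reference points, can be sketched generically with a least-squares fit. The linear-in-field distortion model, data and noise level below are placeholders, not the authors' image intensifier model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
true_params = np.array([0.8, -0.3, 0.05])                        # unknown model parameters
field = rng.uniform(-1, 1, size=(200, 3))                        # measured field vectors
observed_shift = field @ true_params + rng.normal(0, 0.02, 200)  # marker displacements (mm)

def residuals(params):
    """Difference between modelled and observed marker displacement."""
    return field @ params - observed_shift

fit = least_squares(residuals, x0=np.zeros(3))       # one-time calibration step
print("estimated parameters:", np.round(fit.x, 3))

def predict_distortion(params, measured_field):
    """Online step: predict the displacement for the current field so it can be undone."""
    return measured_field @ params

print("predicted shift for a new field:", predict_distortion(fit.x, np.array([0.2, -0.1, 0.4])))
```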

  8. Seminal Quality Prediction Using Clustering-Based Decision Forests

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2014-08-01

    Full Text Available Prediction of seminal quality with statistical learning tools is an emerging methodology in decision support systems in biomedical engineering and is very useful in the early diagnosis of patients with seminal disorders and the selection of semen donor candidates. However, as is common in medical diagnosis, seminal quality prediction faces the class imbalance problem. In this paper, we propose a novel supervised ensemble learning approach, namely Clustering-Based Decision Forests, to tackle the unbalanced class learning problem in seminal quality prediction. Experimental results on a real fertility diagnosis dataset show that Clustering-Based Decision Forests outperforms decision trees, Support Vector Machines, random forests, multilayer perceptron neural networks and logistic regression by a noticeable margin. Clustering-Based Decision Forests can also be used to evaluate variable importance; the top five factors that may affect semen concentration identified in this study are age, serious trauma, sitting time, the season in which the semen sample was produced, and high fevers in the last year. The findings could be helpful in explaining seminal concentration problems in infertile males or in pre-screening semen donor candidates.
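
    A plausible reading of the clustering-based ensemble idea for imbalanced data is sketched below: cluster the majority class, draw balanced training sets cluster by cluster, train a decision tree on each and vote. This is an illustration of the general technique, not the authors' exact Clustering-Based Decision Forests algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def fit_clustered_forest(X, y, n_trees=10, n_clusters=5, random_state=0):
    """Train decision trees on cluster-balanced resamples of the majority class."""
    rng = np.random.default_rng(random_state)
    X_min, X_maj = X[y == 1], X[y == 0]                  # 1 = minority (altered) class
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=random_state).fit_predict(X_maj)
    per_cluster = max(1, len(X_min) // n_clusters)
    forest = []
    for t in range(n_trees):
        idx = np.concatenate([rng.choice(np.flatnonzero(clusters == c),
                                         size=per_cluster, replace=True)
                              for c in range(n_clusters)])
        X_bal = np.vstack([X_min, X_maj[idx]])
        y_bal = np.concatenate([np.ones(len(X_min)), np.zeros(len(idx))])
        forest.append(DecisionTreeClassifier(random_state=t).fit(X_bal, y_bal))
    return forest

def predict(forest, X):
    """Majority vote over the trees."""
    return (np.mean([tree.predict(X) for tree in forest], axis=0) >= 0.5).astype(int)

# tiny demo on synthetic, imbalanced placeholder data
X_demo = np.random.default_rng(1).random((300, 9))
y_demo = (np.random.default_rng(2).random(300) < 0.12).astype(int)
print(predict(fit_clustered_forest(X_demo, y_demo), X_demo[:5]))
```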

  9. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering]

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  10. Measurement and prediction of voice support and room gain

    DEFF Research Database (Denmark)

    Pelegrin Garcia, David; Brunskog, Jonas; Lyberg-Åhlander, Viveka

    2012-01-01

    and good acoustical quality lies in the range between 14 and 9 dB, whereas the room gain is in the range between 0.2 and 0.5 dB. The prediction model for voice support describes the measurements in the classrooms with a coefficient of determination of 0.84 and a standard deviation of 1.2 dB....

  11. Predicting ambient aerosol Thermal Optical Reflectance (TOR) measurements from infrared spectra: elemental carbon

    Science.gov (United States)

    Dillner, A. M.; Takahama, S.

    2015-06-01

    bias (0.00 μg m⁻³, concentration value based on the nominal IMPROVE sample volume of 32.8 m³), low error (0.03 μg m⁻³) and reasonable normalized error (21%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter (OM) estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
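
    The core regression step, partial least squares fitted on absorbance spectra against collocated TOR EC values, can be sketched as follows. The spectra and EC values are synthetic placeholders; in practice IMPROVE FT-IR spectra and TOR EC measurements would be used, and the number of components would be chosen by cross-validation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
spectra = rng.random((500, 800))                        # 500 samples x 800 wavenumber points
coeff = rng.normal(size=800) * (rng.random(800) < 0.05)
tor_ec = spectra @ coeff + rng.normal(0, 0.05, 500)     # synthetic "TOR EC" reference values

X_cal, X_test, y_cal, y_test = train_test_split(spectra, tor_ec, random_state=0)
pls = PLSRegression(n_components=15).fit(X_cal, y_cal)  # calibration set
pred = pls.predict(X_test).ravel()                      # test-set predictions

print(f"bias {np.mean(pred - y_test):.3f}, "
      f"mean absolute error {np.mean(np.abs(pred - y_test)):.3f}")
```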

  12. Prediction of myotonic dystrophy clinical severity based on the number of intragenic [CTG]n trinucleotide repeats

    Energy Technology Data Exchange (ETDEWEB)

    Gennarelli, M.; Dallapiccola, B. [Universita Tor Vergata, Rome (Italy)]; Novelli, G. [Universita Cattolica del Sacro Cuore, Rome (Italy)] [and others

    1996-11-11

    We carried out a genotype-phenotype correlation study, based on clinical findings in 465 patients with myotonic dystrophy (DM), in order to assess [CTG] repeat number as a predictive test of disease severity. Our analysis showed that the DM subtypes defined by strict clinical criteria fall into three different classes with a log-normal distribution. This distribution is useful in predicting the probability of specific DM phenotypes based on triplet [CTG] number. This study demonstrates that measurement of triplet expansions in patients' lymphocyte DNA is highly valuable and accurate for prognostic assessment. 45 refs., 1 fig., 2 tabs.
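
    The prognostic use of the three log-normal classes can be illustrated with a small Bayes-rule calculation: given a measured repeat number, compute the probability of each severity class. The class parameters and priors below are invented for the sketch; the paper's fitted distributions would be substituted in practice.

```python
from scipy.stats import lognorm

# Invented class parameters: scale = median repeat number, s = log-scale spread.
classes = {
    "mild":       dict(s=0.4, scale=90.0,   prior=0.3),
    "classical":  dict(s=0.5, scale=400.0,  prior=0.5),
    "congenital": dict(s=0.4, scale=1500.0, prior=0.2),
}

def class_probabilities(ctg_repeats):
    """Posterior probability of each DM severity class given a repeat count."""
    weighted = {name: p["prior"] * lognorm.pdf(ctg_repeats, s=p["s"], scale=p["scale"])
                for name, p in classes.items()}
    total = sum(weighted.values())
    return {name: v / total for name, v in weighted.items()}

print(class_probabilities(350))
```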

  13. Comparison of measured and predicted long term performance of a grid connected photovoltaic system

    International Nuclear Information System (INIS)

    Mondol, Jayanta Deb; Yohanis, Yigzaw G.; Norton, Brian

    2007-01-01

    Predicted performance of a grid connected photovoltaic (PV) system using TRNSYS was compared with measured data. A site specific global-diffuse correlation model was developed and used to calculate the beam and diffuse components of global horizontal insolation. A PV module temperature equation and a correlation relating input and output power of an inverter were developed using measured data and used in TRNSYS to simulate PV array and inverter outputs. Different combinations of the tilted surface radiation model, global-diffuse correlation model and PV module temperature equation were used in the simulations. Statistical error analysis was performed to compare the results for each combination. The simulation accuracy was improved by using the new global-diffuse correlation and module temperature equation in the TRNSYS simulation. For an isotropic sky tilted surface radiation model, the average monthly differences between measured and predicted PV output before and after modification of the TRNSYS component were 10.2% and 3.3%, respectively, and, for an anisotropic sky model, 15.4% and 10.7%, respectively. For inverter output, the corresponding errors were 10.4% and 3.3% and 15.8% and 8.6%, respectively. Measured PV efficiency, overall system efficiency, inverter efficiency and performance ratio of the system were compared with the predicted results. The predicted PV performance parameters agreed more closely with the measured parameters in summer than in winter. The difference between the performances predicted using the isotropic and anisotropic sky tilted surface radiation models is between 1% and 2%.
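
    The isotropic sky tilted-surface model referred to above is commonly written in the Liu-Jordan form, with a beam term, an isotropic sky-diffuse term and a ground-reflected term. The sketch below implements that standard form with hypothetical single-hour inputs; it is not the TRNSYS component itself.

```python
import math

def tilted_irradiance(beam_h, diffuse_h, beta_deg, r_b, albedo=0.2):
    """Irradiance on a tilted plane from horizontal beam/diffuse components.

    beam_h, diffuse_h : horizontal beam and diffuse irradiance (W/m^2)
    beta_deg          : surface tilt angle from horizontal (degrees)
    r_b               : beam geometric factor cos(theta) / cos(theta_z)
    albedo            : ground reflectance
    """
    beta = math.radians(beta_deg)
    global_h = beam_h + diffuse_h
    sky = diffuse_h * (1 + math.cos(beta)) / 2              # isotropic sky diffuse
    ground = global_h * albedo * (1 - math.cos(beta)) / 2   # ground-reflected
    return beam_h * r_b + sky + ground

# hypothetical single-hour values
print(tilted_irradiance(beam_h=420.0, diffuse_h=180.0, beta_deg=45.0, r_b=1.15))
```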

  14. Evaluating the Predictive Power of Multivariate Tensor-based Morphometry in Alzheimer's Disease Progression via Convex Fused Sparse Group Lasso.

    Science.gov (United States)

    Tsao, Sinchai; Gajawelli, Niharika; Zhou, Jiayu; Shi, Jie; Ye, Jieping; Wang, Yalin; Lepore, Natasha

    2014-03-21

    Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications in decisions concerning treatment strategy. To this end we combine a predictive multi-task machine learning method [1] with a novel MR-based multivariate morphometric surface map of the hippocampus [2] to predict future cognitive scores of patients. Previous work by Zhou et al. [1] has shown that a multi-task learning framework that performs prediction of all future time points (or tasks) simultaneously can be used to encode both sparsity as well as temporal smoothness. They showed that this can be used in predicting cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-based baseline MRI features, MMSE score, demographic information and ApoE status. Whilst volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful in predictive modeling of AD. To this end, we applied Shi et al.'s [2] recently developed multivariate tensor-based morphometry (mTBM) parametric surface analysis method to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampus surface, we are able to significantly improve the predictive performance for ADAS cognitive scores 6, 12, 24, 36 and 48 months from baseline.

  15. Evaluating the predictive power of multivariate tensor-based morphometry in Alzheimer's disease progression via convex fused sparse group Lasso

    Science.gov (United States)

    Tsao, Sinchai; Gajawelli, Niharika; Zhou, Jiayu; Shi, Jie; Ye, Jieping; Wang, Yalin; Lepore, Natasha

    2014-03-01

    Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications in decisions concerning treatment strategy. To this end we combine a predictive multi-task machine learning method [1] with a novel MR-based multivariate morphometric surface map of the hippocampus [2] to predict future cognitive scores of patients. Previous work by Zhou et al. [1] has shown that a multi-task learning framework that performs prediction of all future time points (or tasks) simultaneously can be used to encode both sparsity as well as temporal smoothness. They showed that this can be used in predicting cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-based baseline MRI features, MMSE score, demographic information and ApoE status. Whilst volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful in predictive modeling of AD. To this end, we applied Shi et al.'s [2] recently developed multivariate tensor-based morphometry (mTBM) parametric surface analysis method to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampus surface, we are able to significantly improve the predictive performance for ADAS cognitive scores 6, 12, 24, 36 and 48 months from baseline.
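
    The multi-task idea described in the two records above, predicting all future time points jointly so that feature selection is shared across them, can be approximated with off-the-shelf multi-task sparse regression. sklearn's MultiTaskLasso enforces joint sparsity across tasks but not the temporal-smoothness (fused) penalty of the papers' convex fused sparse group Lasso; the features and scores below are random placeholders.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(7)
n_subjects, n_features, n_timepoints = 300, 500, 5     # scores at e.g. 6..48 months
X = rng.normal(size=(n_subjects, n_features))          # placeholder mTBM surface features
W = rng.normal(size=(n_features, n_timepoints)) * (rng.random((n_features, 1)) < 0.02)
Y = X @ W + rng.normal(0, 0.5, size=(n_subjects, n_timepoints))

model = MultiTaskLasso(alpha=0.1, max_iter=5000).fit(X, Y)   # joint row-sparsity across tasks
selected = np.flatnonzero(np.any(model.coef_ != 0, axis=0))  # coef_ has shape (tasks, features)
print(f"{len(selected)} features selected jointly for all {n_timepoints} time points")
```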

  16. Analysis of rain fade duration models for Earth-to-satellite path based on data measured in Malaysia

    International Nuclear Information System (INIS)

    Dao, Hassan; Rafiqul, Islam Md; Al-Khateeb, Khalid A S

    2013-01-01

    Statistical analysis of rain fade duration provides crucial information for system engineers to design and plan a fade mitigation technique (FMT) for a satellite communication system. An investigation was carried out based on data measured over a one-year period in Kuala Lumpur, Malaysia, on the satellite path from MEASAT3. This paper presents a statistical analysis of fade duration measured at a high elevation angle (77.4°) in Ku-band and compares it with three fade duration prediction models. It is found that none of the models could predict the measured fade duration distribution accurately.

  17. Deep-Learning-Based Drug-Target Interaction Prediction.

    Science.gov (United States)

    Wen, Ming; Zhang, Zhimin; Niu, Shaoyu; Sha, Haozhi; Yang, Ruihan; Yun, Yonghuan; Lu, Hongmei

    2017-04-07

    Identifying interactions between known drugs and targets is a major challenge in drug repositioning. In silico prediction of drug-target interaction (DTI) can speed up the expensive and time-consuming experimental work by providing the most potent DTIs. In silico prediction of DTI can also provide insights about potential drug-drug interactions and promote the exploration of drug side effects. Traditionally, the performance of DTI prediction depends heavily on the descriptors used to represent the drugs and the target proteins. In this paper, to accurately predict new DTIs between approved drugs and targets without separating the targets into different classes, we developed a deep-learning-based algorithmic framework named DeepDTIs. It first abstracts representations from raw input descriptors using unsupervised pretraining and then applies known label pairs of interaction to build a classification model. Compared with other methods, it is found that DeepDTIs reaches or outperforms other state-of-the-art methods. DeepDTIs can be further used to predict whether a new drug targets some existing targets or whether a new target interacts with some existing drugs.
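
    A very simplified stand-in for the DTI classification setup is sketched below: concatenate drug and protein descriptors and train a neural network classifier. The unsupervised pretraining stage of DeepDTIs is omitted, a plain multilayer perceptron is used instead, and all descriptors and labels are random placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
drug_fp = rng.integers(0, 2, size=(600, 256)).astype(float)   # e.g. hashed drug fingerprints
prot_desc = rng.random((600, 400))                            # e.g. protein sequence descriptors
X = np.hstack([drug_fp, prot_desc])                           # one drug-target pair per row
y = rng.integers(0, 2, size=600)                              # interaction labels (placeholder)

clf = MLPClassifier(hidden_layer_sizes=(128, 32), max_iter=300, random_state=0)
auc = cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()
print(f"cross-validated AUC on placeholder data: {auc:.2f}")  # ~0.5, since labels are random
```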

  18. Prediction of bubble detachment diameter in flow boiling based on force analysis

    International Nuclear Information System (INIS)

    Chen Deqi; Pan Liangming; Ren Song

    2012-01-01

    Highlights: ► All the forces acting on the growing bubbles are taken into account in the model. ► The bubble contact diameter has a significant effect on bubble detachment. ► Bubble growth force and surface tension are more significant in narrow channels. ► Good agreement between the predicted and the measured results is achieved. - Abstract: Bubble detachment diameter is one of the key parameters in the study of bubble dynamics and boiling heat transfer, and it is hard to measure in a boiling system. In order to predict the bubble detachment diameter, a theoretical model based on force analysis is proposed in this paper. All the forces acting on a bubble are taken into account to establish a model for different flow boiling configurations, including narrow and conventional channels and upward, downward and horizontal flows. A correlation for the bubble contact circle diameter is adopted in this study, and it is found that the bubble contact circle diameter has a significant effect on bubble detachment. A new correlation taking the bubble contact circle diameter into account in the evaluation of the bubble growth force is proposed, and it is found that the bubble growth force and surface tension force are more significant in a narrow channel than in a conventional channel. A visual experiment was carried out to verify the present model, and experimental data from the published literature were also used. Good agreement between the predicted and measured results is achieved.

  19. The effect of genealogy-based haplotypes on genomic prediction

    DEFF Research Database (Denmark)

    Edriss, Vahid; Fernando, Rohan L.; Su, Guosheng

    2013-01-01

    on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using...... local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method, and (2) assuming that a large proportion (pi) of the haplotype covariates had zero effect......, i.e. a Bayesian mixture method. Results About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some...
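
    The first prediction model described above, all haplotype covariate effects drawn from one common normal distribution, is equivalent to ridge regression on the covariates, which is one way to sketch it with standard tools. The simulated genotypes, effect sizes and shrinkage value below are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
n_animals, n_covariates = 1000, 3000
Z = rng.integers(0, 3, size=(n_animals, n_covariates)).astype(float)    # covariate counts
true_effects = rng.normal(0, 0.1, n_covariates) * (rng.random(n_covariates) < 0.02)
g = Z @ true_effects                                   # true genetic values
y = g + rng.normal(0, 1.0, n_animals)                  # phenotypes

train, test = slice(0, 800), slice(800, None)
model = Ridge(alpha=3000.0)         # common shrinkage ~ one normal prior for all effects
model.fit(Z[train], y[train])

gebv = Z[test] @ model.coef_                           # genomic predictions for test animals
print("prediction accuracy:", round(np.corrcoef(gebv, g[test])[0, 1], 3))
```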

  20. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

    Science.gov (United States)

    Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

    2012-08-01

    Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become popular as an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as predictor and Partial Least Square (PLS) as a data selection tool, based on measured values of CO concentrations. The CO concentrations of Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM model has better accuracy. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performance of the models. It has been concluded that the errors decrease after size reduction and that the coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, hence supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
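
    The hybrid structure, PLS to compress the predictor set followed by an SVM regressor on the retained components, can be sketched with scikit-learn. The predictors and CO series below are synthetic; the component count and SVM hyperparameters are arbitrary choices, not the tuned values from the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)
X = rng.random((2000, 30))                      # meteorological + traffic predictors (placeholder)
co = 2.0 + 3 * X[:, 0] + np.sin(6 * X[:, 1]) + rng.normal(0, 0.2, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, co, random_state=0)

pls = PLSRegression(n_components=8).fit(X_tr, y_tr)    # data selection / compression step
svr = SVR(C=10.0, epsilon=0.1).fit(pls.transform(X_tr), y_tr)
hybrid_r2 = r2_score(y_te, svr.predict(pls.transform(X_te)))

svr_only = SVR(C=10.0, epsilon=0.1).fit(X_tr, y_tr)    # plain SVM baseline
plain_r2 = r2_score(y_te, svr_only.predict(X_te))

print(f"hybrid PLS-SVM R^2 = {hybrid_r2:.2f}, plain SVM R^2 = {plain_r2:.2f}")
```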