WorldWideScience

Sample records for computing non-parametric function

  1. The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models

    GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.

    2008-01-01

    In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.

  2. Non-Parametric Estimation of Correlation Functions

    Brincker, Rune; Rytter, Anders; Krenk, Steen

    In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform and finally estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed, sources of bias are point...
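
The FFT route reviewed in this record can be sketched as follows (illustrative Python, not the authors' implementation; the common biased estimator is assumed). Zero-padding the transform avoids circular wrap-around, so the FFT result matches the direct time-domain sum:

```python
import numpy as np

def autocorrelation_fft(x):
    """Biased estimate of the autocorrelation function via the FFT.

    Zero-pads to at least 2n-1 points to avoid circular wrap-around,
    then normalizes by the sample size (the 'biased' estimator).
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # smallest power of two >= 2n - 1 for an efficient non-circular transform
    nfft = 1 << (2 * n - 1).bit_length()
    f = np.fft.rfft(x, nfft)
    return np.fft.irfft(f * np.conj(f), nfft)[:n] / n

def autocorrelation_direct(x):
    """Direct (time-domain) biased autocorrelation estimate, for comparison."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(n)])
```

Both estimators agree to rounding error; the FFT version is O(n log n) rather than O(n²).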

  3. A NON-PARAMETRIC APPROACH TO CONSTRAIN THE TRANSFER FUNCTION IN REVERBERATION MAPPING

    Li, Yan-Rong; Wang, Jian-Min; Bai, Jin-Ming

    2016-01-01

    Broad emission lines of active galactic nuclei stem from a spatially extended region (broad-line region, BLR) that is composed of discrete clouds and photoionized by the central ionizing continuum. The temporal behaviors of these emission lines are blurred echoes of continuum variations (i.e., reverberation mapping, RM) and directly reflect the structures and kinematic information of BLRs through the so-called transfer function (also known as the velocity-delay map). Based on the previous works of Rybicki and Press and Zu et al., we develop an extended, non-parametric approach to determine the transfer function for RM data, in which the transfer function is expressed as a sum of a family of relatively displaced Gaussian response functions. Therefore, arbitrary shapes of transfer functions associated with complicated BLR geometry can be seamlessly included, enabling us to relax the presumption of a specified transfer function frequently adopted in previous studies and to let it be determined by observation data. We formulate our approach in a previously well-established framework that incorporates the statistical modeling of continuum variations as a damped random walk process and takes into account long-term secular variations which are irrelevant to RM signals. The application to RM data shows the fidelity of our approach.
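
The key modeling idea here (the transfer function expressed as a sum of relatively displaced Gaussian response functions, convolved with the continuum light curve) can be sketched as follows. This is a minimal illustration with freely chosen grid, width and weight values, not the authors' code:

```python
import numpy as np

def gaussian_family_transfer(weights, centers, width, tau):
    """Transfer function Psi(tau) as a weighted sum of displaced Gaussians."""
    tau = np.asarray(tau, dtype=float)[:, None]
    psi = np.exp(-0.5 * ((tau - centers) / width) ** 2) / (width * np.sqrt(2 * np.pi))
    return psi @ weights

def reverberate(continuum, transfer, dt):
    """Line light curve as the discrete convolution
    l(t) = sum_tau Psi(tau) c(t - tau) dt."""
    return np.convolve(continuum, transfer)[: len(continuum)] * dt
```

A delta-function continuum pulse then reappears in the line light curve shifted by the Gaussian's center delay, which is the "blurred echo" picture described above.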

  4. A NON-PARAMETRIC APPROACH TO CONSTRAIN THE TRANSFER FUNCTION IN REVERBERATION MAPPING

    Li, Yan-Rong; Wang, Jian-Min [Key Laboratory for Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Road, Beijing 100049 (China); Bai, Jin-Ming, E-mail: liyanrong@mail.ihep.ac.cn [Yunnan Observatories, Chinese Academy of Sciences, Kunming 650011 (China)

    2016-11-10

    Broad emission lines of active galactic nuclei stem from a spatially extended region (broad-line region, BLR) that is composed of discrete clouds and photoionized by the central ionizing continuum. The temporal behaviors of these emission lines are blurred echoes of continuum variations (i.e., reverberation mapping, RM) and directly reflect the structures and kinematic information of BLRs through the so-called transfer function (also known as the velocity-delay map). Based on the previous works of Rybicki and Press and Zu et al., we develop an extended, non-parametric approach to determine the transfer function for RM data, in which the transfer function is expressed as a sum of a family of relatively displaced Gaussian response functions. Therefore, arbitrary shapes of transfer functions associated with complicated BLR geometry can be seamlessly included, enabling us to relax the presumption of a specified transfer function frequently adopted in previous studies and to let it be determined by observation data. We formulate our approach in a previously well-established framework that incorporates the statistical modeling of continuum variations as a damped random walk process and takes into account long-term secular variations which are irrelevant to RM signals. The application to RM data shows the fidelity of our approach.

  5. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Elias Chaibub Neto

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the vectorized implementation on real and simulated data sets, bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward implementation based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same held in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased cost of generating the weight matrices via multinomial sampling.
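
The multinomial-weighting formulation can be sketched for the sample mean (a minimal Python analogue of the R idea described above; the function name and the choice of statistic are illustrative):

```python
import numpy as np

def vectorized_bootstrap_mean(x, n_boot, seed=0):
    """Bootstrap replications of the sample mean via multinomial weighting.

    Instead of materializing resampled data sets, draw multinomial counts
    (how many times each observation would appear in a resample), convert
    them to weights, and obtain every replication at once with a single
    matrix-vector product.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    # counts: (n_boot, n) matrix of multinomial draws; each row sums to n
    counts = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot)
    weights = counts / n
    return weights @ np.asarray(x, dtype=float)
```

Higher sample moments follow the same pattern, with element-wise powers of the data replacing `x` in the final product.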

  6. A non-parametric estimator for the doubly-periodic Poisson intensity function

    R. Helmers (Roelof); I.W. Mangku (Wayan); R. Zitikis

    2007-01-01

    In a series of papers, J. Garrido and Y. Lu have proposed and investigated a doubly-periodic Poisson model, and then applied it to analyze hurricane data. The authors have suggested several parametric models for the underlying intensity function. In the present paper we construct and

  7. Non-parametric Bayesian models of response function in dynamic image sequences

    Tichý, Ondřej; Šmídl, Václav

    2016-01-01

    Vol. 151, No. 1 (2016), pp. 90-100. ISSN 1077-3142. R&D Projects: GA ČR GA13-29225S. Institutional support: RVO:67985556. Keywords: response function; blind source separation; dynamic medical imaging; probabilistic models; Bayesian methods. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 2.498 (2016). http://library.utia.cas.cz/separaty/2016/AS/tichy-0456983.pdf

  8. Non parametric forecasting of functional-valued processes: application to the electricity load

    Cugliari, J.

    2011-01-01

    This thesis addresses the problem of predicting a functional-valued stochastic process. We first explore the model proposed by Antoniadis et al. (2006) in the context of a practical application, the French electricity demand, where the hypothesis of stationarity may fail. The departure from stationarity is twofold: an evolving mean level and the existence of groups that may be seen as classes of stationarity. We explore corrections that enhance the prediction performance by taking these nonstationary features into account. In particular, to handle the existence of groups, we constrain the model to use only the data that belong to the same group as the last available data. If the grouping is known, a simple post-treatment suffices to obtain better prediction performance. If the grouping is unknown, we infer it from the data using cluster analysis. The infinite dimension of the not necessarily stationary trajectories has to be taken into account by the clustering algorithm. We propose two strategies for this, both based on wavelet transforms. The first uses a feature-extraction approach through the Discrete Wavelet Transform, combined with a feature-selection algorithm to choose the significant features for a classical clustering algorithm. The second clusters the functions directly by means of a dissimilarity measure of their Continuous Wavelet spectra. The third part of the thesis explores an alternative prediction model that incorporates exogenous information. For this purpose we use the framework given by Autoregressive Hilbertian processes. We propose a new class of processes that we call Conditional Autoregressive Hilbertian (CARH) and develop the equivalent of projection and resolvent classes of estimators to predict such processes. (author)
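
The first wavelet strategy, feature extraction before clustering, can be sketched with a plain Haar transform (an illustrative stand-in for the Discrete Wavelet Transform used in the thesis; per-scale energies serve as the features, and the signal length is assumed to be a power of two):

```python
import numpy as np

def haar_energies(signal, levels):
    """Per-scale detail energies of an orthonormal Haar wavelet transform.

    These energies can serve as low-dimensional clustering features for
    (possibly non-stationary) daily load curves.
    """
    x = np.asarray(signal, dtype=float)
    feats = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # running averages
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # running differences
        feats.append(np.sum(detail ** 2))
        x = approx
    return np.array(feats)
```

A purely high-frequency curve concentrates all of its energy at the finest scale, so curves with different frequency content map to well-separated feature vectors.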

  9. Non-parametric model selection for subject-specific topological organization of resting-state functional connectivity.

    Ferrarini, Luca; Veer, Ilya M; van Lew, Baldur; Oei, Nicole Y L; van Buchem, Mark A; Reiber, Johan H C; Rombouts, Serge A R B; Milles, J

    2011-06-01

    In recent years, graph theory has been successfully applied to study functional and anatomical connectivity networks in the human brain. Most of these networks have shown small-world topological characteristics: high efficiency in long-distance communication between nodes, combined with highly interconnected local clusters of nodes. Moreover, functional studies performed at high resolutions have presented convincing evidence that resting-state functional connectivity networks exhibit (exponentially truncated) scale-free behavior. Such evidence, however, was mostly presented qualitatively, in terms of linear regressions of the degree distributions on log-log plots. Even when quantitative measures were given, these were usually limited to the r² correlation coefficient. However, the r² statistic is not an optimal estimator of explained variance when dealing with (truncated) power-law models. Recent developments in statistics have introduced new non-parametric approaches, based on the Kolmogorov-Smirnov test, for the problem of model selection. In this work, we have built on this idea to statistically tackle the issue of model selection for the degree distribution of functional connectivity at rest. The analysis, performed at voxel level and in a subject-specific fashion, confirmed the superiority of a truncated power-law model, showing high consistency across subjects. Moreover, the most highly connected voxels were found to be consistently part of the default mode network. Our results provide statistically sound support for the evidence previously presented in the literature for a truncated power-law model of resting-state functional connectivity.
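
The Kolmogorov-Smirnov ingredient can be sketched as follows (a continuous-approximation sketch in the Clauset-Shalizi-Newman style that this line of work builds on; `xmin` is assumed given, and model comparison would repeat this for each candidate distribution):

```python
import numpy as np

def powerlaw_ks(data, xmin):
    """Fit a power-law tail by maximum likelihood; return (alpha, KS distance).

    alpha is the density exponent of p(x) ~ x^(-alpha) for x >= xmin,
    and the KS distance measures the discrepancy between the empirical
    and fitted model CDFs over the tail.
    """
    x = np.sort(np.asarray(data, dtype=float))
    tail = x[x >= xmin]
    n = len(tail)
    alpha = 1.0 + n / np.sum(np.log(tail / xmin))       # Hill-type MLE
    model_cdf = 1.0 - (tail / xmin) ** (1.0 - alpha)    # F(x) for x >= xmin
    empirical_cdf = np.arange(1, n + 1) / n
    return alpha, np.max(np.abs(empirical_cdf - model_cdf))
```

For data actually drawn from the fitted model the KS distance shrinks like 1/sqrt(n), so a large distance flags a misspecified model such as an untruncated power law fitted to truncated data.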

  10. Non-parametric smoothing of experimental data

    Kuketayev, A.T.; Pen'kov, F.M.

    2007-01-01

    Rapid processing of experimental data samples in nuclear physics often requires differentiation in order to find extrema. Therefore, even at the preliminary stage of data analysis, a range of noise-reduction methods is used to smooth experimental data. There are many non-parametric smoothing techniques: interval averages, moving averages, exponential smoothing, etc. Nevertheless, it is more common to use a priori information about the behavior of the experimental curve to construct smoothing schemes based on least-squares techniques. The advantage of the latter methodology is that the area under the curve can be preserved, which is equivalent to conservation of the total counting rate. Its disadvantage is its dependence on a priori information: very often, for example, sums of peaks unresolved by the detector are replaced with a single peak during data processing, introducing uncontrolled errors into the determination of the physical quantities. The problem can then only be handled by experienced personnel whose skills far exceed the challenge. We propose a set of non-parametric techniques which allows the use of any additional information on the nature of the experimental dependence. The method is based on the construction of a functional that includes both the experimental data and the a priori information. The minimum of this functional is attained on a non-parametric smoothed curve. Euler (Lagrange) differential equations are constructed for these curves, and their solutions are obtained analytically or numerically. The proposed approach allows automated processing of nuclear physics data, eliminating the need for highly skilled laboratory personnel. It also makes it possible to obtain smoothing curves within a given confidence interval, e.g. according to the χ² distribution. The approach is applicable when constructing smooth solutions of ill-posed problems, in particular when solving
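
The functional-minimization idea (a data-fidelity term plus an a priori smoothness term, solved through its Euler-Lagrange conditions) can be illustrated with the discrete Whittaker-type smoother below. This is an assumed simple instance of the scheme, not the authors' method:

```python
import numpy as np

def whittaker_smooth(y, lam):
    """Minimize sum_i (y_i - f_i)^2 + lam * sum_i (second difference of f)^2.

    Setting the gradient to zero gives the linear system
    (I + lam * D'D) f = y, where D is the second-difference operator --
    the discrete analogue of an Euler-Lagrange equation.
    """
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)   # (n-2) x n second-difference operator
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, np.asarray(y, dtype=float))
```

Because second differences of a straight line vanish, linear trends pass through unchanged while high-frequency noise is damped, the larger `lam` the more strongly.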

  11. The Stellar Initial Mass Function in Early-type Galaxies from Absorption Line Spectroscopy. IV. A Super-Salpeter IMF in the Center of NGC 1407 from Non-parametric Models

    Conroy, Charlie [Department of Astronomy, Harvard University, Cambridge, MA, 02138 (United States); Van Dokkum, Pieter G. [Department of Astronomy, Yale University, New Haven, CT, 06511 (United States); Villaume, Alexa [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States)

    2017-03-10

    It is now well established that the stellar initial mass function (IMF) can be determined from the absorption line spectra of old stellar systems, and this has been used to measure the IMF and its variation across the early-type galaxy population. Previous work focused on measuring the slope of the IMF over one or more stellar mass intervals, implicitly assuming that this is a good description of the IMF and that the IMF has a universal low-mass cutoff. In this work we consider more flexible IMFs, including two-component power laws with a variable low-mass cutoff and a general non-parametric model. We demonstrate with mock spectra that the detailed shape of the IMF can be accurately recovered as long as the data quality is high (S/N ≳ 300 Å⁻¹) and covers a wide wavelength range (0.4–1.0 μm). We apply these flexible IMF models to a high-S/N spectrum of the center of the massive elliptical galaxy NGC 1407. Fitting the spectrum with non-parametric IMFs, we find that the IMF in the center shows a continuous rise extending toward the hydrogen-burning limit, with a behavior that is well approximated by a power law with an index of −2.7. These results provide strong evidence for the existence of extreme (super-Salpeter) IMFs in the cores of massive galaxies.

  12. Using non-parametric methods in econometric production analysis

    Czekaj, Tomasz Gerard; Henningsen, Arne

    2012-01-01

    Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify a functional form of the production function, of which the Cobb-Douglas and the Translog are the most common. A misspecified functional form results not only in biased parameter estimates, but also in biased measures which are derived from the parameters, such as elasticities. Therefore, we propose to use non-parametric econometric methods. First, these can be applied to verify the functional form used in parametric production analysis. Second, they can be directly used, e.g. by investigating the relationship between the elasticity of scale and the farm size. We use a balanced panel data set of 371 specialised crop farms for the years 2004-2007. A non-parametric specification test shows that neither the Cobb-Douglas function nor the Translog function are consistent with the "true...

  13. Continuous/discrete non parametric Bayesian belief nets with UNICORN and UNINET

    Cooke, R.M.; Kurowicka, D.; Hanea, A.M.; Morales Napoles, O.; Ababei, D.A.; Ale, B.J.M.; Roelen, A.

    2007-01-01

    Hanea et al. (2006) presented a method for quantifying and computing continuous/discrete non-parametric Bayesian Belief Nets (BBNs). Influences are represented as conditional rank correlations, and the joint normal copula enables rapid sampling and conditionalization. Further mathematical background
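
The normal-copula ingredient can be illustrated in miniature. The snippet below (an assumed bivariate case, illustrative only) draws dependent uniforms whose Spearman rank correlation matches a prescribed value, which is how influences can be encoded without committing to parametric margins:

```python
import numpy as np
from math import erf

def sample_normal_copula(rank_corr, n, seed=0):
    """Draw n pairs of uniforms joined by the normal (Gaussian) copula
    with a prescribed Spearman rank correlation."""
    # Pearson correlation of the underlying normals that induces the
    # requested Spearman rank correlation (exact under the normal copula).
    r = 2.0 * np.sin(np.pi * rank_corr / 6.0)
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=n)
    std_normal_cdf = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / np.sqrt(2.0))))
    return std_normal_cdf(z[:, 0]), std_normal_cdf(z[:, 1])
```

Arbitrary margins are then obtained by pushing the uniforms through inverse CDFs, leaving the rank correlation intact.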

  14. Speaker Linking and Applications using Non-Parametric Hashing Methods

    2016-09-08

    Speaker Linking and Applications using Non-Parametric Hashing Methods†. Douglas Sturim and William M. Campbell, MIT Lincoln Laboratory, Lexington, MA. ... with many approaches [1, 2]. For this paper, we focus on using i-vectors [2], but the methods apply to any embedding. For the task of speaker QBE and

  15. Parametric and Non-Parametric System Modelling

    Nielsen, Henrik Aalborg

    1999-01-01

    The focus is on combinations of parametric and non-parametric methods of regression. This combination can be in terms of additive models where e.g. one or more non-parametric terms are added to a linear regression model. It can also be in terms of conditional parametric models where the coefficients ... considered. It is shown that adaptive estimation in conditional parametric models can be performed by combining the well-known methods of local polynomial regression and recursive least squares with exponential forgetting. The approach used for estimation in conditional parametric models also highlights how ... networks is included. In this paper, neural networks are used for predicting the electricity production of a wind farm. The results are compared with results obtained using an adaptively estimated ARX-model. Finally, two papers on stochastic differential equations are included. In the first paper, among ...
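
The recursive-least-squares-with-forgetting building block mentioned above can be sketched as follows (a generic RLS tracker with a forgetting factor; the setup and parameter values are illustrative, not the thesis implementation):

```python
import numpy as np

def rls_forgetting(X, y, lam=0.98, delta=100.0):
    """Recursive least squares with exponential forgetting factor lam.

    Tracks slowly varying coefficients theta_t in y_t = x_t' theta_t + e_t;
    lam < 1 down-weights old observations geometrically, which is what
    makes the estimator adaptive.
    """
    n, p = X.shape
    theta = np.zeros(p)
    P = delta * np.eye(p)               # large initial covariance (weak prior)
    history = np.empty((n, p))
    for t in range(n):
        x = X[t]
        err = y[t] - x @ theta
        k = P @ x / (lam + x @ P @ x)   # gain vector
        theta = theta + k * err
        P = (P - np.outer(k, x) @ P) / lam
        history[t] = theta
    return history
```

In a conditional parametric model the same recursion would be run locally, with observations additionally weighted by a kernel in the conditioning variable.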

  16. STATCAT, Statistical Analysis of Parametric and Non-Parametric Data

    David, Hugh

    1990-01-01

    1 - Description of program or function: A suite of 26 programs designed to facilitate the appropriate statistical analysis and data handling of parametric and non-parametric data, using classical and modern univariate and multivariate methods. 2 - Method of solution: Data is read entry by entry, using a choice of input formats, and the resultant data bank is checked for out-of-range, rare, extreme or missing data. The completed STATCAT data bank can be treated by a variety of descriptive and inferential statistical methods, and modified using other standard programs as required.

  17. Non-parametric transformation for data correlation and integration: From theory to practice

    Datta-Gupta, A.; Xue, Guoping; Lee, Sang Heon [Texas A&M Univ., College Station, TX (United States)

    1997-08-01

    The purpose of this paper is two-fold. First, we introduce the use of non-parametric transformations for correlating petrophysical data during reservoir characterization. Such transformations are completely data-driven and do not require an a priori functional relationship between response and predictor variables, which is the case with traditional multiple regression. The transformations are very general, computationally efficient and can easily handle mixed data types, for example continuous variables such as porosity and permeability and categorical variables such as rock type and lithofacies. The power of the non-parametric transformation techniques for data correlation is illustrated through synthetic and field examples. Second, we utilize these transformations to propose a two-stage approach for data integration during heterogeneity characterization. The principal advantages of our approach over traditional cokriging or cosimulation methods are: (1) it does not require a linear relationship between primary and secondary data, (2) it exploits the secondary information to its fullest potential by maximizing the correlation between the primary and secondary data, (3) it can easily be applied to cases where several types of secondary or soft data are involved, and (4) it significantly reduces variance-function calculations and thus greatly facilitates non-Gaussian cosimulation. We demonstrate the data integration procedure using synthetic and field examples. The field example involves estimation of pore-footage distribution using well data and multiple seismic attributes.

  18. On Parametric (and Non-Parametric) Variation

    Neil Smith

    2009-11-01

    This article raises the issue of the correct characterization of ‘Parametric Variation’ in syntax and phonology. After specifying their theoretical commitments, the authors outline the relevant parts of the Principles-and-Parameters framework and draw a three-way distinction among Universal Principles, Parameters, and Accidents. The core of the contribution is an attempt to provide identity criteria for parametric, as opposed to non-parametric, variation. Parametric choices must be antecedently known, and it is suggested that they must also satisfy seven individually necessary and jointly sufficient criteria: they must be cognitively represented, systematic, dependent on the input, deterministic, discrete, mutually exclusive, and irreversible.

  19. A non-parametric framework for estimating threshold limit values

    Ulm, Kurt

    2005-11-01

    Background: To estimate a threshold limit value for a compound known to have harmful health effects, an 'elbow' threshold model is usually applied. We are interested in flexible non-parametric alternatives. Methods: We describe how a step-function model fitted by isotonic regression can be used to estimate threshold limit values. This method returns a set of candidate locations, and we discuss two algorithms to select the threshold among them: reduced isotonic regression and an algorithm considering the closed family of hypotheses. We assess the performance of these two alternative approaches under different scenarios in a simulation study. We illustrate the framework by analysing data from a study conducted by the German Research Foundation aiming to set a threshold limit value for exposure to total dust at the workplace, as a causal agent for developing chronic bronchitis. Results: We demonstrate the use and the properties of the proposed methodology along with the results from an application. The method detects the threshold with satisfactory success; however, its performance can be compromised by low power to reject the constant-risk assumption when the true dose-response relationship is weak. Conclusion: Threshold estimation in the isotonic framework is conceptually simple and sufficiently powerful. Given that there is no gold-standard method for threshold value estimation, the proposed model provides a useful non-parametric alternative to the standard approaches and can corroborate or challenge their findings.
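
The isotonic-regression building block can be sketched as follows (a plain pool-adjacent-violators fit plus a naive first-jump threshold candidate; the paper's actual selection algorithms, reduced isotonic regression and the closed family of hypotheses, are not reproduced here):

```python
import numpy as np

def pava(y, w=None):
    """Pool Adjacent Violators Algorithm: weighted isotonic (non-decreasing) fit."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    out = []  # blocks of [weighted sum, weight, member indices]
    for i in range(len(y)):
        out.append([y[i] * w[i], w[i], [i]])
        # merge backwards while adjacent block means violate monotonicity
        while len(out) > 1 and out[-2][0] / out[-2][1] > out[-1][0] / out[-1][1]:
            s, ww, idx = out.pop()
            out[-1][0] += s
            out[-1][1] += ww
            out[-1][2] += idx
    fit = np.empty(len(y))
    for s, ww, idx in out:
        fit[idx] = s / ww   # each block gets its pooled mean
    return fit

def first_jump(dose, fit, tol=1e-9):
    """Candidate threshold: smallest dose where the isotonic step function rises."""
    jumps = np.where(np.diff(fit) > tol)[0]
    return dose[jumps[0] + 1] if len(jumps) else None
```

Each jump of the fitted step function is a candidate threshold location; the selection step then decides which jump, if any, is statistically supported.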

  20. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close-packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that the ranking of influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
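
The gradient-based, correlation-aware idea can be illustrated with a first-order (delta-method) sketch; this is illustrative only, the paper's actual indices are more elaborate. Comparing variance shares with and without the off-diagonal covariance terms shows how correlations can reorder parameter rankings:

```python
import numpy as np

def local_sensitivity(f, theta0, cov, eps=1e-6):
    """First-order output variance and per-parameter variance shares.

    grad' Cov grad approximates Var[f(theta)] for small correlated
    perturbations around theta0 (delta method, central differences).
    """
    p = len(theta0)
    grad = np.array([
        (f(theta0 + eps * np.eye(p)[i]) - f(theta0 - eps * np.eye(p)[i])) / (2 * eps)
        for i in range(p)
    ])
    total_var = grad @ cov @ grad
    share_with_corr = grad * (cov @ grad) / total_var       # keeps off-diagonals
    diag_terms = grad ** 2 * np.diag(cov)
    share_independent = diag_terms / diag_terms.sum()       # ignores correlations
    return total_var, share_with_corr, share_independent
```

For positively correlated parameters the correlated shares can differ markedly from the independence-based ones, which is the ranking effect the abstract describes.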

  2. Discrete non-parametric kernel estimation for global sensitivity analysis

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented, with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture show that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  3. Non-Parametric Analysis of Rating Transition and Default Data

    Fledelius, Peter; Lando, David; Perch Nielsen, Jens

    2004-01-01

    We demonstrate the use of non-parametric intensity estimation - including construction of pointwise confidence sets - for analyzing rating transition data. We find that transition intensities away from the class studied here for illustration strongly depend on the direction of the previous move b...

  4. Non-parametric analysis of production efficiency of poultry egg ...

    Non-parametric analysis of production efficiency of poultry egg farmers in Delta ... analysis of factors affecting the output of poultry farmers showed that stock ... should be put in place for farmers to learn the best farm practices carried out on the ...

  5. Bayesian non parametric modelling of Higgs pair production

    Scarpa Bruno

    2017-01-01

    Statistical classification models are commonly used to separate a signal from a background. In this talk we face the problem of isolating the signal of Higgs pair production using the decay channel in which each boson decays into a pair of b-quarks. Typically in this context non-parametric methods are used, such as Random Forests or different types of boosting tools. We remain in the same non-parametric framework, but we propose to face the problem following a Bayesian approach. A Dirichlet process is used as a prior for the random effects in a logit model, which is fitted by leveraging the Polya-Gamma data augmentation. Refinements of the model include the insertion of P-splines to relate explanatory variables with the response, and the use of Bayesian additive regression trees (BART) to describe the atoms in the Dirichlet process.

  6. Non-parametric estimation of the individual's utility map

    Noguchi, Takao; Sanborn, Adam N.; Stewart, Neil

    2013-01-01

    Models of risky choice have attracted much attention in behavioural economics. Previous research has repeatedly demonstrated that individuals' choices are not well explained by expected utility theory, and a number of alternative models have been examined using carefully selected sets of choice alternatives. The model performance, however, can depend on which choice alternatives are being tested. Here we develop a non-parametric method for estimating the utility map over the wide range of choi...

  7. Digital spectral analysis parametric, non-parametric and advanced methods

    Castanié, Francis

    2013-01-01

    Digital Spectral Analysis provides a single source that offers complete coverage of the spectral analysis domain. This self-contained work includes details on advanced topics that are usually presented in scattered sources throughout the literature. The theoretical principles necessary for the understanding of spectral analysis are discussed in the first four chapters: fundamentals, digital signal processing, estimation in spectral analysis, and time-series models. An entire chapter is devoted to the non-parametric methods most widely used in industry. High resolution methods a
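
The most basic non-parametric estimator covered by such texts, the periodogram, can be sketched as follows (standard definition, illustrative code):

```python
import numpy as np

def periodogram(x, fs=1.0):
    """Classical periodogram: |FFT|^2 normalized to a power spectral density."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = (np.abs(X) ** 2) / (fs * n)
    return freqs, psd
```

Parametric alternatives (e.g. AR-model-based spectra) trade this estimator's simplicity for better resolution on short records, which is the parametric/non-parametric contrast the book develops.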

  8. kruX: matrix-based non-parametric eQTL discovery.

    Qi, Jianlong; Asl, Hassan Foroughi; Björkegren, Johan; Michoel, Tom

    2014-01-14

    The Kruskal-Wallis test is a popular non-parametric statistical test for identifying expression quantitative trait loci (eQTLs) from genome-wide data due to its robustness against variations in the underlying genetic model and expression trait distribution, but testing billions of marker-trait combinations one-by-one can become computationally prohibitive. We developed kruX, an algorithm implemented in Matlab, Python and R that uses matrix multiplications to simultaneously calculate the Kruskal-Wallis test statistic for several millions of marker-trait combinations at once. kruX is more than ten thousand times faster than computing associations one-by-one on a typical human dataset. We used kruX and a dataset of more than 500k SNPs and 20k expression traits measured in 102 human blood samples to compare eQTLs detected by the Kruskal-Wallis test to eQTLs detected by the parametric ANOVA and linear model methods. We found that the Kruskal-Wallis test is more robust against data outliers and heterogeneous genotype group sizes and detects a higher proportion of non-linear associations, but is more conservative for calling additive linear associations. kruX enables the use of robust non-parametric methods for massive eQTL mapping without the need for a high-performance computing infrastructure and is freely available from http://krux.googlecode.com.
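
The matrix trick behind kruX can be sketched as follows (an illustrative re-derivation without tie correction, not the released kruX code): ranking each trait once and forming group rank sums through a single matrix product yields all Kruskal-Wallis statistics against one marker at once.

```python
import numpy as np

def kruskal_wallis_matrix(expr, genotype):
    """Kruskal-Wallis H statistics for many traits against one marker.

    expr: (traits, samples) expression matrix; genotype: (samples,) labels.
    Assumes no ties among expression values (no tie correction applied).
    """
    expr = np.asarray(expr, dtype=float)
    n = expr.shape[1]
    ranks = expr.argsort(axis=1).argsort(axis=1) + 1.0   # ranks 1..n per trait
    groups = np.unique(genotype)
    G = (genotype[:, None] == groups).astype(float)      # (samples, n_groups)
    counts = G.sum(axis=0)                               # samples per genotype
    group_rank_sums = ranks @ G                          # (traits, n_groups)
    return (12.0 / (n * (n + 1))
            * ((group_rank_sums ** 2) / counts).sum(axis=1)
            - 3.0 * (n + 1))
```

Stacking markers as additional indicator matrices turns the trait-by-marker scan into a small number of large matrix products, which is where the reported speedup comes from.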

  9. Non-parametric system identification from non-linear stochastic response

    Rüdinger, Finn; Krenk, Steen

    2001-01-01

    An estimation method is proposed for identification of non-linear stiffness and damping of single-degree-of-freedom systems under stationary white noise excitation. Non-parametric estimates of the stiffness and damping along with an estimate of the white noise intensity are obtained by suitable...... of the energy at mean-level crossings, which yields the damping relative to white noise intensity. Finally, an estimate of the noise intensity is extracted by estimating the absolute damping from the autocovariance functions of a set of modified phase plane variables at different energy levels. The method...

  10. Tremor Detection Using Parametric and Non-Parametric Spectral Estimation Methods: A Comparison with Clinical Assessment

    Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.

    2016-01-01

    In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration
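
As a toy illustration of the non-parametric spectral route, the sketch below flags a segment as tremor when power in a tremor band dominates a Welch PSD estimate of a synthetic accelerometer signal. The sampling rate, band limits, and ratio threshold are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 3.0, 1 / fs)
rng = np.random.default_rng(8)
tremor = np.sin(2 * np.pi * 5.0 * t) + 0.3 * rng.normal(size=t.size)  # 5 Hz tremor
rest = 0.3 * rng.normal(size=t.size)                                  # no tremor

def tremor_present(seg, fs, band=(3.0, 12.0), ratio=3.0):
    """Flag a segment if tremor-band power dominates the Welch PSD estimate."""
    f, p = welch(seg, fs=fs, nperseg=128)
    in_band = (f >= band[0]) & (f <= band[1])
    return p[in_band].sum() / p[~in_band].sum() > ratio

flag_tremor = tremor_present(tremor, fs)     # True for the synthetic tremor segment
flag_rest = tremor_present(rest, fs)         # False for the noise-only segment
```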

  11. Spurious Seasonality Detection: A Non-Parametric Test Proposal

    Aurelio F. Bariviera

    2018-01-01

This paper offers a general and comprehensive definition of the day-of-the-week effect. Using symbolic dynamics, we develop a unique test based on ordinal patterns in order to detect it. This test uncovers the fact that the so-called “day-of-the-week” effect is partly an artifact of the hidden correlation structure of the data. We present simulations based on artificial time series as well. While time series generated with long memory are prone to exhibit daily seasonality, pure white noise signals exhibit no pattern preference. Since ours is a non-parametric test, it requires no assumptions about the distribution of returns, so that it could be a practical alternative to conventional econometric tests. We also made an exhaustive application of the here-proposed technique to 83 stock indexes around the world. Finally, the paper highlights the relevance of symbolic analysis in economic time series studies.
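
The ordinal-pattern machinery can be illustrated in a few lines: map each week of returns to the permutation that sorts its five daily values, then compare the observed pattern frequencies with the uniform distribution expected under white noise. This is a simplified sketch of the general idea, not the authors' test statistic.

```python
import numpy as np
from itertools import permutations
from scipy.stats import chisquare

rng = np.random.default_rng(7)
returns = rng.normal(size=(1200, 5))   # 1200 weeks x 5 trading days, white noise

# assign each week the ordinal pattern (permutation) of its daily returns
pats = {p: i for i, p in enumerate(permutations(range(5)))}
codes = [pats[tuple(np.argsort(week))] for week in returns]
counts = np.bincount(codes, minlength=len(pats))

# under white noise all 120 patterns are equiprobable
stat, p = chisquare(counts)
```

A pronounced preference for particular patterns (a small p-value on real data) would indicate day-of-the-week structure; pure white noise, as here, shows no pattern preference.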

  12. Debt and growth: A non-parametric approach

    Brida, Juan Gabriel; Gómez, David Matesanz; Seijas, Maria Nela

    2017-11-01

    In this study, we explore the dynamic relationship between public debt and economic growth by using a non-parametric approach based on data symbolization and clustering methods. The study uses annual data of general government consolidated gross debt-to-GDP ratio and gross domestic product for sixteen countries between 1977 and 2015. Using symbolic sequences, we introduce a notion of distance between the dynamical paths of different countries. Then, a Minimal Spanning Tree and a Hierarchical Tree are constructed from time series to help detecting the existence of groups of countries sharing similar economic performance. The main finding of the study appears for the period 2008-2016 when several countries surpassed the 90% debt-to-GDP threshold. During this period, three groups (clubs) of countries are obtained: high, mid and low indebted countries, suggesting that the employed debt-to-GDP threshold drives economic dynamics for the selected countries.
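
A toy version of the pipeline (symbolic distance between dynamical paths, then hierarchical clustering into "clubs") might look as follows. The six hypothetical countries and their symbol sequences are invented for illustration; the paper's actual symbolization and Minimal Spanning Tree construction are richer.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# symbolic debt-dynamics sequences for 6 hypothetical countries:
# 0 = debt ratio falling, 1 = stable, 2 = rising
seqs = np.array([
    [2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 1, 2],   # "high" club
    [1, 1, 1, 1, 1, 1], [1, 1, 1, 0, 1, 1],   # "mid" club
    [0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0],   # "low" club
])

# distance = fraction of periods in which two countries' symbols differ
D = (seqs[:, None, :] != seqs[None, :, :]).mean(axis=2)

# hierarchical tree from the condensed distance matrix, cut into 3 clubs
Z = linkage(squareform(D), method="average")
clubs = fcluster(Z, t=3, criterion="maxclust")
```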

  13. Multi-Directional Non-Parametric Analysis of Agricultural Efficiency

    Balezentis, Tomas

    This thesis seeks to develop methodologies for assessment of agricultural efficiency and employ them to Lithuanian family farms. In particular, we focus on three particular objectives throughout the research: (i) to perform a fully non-parametric analysis of efficiency effects, (ii) to extend...... to the Multi-Directional Efficiency Analysis approach when the proposed models were employed to analyse empirical data of Lithuanian family farm performance, we saw substantial differences in efficiencies associated with different inputs. In particular, assets appeared to be the least efficiently used input...... relative to labour, intermediate consumption and land (in some cases land was not treated as a discretionary input). These findings call for further research on relationships among financial structure, investment decisions, and efficiency in Lithuanian family farms. Application of different techniques...

  14. Robust non-parametric one-sample tests for the analysis of recurrent events.

    Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia

    2010-12-30

One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population.

  15. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
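
A minimal residual-bootstrap sketch for prediction intervals around a kernel regression fit, under an assumed i.i.d. noise model (the paper's procedure and its asymptotic analysis are more general):

```python
import numpy as np

def nw_smooth(x_train, y_train, x_eval, h=0.3):
    """Nadaraya-Watson kernel regression estimate (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 6, 200))
y = np.sin(x) + rng.normal(0, 0.2, size=x.size)

fit = nw_smooth(x, y, x)
resid = y - fit

# residual bootstrap: refit on perturbed responses, add back resampled noise
B, x0 = 500, np.array([3.0])
boot = np.empty(B)
for b in range(B):
    e = rng.choice(resid, size=x.size, replace=True)
    fit_b = nw_smooth(x, fit + e, x0)
    boot[b] = fit_b[0] + rng.choice(resid)

lo, hi = np.percentile(boot, [2.5, 97.5])

# anomaly detection: flag an observation at x0 falling outside the interval
is_anomalous = lambda y_obs: not (lo <= y_obs <= hi)
```

No distributional assumption on the noise enters beyond the resampling itself, which is the appeal of the bootstrap route.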

  16. A local non-parametric model for trade sign inference

    Blazejewski, Adam; Coggins, Richard

    2005-03-01

    We investigate a regularity in market order submission strategies for 12 stocks with large market capitalization on the Australian Stock Exchange. The regularity is evidenced by a predictable relationship between the trade sign (trade initiator), size of the trade, and the contents of the limit order book before the trade. We demonstrate this predictability by developing an empirical inference model to classify trades into buyer-initiated and seller-initiated. The model employs a local non-parametric method, k-nearest neighbor, which in the past was used successfully for chaotic time series prediction. The k-nearest neighbor with three predictor variables achieves an average out-of-sample classification accuracy of 71.40%, compared to 63.32% for the linear logistic regression with seven predictor variables. The result suggests that a non-linear approach may produce a more parsimonious trade sign inference model with a higher out-of-sample classification accuracy. Furthermore, for most of our stocks the observed regularity in market order submissions seems to have a memory of at least 30 trading days.
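
The classification set-up can be sketched with scikit-learn on synthetic stand-ins for the predictor variables (the real study derives its three predictors from the trade size and the limit order book; the feature construction below is invented purely for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
n = 2000
# synthetic stand-ins for three book/trade features, e.g. trade size,
# bid-ask volume imbalance, distance of trade price from the midpoint
X = rng.normal(size=(n, 3))
# buyer-initiated trades (label 1) tend to follow positive imbalance
# and prices above the midpoint in this toy generating process
y = (0.8 * X[:, 1] + 1.2 * X[:, 2] + rng.normal(0, 0.5, n) > 0).astype(int)

# local non-parametric classifier: k-nearest neighbours
clf = KNeighborsClassifier(n_neighbors=15).fit(X[:1500], y[:1500])
acc = clf.score(X[1500:], y[1500:])   # out-of-sample classification accuracy
```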

  17. Non-parametric Bayesian networks: Improving theory and reviewing applications

    Hanea, Anca; Morales Napoles, Oswaldo; Ababei, Dan

    2015-01-01

Applications in various domains often lead to high dimensional dependence modelling. A Bayesian network (BN) is a probabilistic graphical model that provides an elegant way of expressing the joint distribution of a large number of interrelated variables. BNs have been successfully used to represent uncertain knowledge in a variety of fields. The majority of applications use discrete BNs, i.e. BNs whose nodes represent discrete variables. Integrating continuous variables in BNs is an area fraught with difficulty. Several methods that handle discrete-continuous BNs have been proposed in the literature. This paper concentrates only on one method called non-parametric BNs (NPBNs). NPBNs were introduced in 2004 and they have been or are currently being used in at least twelve professional applications. This paper provides a short introduction to NPBNs, a couple of theoretical advances, and an overview of applications. The aim of the paper is twofold: one is to present the latest improvements of the theory underlying NPBNs, and the other is to complement the existing overviews of BNs applications with the NPBNs applications. The latter opens the opportunity to discuss some difficulties that applications pose to the theoretical framework and in this way offers some NPBN modelling guidance to practitioners. - Highlights: • The paper gives an overview of the current NPBNs methodology. • We extend the NPBN methodology by relaxing the conditions of one of its fundamental theorems. • We propose improvements of the data mining algorithm for the NPBNs. • We review the professional applications of the NPBNs.

  18. A non-parametric consistency test of the ΛCDM model with Planck CMB data

    Aghamousa, Amir; Shafieloo, Arman [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of); Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)

    2017-09-01

    Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.

  19. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    González Adriana

    2016-01-01

Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  20. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    Van der Walt, CM

    2013-12-01

We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
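
Two of the conventional estimators typically compared in such studies can be reproduced in a few lines: Silverman's rule of thumb and a leave-one-out likelihood search over a bandwidth grid. This is a generic sketch, not the paper's benchmark code.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
x = rng.normal(size=500)

# Silverman's rule of thumb (scipy stores it as a factor on the sample std)
kde = gaussian_kde(x, bw_method="silverman")
bw_silverman = kde.factor * x.std(ddof=1)

# leave-one-out log-likelihood over a bandwidth grid
def loo_loglik(x, bw):
    n = x.size
    d = x[:, None] - x[None, :]
    k = np.exp(-0.5 * (d / bw) ** 2) / (bw * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)              # leave each point out of its own density
    return np.log(k.sum(axis=1) / (n - 1)).sum()

grid = np.linspace(0.05, 1.0, 40)
bw_cv = grid[np.argmax([loo_loglik(x, b) for b in grid])]
```

For well-behaved unimodal data the two bandwidths land in the same neighbourhood; the cited comparison concerns how such estimators behave on harder, higher-dimensional pattern-recognition data.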

  1. Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations

    Arimescu, V.E.; Heins, L.

    2001-01-01

    Advances in modeling fuel rod behavior and accumulations of adequate experimental data have made possible the introduction of quantitative methods to estimate the uncertainty of predictions made with best-estimate fuel rod codes. The uncertainty range of the input variables is characterized by a truncated distribution which is typically a normal, lognormal, or uniform distribution. While the distribution for fabrication parameters is defined to cover the design or fabrication tolerances, the distribution of modeling parameters is inferred from the experimental database consisting of separate effects tests and global tests. The final step of the methodology uses a Monte Carlo type of random sampling of all relevant input variables and performs best-estimate code calculations to propagate these uncertainties in order to evaluate the uncertainty range of outputs of interest for design analysis, such as internal rod pressure and fuel centerline temperature. The statistical method underlying this Monte Carlo sampling is non-parametric order statistics, which is perfectly suited to evaluate quantiles of populations with unknown distribution. The application of this method is straightforward in the case of one single fuel rod, when a 95/95 statement is applicable: 'with a probability of 95% and confidence level of 95% the values of output of interest are below a certain value'. Therefore, the 0.95-quantile is estimated for the distribution of all possible values of one fuel rod with a statistical confidence of 95%. On the other hand, a more elaborate procedure is required if all the fuel rods in the core are being analyzed. In this case, the aim is to evaluate the following global statement: with 95% confidence level, the expected number of fuel rods which are not exceeding a certain value is all the fuel rods in the core except only a few fuel rods. In both cases, the thresholds determined by the analysis should be below the safety acceptable design limit. An indirect
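
For the single-rod 95/95 statement, first-order non-parametric order statistics (Wilks' formula) fixes the required number of Monte Carlo runs: the largest of n sampled outputs bounds the 0.95-quantile with 95% confidence once 1 − 0.95ⁿ ≥ 0.95. A minimal computation of that sample size:

```python
def wilks_sample_size(quantile=0.95, confidence=0.95):
    """Smallest n such that the maximum of n random code runs bounds the
    given quantile with the given confidence (first-order Wilks formula)."""
    n = 1
    while 1.0 - quantile**n < confidence:
        n += 1
    return n

n_9595 = wilks_sample_size()   # classic 95/95 result: 59 runs
```

This is why 59 code runs suffice for a one-sided 95/95 tolerance limit regardless of the (unknown) output distribution; the core-wide statement described above requires a more elaborate treatment.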

  2. Using non-parametric methods in econometric production analysis

    Czekaj, Tomasz Gerard; Henningsen, Arne

    Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify the functional form of the production function. Most often, the Cobb...... results—including measures that are of interest of applied economists, such as elasticities. Therefore, we propose to use nonparametric econometric methods. First, they can be applied to verify the functional form used in parametric estimations of production functions. Second, they can be directly used...

  3. Rank-based permutation approaches for non-parametric factorial designs.

    Umlauft, Maria; Konietschke, Frank; Pauly, Markus

    2017-11-01

Inference methods for null hypotheses formulated in terms of distribution functions in general non-parametric factorial designs are studied. The methods can be applied to continuous, ordinal or even ordered categorical data in a unified way, and are based only on ranks. In this set-up Wald-type statistics and ANOVA-type statistics are the current state of the art. The first method is asymptotically exact but a rather liberal statistical testing procedure for small to moderate sample size, while the latter is only an approximation which does not possess the correct asymptotic α level under the null. To bridge these gaps, a novel permutation approach is proposed which can be seen as a flexible generalization of the Kruskal-Wallis test to all kinds of factorial designs with independent observations. It is proven that the permutation principle is asymptotically correct while keeping its finite exactness property when data are exchangeable. The results of extensive simulation studies foster these theoretical findings. A real data set exemplifies its applicability.
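
For the one-way layout, the permutation principle reduces to re-randomizing group labels and recomputing the Kruskal-Wallis statistic; the following sketch illustrates that special case only (the paper's contribution covers general factorial designs):

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(4)
# three groups with a genuine location shift
groups = [rng.normal(0.0, 1, 20), rng.normal(0.5, 1, 20), rng.normal(1.0, 1, 20)]

h_obs, _ = kruskal(*groups)
pooled = np.concatenate(groups)
sizes = np.cumsum([len(g) for g in groups])[:-1]

# permutation null: shuffle observations across groups, recompute H
B, count = 2000, 0
for _ in range(B):
    perm = rng.permutation(pooled)
    h_perm, _ = kruskal(*np.split(perm, sizes))
    count += h_perm >= h_obs
p_perm = (count + 1) / (B + 1)
```

Unlike the asymptotic chi-square reference distribution, the permutation p-value is finitely exact under exchangeability.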

  4. Non-parametric early seizure detection in an animal model of temporal lobe epilepsy

    Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.

    2008-03-01

The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) was evaluated as a function of the sampling rate of EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes in order to determine the applicability of the measures in real-time closed-loop seizure intervention. The criteria chosen for evaluating the performance were high statistical robustness (as determined through the sensitivity and the specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited better overall performance in terms of its ability to detect a seizure with high optimality index value and high statistics in terms of sensitivity and specificity.

  5. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

  6. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities, and participation in society). The main purpose of this paper is the evaluation of the psychometric properties for each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology.

  7. (AJST) RELATIVE EFFICIENCY OF NON-PARAMETRIC ERROR ...

    NORBERT OPIYO AKECH

    on 100 bootstrap samples, a sample of size n being taken with replacement in each initial sample of size n. .... the overlap (or optimal error rate) of the populations. However, the expression (2.3) for the computation of ..... Analysis and Machine Intelligence, 9, 628-633. Lachenbruch P. A. (1967). An almost unbiased method ...

  8. Non-parametric Bayesian inference for inhomogeneous Markov point processes

    Berthelsen, Kasper Klitgaard; Møller, Jesper; Johansen, Per Michael

    is a shot noise process, and the interaction function for a pair of points depends only on the distance between the two points and is a piecewise linear function modelled by a marked Poisson process. Simulation of the resulting posterior using a Metropolis-Hastings algorithm in the "conventional" way...

  9. Software For Computing Selected Functions

    Grant, David C.

    1992-01-01

    Technical memorandum presents collection of software packages in Ada implementing mathematical functions used in science and engineering. Provides programmer with function support in Pascal and FORTRAN, plus support for extended-precision arithmetic and complex arithmetic. Valuable for testing new computers, writing computer code, or developing new computer integrated circuits.

  10. A non-parametric peak calling algorithm for DamID-Seq.

    Renhua Li

Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of double sex (DSX)-an important transcription factor in sex determination, we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality check and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) reads resampling; 2) reads scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width.

  11. A non-parametric peak calling algorithm for DamID-Seq.

    Li, Renhua; Hempel, Leonie U; Jiang, Tingbo

    2015-01-01

Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of double sex (DSX)-an important transcription factor in sex determination, we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality check and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) reads resampling; 2) reads scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width.
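
Steps 1)-4) can be caricatured on synthetic binned read counts: bootstrap the Dam-only control to build a null distribution of fold changes, then call windows that exceed a high null quantile. All counts, window sizes, and thresholds below are invented for illustration and are not the NPPC parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
bins = 10000
# per-bin read counts: Dam-fusion sample vs Dam-only control (synthetic)
fusion = rng.poisson(5, bins).astype(float)
control = rng.poisson(5, bins).astype(float)
fusion[100:110] += 100                           # one planted binding site

win = 10
def window_sums(x, w=win):
    return x.reshape(-1, w).sum(axis=1)

wf, wc = window_sums(fusion), window_sums(control)

# 2) scale libraries and compute signal-to-noise log2 fold changes
scale = wf.sum() / wc.sum()
fc = np.log2((wf + 1) / (scale * (wc + 1)))

# 1) bootstrap-resample the control against itself for a null distribution
null = []
for _ in range(200):
    boot = window_sums(rng.choice(control, bins, replace=True))
    null.append(np.log2((boot + 1) / (wc + 1)))

# 3)+4) filter and call windows above a high quantile of the null
thresh = np.quantile(np.concatenate(null), 0.999)
peaks = np.where(fc > thresh)[0]                 # planted site is window 10
```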

  12. Assessing pupil and school performance by non-parametric and parametric techniques

    de Witte, K.; Thanassoulis, E.; Simpson, G.; Battisti, G.; Charlesworth-May, A.

    2010-01-01

    This paper discusses the use of the non-parametric free disposal hull (FDH) and the parametric multi-level model (MLM) as alternative methods for measuring pupil and school attainment where hierarchical structured data are available. Using robust FDH estimates, we show how to decompose the overall

  13. Low default credit scoring using two-class non-parametric kernel density estimation

    Rademeyer, E

    2016-12-01

This paper investigates the performance of two-class classification credit scoring data sets with low default ratios. The standard two-class parametric Gaussian and non-parametric Parzen classifiers are extended, using Bayes’ rule, to include either...

  14. Non-Parametric Bayesian Updating within the Assessment of Reliability for Offshore Wind Turbine Support Structures

    Ramirez, José Rangel; Sørensen, John Dalsgaard

    2011-01-01

    This work illustrates the updating and incorporation of information in the assessment of fatigue reliability for offshore wind turbine. The new information, coming from external and condition monitoring can be used to direct updating of the stochastic variables through a non-parametric Bayesian u...

  15. Non-parametric production analysis of pesticides use in the Netherlands

    Oude Lansink, A.G.J.M.; Silva, E.

    2004-01-01

Many previous empirical studies on the productivity of pesticides suggest that pesticides are under-utilized in agriculture despite the generally held belief that these inputs are substantially over-utilized. This paper uses data envelopment analysis (DEA) to calculate non-parametric measures of the
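
A minimal input-oriented CCR DEA model, solved as a linear program with scipy, shows the kind of non-parametric efficiency measure involved. The three farms and two inputs below are invented for illustration; the paper's model for pesticide productivity is more specific.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0 under constant returns to scale.

    X : (m, n) input matrix, Y : (s, n) output matrix for n units.
    Solves: min theta  s.t.  X @ lam <= theta * x_j0,  Y @ lam >= y_j0,  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    A_ub = np.block([
        [-X[:, [j0]], X],                        # X @ lam - theta * x_j0 <= 0
        [np.zeros((s, 1)), -Y],                  # -Y @ lam <= -y_j0
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

X = np.array([[2.0, 4.0, 8.0],      # e.g. pesticide input for 3 farms
              [3.0, 2.0, 4.0]])     # labour input
Y = np.array([[1.0, 1.0, 1.0]])     # single normalised output
effs = [dea_ccr_input(X, Y, j) for j in range(3)]
```

Farm 3 uses exactly twice farm 2's inputs for the same output, so its efficiency score is 0.5, while farms 1 and 2 lie on the frontier.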

  16. Non-parametric tests of productive efficiency with errors-in-variables

    Kuosmanen, T.K.; Post, T.; Scholtes, S.

    2007-01-01

We develop a non-parametric test of productive efficiency that accounts for errors-in-variables, following the approach of Varian [1985. Nonparametric analysis of optimizing behavior with measurement error. Journal of Econometrics 30(1/2), 445-458]. The test is based on the general Pareto-Koopmans

  17. Non-parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods

    Høg, Esben

    In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example the Ornstein-Uhlenbeck process (the simplest mean-reverting process) has been used to model non-speculative price processes. We discuss non--parametric estimation of these processes...

  18. Non-Parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods

    Høg, Esben

    2003-01-01

    In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example the Ornstein-Uhlenbeck process (the simplest mean--reverting process) has been used to model non-speculative price processes. We discuss non--parametric estimation of these processes...

  19. A non-parametric Bayesian approach to decompounding from high frequency data

    Gugushvili, Shota; van der Meulen, F.H.; Spreij, Peter

    2016-01-01

    Given a sample from a discretely observed compound Poisson process, we consider non-parametric estimation of the density f0 of its jump sizes, as well as of its intensity λ0. We take a Bayesian approach to the problem and specify the prior on f0 as the Dirichlet location mixture of normal densities.

  20. A non-parametric method for correction of global radiation observations

    Bacher, Peder; Madsen, Henrik; Perers, Bengt

    2013-01-01

    ... in the observations are corrected. These are errors such as: tilt in the leveling of the sensor, shadowing from surrounding objects, clipping and saturation in the signal processing, and errors from dirt and wear. The method is based on a statistical non-parametric clear-sky model which is applied to both...

  1. A comparative study of non-parametric models for identification of ...

    However, the frequency response method using random binary signals was good for unpredicted white noise characteristics and considered the best method for non-parametric system identification. The autoregressive external input (ARX) model was very useful for system identification, but on application, few input ...

  2. A non-parametric hierarchical model to discover behavior dynamics from tracks

    Kooij, J.F.P.; Englebienne, G.; Gavrila, D.M.

    2012-01-01

    We present a novel non-parametric Bayesian model to jointly discover the dynamics of low-level actions and high-level behaviors of tracked people in open environments. Our model represents behaviors as Markov chains of actions which capture high-level temporal dynamics. Actions may be shared by

  3. Experimental Sentinel-2 LAI estimation using parametric, non-parametric and physical retrieval methods - A comparison

    Verrelst, Jochem; Rivera, Juan Pablo; Veroustraete, Frank; Muñoz-Marí, Jordi; Clevers, J.G.P.W.; Camps-Valls, Gustau; Moreno, José

    2015-01-01

    Given the forthcoming availability of Sentinel-2 (S2) images, this paper provides a systematic comparison of retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data. An experimental field dataset (SPARC),

  4. Computing the functional proteome

    O'Brien, Edward J.; Palsson, Bernhard

    2015-01-01

    Constraint-based models enable the computation of feasible, optimal, and realized biological phenotypes from reaction network reconstructions and constraints on their operation. To date, stoichiometric reconstructions have largely focused on metabolism, resulting in genome-scale metabolic models (M...

  5. Parametric and Non-Parametric Vibration-Based Structural Identification Under Earthquake Excitation

    Pentaris, Fragkiskos P.; Fouskitakis, George N.

    2014-05-01

    The problem of modal identification in civil structures is of crucial importance, and thus has been receiving increasing attention in recent years. Vibration-based methods are quite promising as they are capable of identifying the structure's global characteristics, they are relatively easy to implement and they tend to be time-effective and less expensive than most alternatives [1]. This paper focuses on the off-line structural/modal identification of civil (concrete) structures subjected to low-level earthquake excitations, under which they remain within their linear operating regime. Earthquakes and their details are recorded and provided by the seismological network of Crete [2], which 'monitors' the broad region of the south Hellenic arc, an active seismic region which functions as a natural laboratory for earthquake engineering of this kind. A sufficient number of seismic events are analyzed in order to reveal the modal characteristics of the structures under study, which consist of the two concrete buildings of the School of Applied Sciences, Technological Education Institute of Crete, located in Chania, Crete, Hellas. Both buildings are equipped with high-sensitivity and high-accuracy seismographs - providing acceleration measurements - established at the basement (structure's foundation), presently considered as the ground's acceleration (excitation), and at all levels (ground floor, 1st floor, 2nd floor and terrace). Further details regarding the instrumentation setup and data acquisition may be found in [3]. The present study invokes stochastic, both non-parametric (frequency-based) and parametric, methods for structural/modal identification (natural frequencies and/or damping ratios). Non-parametric methods include Welch-based spectrum and Frequency Response Function (FRF) estimation, while parametric methods include AutoRegressive (AR), AutoRegressive with eXogenous input (ARX) and AutoRegressive Moving-Average with eXogenous input (ARMAX) models [4, 5
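
    The Welch-based FRF estimation this record mentions can be sketched as follows. The sampling rate, the single 2 Hz mode, and the white-noise excitation are hypothetical stand-ins, not data from the Crete buildings.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 100.0                       # sampling rate in Hz (hypothetical)
t = np.arange(0, 60, 1 / fs)

# Hypothetical single-degree-of-freedom structure: 2 Hz mode, 2% damping
wn, zeta = 2 * np.pi * 2.0, 0.02
sys = signal.TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])

x = rng.standard_normal(t.size)            # ground excitation
_, y, _ = signal.lsim(sys, x, t)           # simulated structural response

# Non-parametric (Welch-based) FRF estimate: H1 = S_xy / S_xx
f, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)
_, Pxx = signal.welch(x, fs=fs, nperseg=1024)
H1 = Pxy / Pxx

f_peak = f[np.argmax(np.abs(H1))]          # identified natural frequency
print(f"identified natural frequency: {f_peak:.2f} Hz")
```

The peak of |H1| recovers the modal frequency without fitting any parametric model, which is exactly the contrast the record draws with the AR/ARX/ARMAX approaches.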

  6. Non-Parametric Kinetic (NPK) Analysis of Thermal Oxidation of Carbon Aerogels

    Azadeh Seifi

    2017-05-01

    In recent years, much attention has been paid to aerogel materials (especially carbon aerogels) due to their potential uses in energy-related applications, such as thermal energy storage and thermal protection systems. These open-cell carbon-based porous materials (carbon aerogels) can strongly react with oxygen at relatively low temperatures (~400°C). Therefore, it is necessary to evaluate the thermal performance of carbon aerogels in view of their energy-related applications at high temperatures and under thermal oxidation conditions. The objective of this paper is to study theoretically and experimentally the oxidation reaction kinetics of carbon aerogel using the non-parametric kinetic (NPK) approach as a powerful method. For this purpose, a non-isothermal thermogravimetric analysis, at three different heating rates, was performed on three samples, each with its specific pore structure, density and specific surface area. The most significant feature of this method, in comparison with the model-free isoconversional methods, is its ability to separate the functionality of the reaction rate with the degree of conversion and temperature by the direct use of thermogravimetric data. Using this method, it was observed that the Nomen-Sempere model could provide the best fit to the data, while the temperature dependence of the rate constant was best explained by a Vogel-Fulcher relationship, where the reference temperature was the onset temperature of oxidation. Moreover, it was found from the results of this work that the assumption of the Arrhenius relation for the temperature dependence of the rate constant led to over-estimation of the apparent activation energy (up to 160 kJ/mol), considerably different from the values (up to 3.5 kJ/mol) predicted by the Vogel-Fulcher relationship in isoconversional methods.

  7. A Non-Parametric Item Response Theory Evaluation of the CAGE Instrument Among Older Adults.

    Abdin, Edimansyah; Sagayadevan, Vathsala; Vaingankar, Janhavi Ajit; Picco, Louisa; Chong, Siow Ann; Subramaniam, Mythily

    2018-02-23

    The validity of the CAGE using item response theory (IRT) has not yet been examined in the older adult population. This study aims to investigate the psychometric properties of the CAGE using both non-parametric and parametric IRT models, assess whether there is any differential item functioning (DIF) by age, gender and ethnicity, and examine the measurement precision at the cut-off scores. We used data from the Well-being of the Singapore Elderly study to conduct Mokken scaling analysis (MSA), dichotomous Rasch and 2-parameter logistic IRT models. The measurement precision at the cut-off scores was evaluated using classification accuracy (CA) and classification consistency (CC). The MSA showed the overall scalability H index was 0.459, indicating a medium-performing instrument. All items were found to be homogenous, measuring the same construct and able to discriminate well between respondents with high levels of the construct and the ones with lower levels. The item discrimination ranged from 1.07 to 6.73 while the item difficulty ranged from 0.33 to 2.80. Significant DIF was found for two items across ethnic groups. More than 90% (CC and CA ranged from 92.5% to 94.3%) of the respondents were consistently and accurately classified by the CAGE cut-off scores of 2 and 3. The current study provides new evidence on the validity of the CAGE from the IRT perspective. This study provides valuable information on each item in the assessment of the overall severity of alcohol problems and on the precision of the cut-off scores in the older adult population.

  8. Computational Methods and Function Theory

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.

  9. Functional programming for computer vision

    Breuel, Thomas M.

    1992-04-01

    Functional programming is a style of programming that avoids the use of side effects (like assignment) and uses functions as first-class data objects. Compared with imperative programs, functional programs can be parallelized better, and provide better encapsulation, type checking, and abstractions. This is important for building and integrating large vision software systems. In the past, efficiency has been an obstacle to the application of functional programming techniques in computationally intensive areas such as computer vision. We discuss and evaluate several 'functional' data structures for efficiently representing data and objects common in computer vision. In particular, we will address: automatic storage allocation and reclamation issues; abstraction of control structures; efficient sequential update of large data structures; representing images as functions; and object-oriented programming. Our experience suggests that functional techniques are feasible for high-performance vision systems, and that a functional approach greatly simplifies the implementation and integration of vision systems. Examples in C++ and SML are given.

  10. A non-parametric meta-analysis approach for combining independent microarray datasets: application using two microarray datasets pertaining to chronic allograft nephropathy

    Archer Kellie J

    2008-02-01

    Background: With the popularity of DNA microarray technology, multiple groups of researchers have studied the gene expression of similar biological conditions. Different methods have been developed to integrate the results from various microarray studies, though most of them rely on distributional assumptions, such as the t-statistic based, mixed-effects model, or Bayesian model methods. However, often the sample size for each individual microarray experiment is small. Therefore, in this paper we present a non-parametric meta-analysis approach for combining data from independent microarray studies, and illustrate its application on two independent Affymetrix GeneChip studies that compared the gene expression of biopsies from kidney transplant recipients with chronic allograft nephropathy (CAN) to those with normal functioning allografts. Results: The simulation study comparing the non-parametric meta-analysis approach to a commonly used t-statistic based approach shows that the non-parametric approach has better sensitivity and specificity. For the application on the two CAN studies, we identified 309 distinct genes that were differentially expressed in CAN. By applying Fisher's exact test to identify enriched KEGG pathways among the genes called differentially expressed, we found 6 KEGG pathways to be over-represented among the identified genes. We used the expression measurements of the identified genes as predictors to predict the class labels for 6 additional biopsy samples, and the predicted results all conformed to their pathologist-diagnosed class labels. Conclusion: We present a new approach for combining data from multiple independent microarray studies. This approach is non-parametric and does not rely on any distributional assumptions. The rationale behind the approach is logically intuitive and can be easily understood by researchers not having advanced training in statistics. Some of the identified genes and pathways have been
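
    One distribution-free scheme in the spirit of this record (a sketch, not necessarily the authors' exact statistic) is to compute a per-gene rank-sum statistic in each study and combine the statistics across studies. All data below are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_genes = 1000

def simulate_study(effect, n=8):
    """Synthetic study: n control vs n case arrays; first 50 genes shifted."""
    ctrl = rng.normal(0, 1, size=(n_genes, n))
    case = rng.normal(0, 1, size=(n_genes, n))
    case[:50] += effect
    return ctrl, case

def rank_stats(ctrl, case):
    # Per-gene Wilcoxon/Mann-Whitney rank-sum statistic (distribution-free)
    return np.array([stats.mannwhitneyu(case[g], ctrl[g]).statistic
                     for g in range(n_genes)])

u1 = rank_stats(*simulate_study(2.0))
u2 = rank_stats(*simulate_study(2.0))
combined = u1 + u2                # simple rank-based combination across studies
top = np.argsort(combined)[-50:]  # genes most consistently up-regulated
recovered = np.mean(top < 50)     # fraction of true positives found
print(f"true positives among top 50: {recovered:.0%}")
```

Because only ranks enter the combined score, no normality or equal-variance assumption is needed in either study, which is the key point the abstract makes.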

  11. Hadron Energy Reconstruction for ATLAS Barrel Combined Calorimeter Using Non-Parametrical Method

    Kulchitskii, Yu A

    2000-01-01

    Hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter in the framework of the non-parametrical method is discussed. The non-parametrical method utilizes only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to fast energy reconstruction in a first-level trigger. The reconstructed mean values of the hadron energies are within ±1% of the true values and the fractional energy resolution is [(58±3)%·√GeV/√E + (2.5±0.3)%] ⊕ (1.7±0.2) GeV/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74±0.04. Results of a study of the longitudinal hadronic shower development are also presented.
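
    The quoted resolution formula can be evaluated directly; the sketch below uses only the central values of the fitted parameters (the ± uncertainties are dropped), with ⊕ meaning addition in quadrature.

```python
import math

def frac_resolution(E, a=0.58, b=0.025, c=1.7):
    """Fractional resolution sigma/E = (a/sqrt(E) + b) (+) c/E, E in GeV,
    where (+) is quadrature sum; central values from the record above."""
    return math.hypot(a / math.sqrt(E) + b, c / E)

for E in (20.0, 100.0, 300.0):
    print(f"E = {E:5.0f} GeV   sigma/E = {frac_resolution(E):.3f}")
```

At 100 GeV this gives sigma/E of roughly 8.5%, dominated by the stochastic term; the constant and noise terms only matter at the high- and low-energy ends respectively.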

  12. A multitemporal and non-parametric approach for assessing the impacts of drought on vegetation greenness

    Carrao, Hugo; Sepulcre, Guadalupe; Horion, Stéphanie Marie Anne F

    2013-01-01

    This study evaluates the relationship between the frequency and duration of meteorological droughts and the subsequent temporal changes on the quantity of actively photosynthesizing biomass (greenness) estimated from satellite imagery on rainfed croplands in Latin America. An innovative non-parametric... and non-supervised approach, based on the Fisher-Jenks optimal classification algorithm, is used to identify multi-scale meteorological droughts on the basis of empirical cumulative distributions of 1, 3, 6, and 12-monthly precipitation totals. As input data for the classifier, we use the gridded GPCC... for the period between 1998 and 2010. The time-series analysis of vegetation greenness is performed during the growing season with a non-parametric method, namely the seasonal Relative Greenness (RG) of spatially accumulated fAPAR. The Global Land Cover map of 2000 and the GlobCover maps of 2005/2006 and 2009...

  13. Efficiency Analysis of German Electricity Distribution Utilities : Non-Parametric and Parametric Tests

    von Hirschhausen, Christian R.; Cullmann, Astrid

    2005-01-01

    This paper applies parametric and non-parametric tests to assess the efficiency of electricity distribution companies in Germany. We address traditional issues in electricity sector benchmarking, such as the role of scale effects and optimal utility size, as well as new evidence specific to the situation in Germany. We use labour, capital, and peak load capacity as inputs, and units sold and the number of customers as output. The data cover 307 (out of 553) ...

  14. A simple non-parametric goodness-of-fit test for elliptical copulas

    Jaser Miriam

    2017-12-01

    In this paper, we propose a simple non-parametric goodness-of-fit test for elliptical copulas of any dimension. It is based on the equality of Kendall's tau and Blomqvist's beta for all bivariate margins. The nominal level and power of the proposed test are investigated in a Monte Carlo study. An empirical application illustrates our goodness-of-fit test at work.
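
    The equality the test exploits comes from the fact that, for an elliptical copula with correlation parameter ρ, both Kendall's tau and Blomqvist's beta equal (2/π)·arcsin(ρ) for each bivariate margin. A hypothetical check on a Gaussian (elliptical) copula sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rho, n = 0.6, 5000
# Bivariate Gaussian sample -> its copula is the Gaussian (elliptical) copula
xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
x, y = xy[:, 0], xy[:, 1]

tau, _ = stats.kendalltau(x, y)

# Blomqvist's beta: concordance with respect to the componentwise medians
sx = np.sign(x - np.median(x))
sy = np.sign(y - np.median(y))
beta = np.mean(sx * sy)

theory = 2 / np.pi * np.arcsin(rho)  # common value under ellipticality
print(f"tau = {tau:.3f}, beta = {beta:.3f}, theory = {theory:.3f}")
```

A large empirical gap between tau and beta in some bivariate margin is then evidence against the elliptical-copula hypothesis, which is the idea behind the proposed test.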

  15. Bootstrapping the economy -- a non-parametric method of generating consistent future scenarios

    Müller, Ulrich A; Bürgi, Roland; Dacorogna, Michel M

    2004-01-01

    The fortune and the risk of a business venture depend on the future course of the economy. There is a strong demand for economic forecasts and scenarios that can be applied to planning and modeling. While there is an ongoing debate on modeling economic scenarios, the bootstrapping (or resampling) approach presented here has several advantages. As a non-parametric method, it directly relies on past market behavior rather than debatable assumptions on models and parameters. Simultaneous dep...
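
    A minimal sketch of the resampling idea, using synthetic 'past' returns in place of real market data; the horizon and scenario count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical history of 240 monthly returns (stand-in for observed data)
past_returns = rng.normal(0.005, 0.04, size=240)

def bootstrap_scenarios(returns, horizon=12, n_scenarios=1000, rng=rng):
    """Resample past returns with replacement to build future paths.
    Non-parametric: no distributional model is fitted to the data."""
    idx = rng.integers(0, len(returns), size=(n_scenarios, horizon))
    return np.cumprod(1.0 + returns[idx], axis=1)  # cumulative growth paths

paths = bootstrap_scenarios(past_returns)
print(paths.shape)  # one row per scenario, one column per future month
```

Each simulated path is built purely from observed behavior, which is the advantage the abstract claims over model-and-parameter-based scenario generators.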

  16. Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data

    Tan, Qihua; Thomassen, Mads; Burton, Mark

    2017-01-01

    ... the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis could be a useful and efficient tool for analyzing microarray... time-course data and for exploring the complex relationships in the omics data for studying their association with disease and health...

  17. Monitoring coastal marshes biomass with CASI: a comparison of parametric and non-parametric models

    Mo, Y.; Kearney, M.

    2017-12-01

    Coastal marshes are important carbon sinks that face multiple natural and anthropogenic stresses. Optical remote sensing is a powerful tool for closely monitoring the biomass of coastal marshes. However, application of hyperspectral sensors to assessing the biomass of diverse coastal marsh ecosystems is limited. This study samples spectral and biophysical data from coastal freshwater, intermediate, brackish, and saline marshes in Louisiana, and develops parametric and non-parametric models for using the Compact Airborne Spectrographic Imager (CASI) to retrieve the marshes' biomass. Linear models and random forest models are developed from simulated CASI data (48 bands, 380-1050 nm, bandwidth 14 nm). Linear models are also developed using narrowband vegetation indices computed from all possible band combinations from the blue, red, and near infrared wavelengths. It is found that the linear models derived from the optimal narrowband vegetation indices provide strong predictions for the marshes' Leaf Area Index (LAI; R2 > 0.74 for ARVI), but not for their Aboveground Green Biomass (AGB; R2 > 0.25). The linear models derived from the simulated CASI data strongly predict the marshes' LAI (R2 = 0.93) and AGB (R2 = 0.71) and have 27 and 30 bands/variables in the final models through stepwise regression, respectively. The random forest models derived from the simulated CASI data also strongly predict the marshes' LAI and AGB (R2 = 0.91 and 0.84, respectively), where the most important variables for predicting LAI are near infrared bands at 784 and 756 nm and for predicting AGB are red bands at 684 and 670 nm. In sum, the random forest model is preferable for assessing coastal marsh biomass using CASI data as it offers high R2 for both LAI and AGB. The superior performance of the random forest model is likely due to the fact that it fully utilizes the full-spectrum data and makes no assumption of approximate normality in the sampling population. This study offers solutions
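
    A toy version of the parametric-vs-non-parametric comparison, on synthetic 48-band 'spectra' (only the band count matches the simulated CASI data; the response and all values are made up):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 300, 48
X = rng.uniform(0, 1, size=(n_samples, n_bands))
# Synthetic 'LAI' driven non-linearly by a few bands plus noise
lai = 3 * X[:, 40] * X[:, 42] + 0.5 * X[:, 20] + rng.normal(0, 0.05, n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, lai, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)

rf_r2 = r2_score(y_te, rf.predict(X_te))
lin_r2 = r2_score(y_te, lin.predict(X_te))
print(f"random forest R2: {rf_r2:.2f}   linear model R2: {lin_r2:.2f}")
```

The random forest needs no linearity or normality assumption, which parallels the reason the abstract gives for its stronger AGB predictions.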

  18. Using multinomial and imprecise probability for non-parametric modelling of rainfall in Manizales (Colombia

    Ibsen Chivatá Cárdenas

    2008-05-01

    This article presents a rainfall model constructed by applying non-parametric modelling and imprecise probabilities; these tools were used because there was not enough homogeneous information in the study area. The area's hydrological information regarding rainfall was scarce and existing hydrological time series were not uniform. A distributed extended rainfall model was constructed from so-called probability boxes (p-boxes), multinomial probability distributions and confidence intervals (a friendly algorithm was constructed for non-parametric modelling by combining the last two tools). This model confirmed the high level of uncertainty involved in local rainfall modelling. Uncertainty encompassed the whole range (domain) of probability values, thereby showing the severe limitations on information and leading to the conclusion that a detailed estimation of probability would lead to significant error. Nevertheless, relevant information was extracted; it was estimated that the maximum daily rainfall threshold (70 mm) would be surpassed at least once every three years, as would the magnitude of uncertainty affecting hydrological parameter estimation. This paper's conclusions may be of interest to non-parametric modellers and decision-makers, as such modelling and imprecise probability represent an alternative for hydrological variable assessment and maybe an obligatory procedure in the future. Their potential lies in treating scarce information, and they represent a robust modelling strategy for non-seasonal stochastic modelling conditions.

  19. Non-parametric Tuning of PID Controllers A Modified Relay-Feedback-Test Approach

    Boiko, Igor

    2013-01-01

    The relay feedback test (RFT) has become a popular and efficient tool used in process identification and automatic controller tuning. Non-parametric Tuning of PID Controllers couples new modifications of the classical RFT with application-specific optimal tuning rules to form a non-parametric method of test-and-tuning. Test and tuning are coordinated through a set of common parameters so that a PID controller can obtain the desired gain or phase margins in a system exactly, even with unknown process dynamics. The concept of process-specific optimal tuning rules in the non-parametric setup, with corresponding tuning rules for flow, level, pressure, and temperature control loops, is presented in the text. Common problems of tuning accuracy based on parametric and non-parametric approaches are addressed. In addition, the text treats the parametric approach to tuning based on the modified RFT approach and the exact model of oscillations in the system under test using the locus of a perturbed relay system (LPRS) meth...
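
    The classical describing-function version of relay-feedback tuning (the starting point that the book's modified RFT refines) can be sketched as follows; the relay amplitude, oscillation amplitude, and period below are hypothetical readings.

```python
import math

def zn_pid_from_relay(h, a, Tu):
    """Classical relay-feedback tuning (describing-function approximation):
    ultimate gain Ku = 4h / (pi * a), then Ziegler-Nichols PID rules.
    h: relay amplitude, a: oscillation amplitude, Tu: oscillation period."""
    Ku = 4 * h / (math.pi * a)
    return {"Kp": 0.6 * Ku, "Ti": Tu / 2, "Td": Tu / 8}

# Hypothetical relay-test readings
params = zn_pid_from_relay(h=1.0, a=0.5, Tu=8.0)
print(params)
```

The test is non-parametric in the sense the record describes: only the observed oscillation, not a fitted process model, enters the tuning rule.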

  20. A non-parametric Data Envelopment Analysis approach for improving energy efficiency of grape production

    Khoshroo, Alireza; Mulwa, Richard; Emrouznejad, Ali; Arabi, Behrouz

    2013-01-01

    Grape is one of the world's largest fruit crops, with approximately 67.5 million tonnes produced each year, and energy is an important element in modern grape production as it heavily depends on fossil and other energy resources. Efficient use of these energies is a necessary step toward reducing environmental hazards, preventing destruction of natural resources and ensuring agricultural sustainability. Hence, identifying excessive use of energy as well as reducing energy resources is the main focus of this paper, in order to optimize energy consumption in grape production. In this study we use a two-stage methodology to find the association of energy efficiency and performance explained by farmers' specific characteristics. In the first stage a non-parametric Data Envelopment Analysis is used to model efficiencies as an explicit function of human labor, machinery, chemicals, FYM (farmyard manure), diesel fuel, electricity and water-for-irrigation energies. In the second step, farm-specific variables such as farmers' age, gender, level of education and agricultural experience are used in a Tobit regression framework to explain how these factors influence the efficiency of grape farming. The result of the first stage shows substantial inefficiency among the grape producers in the studied area, while the second stage shows that the main difference between efficient and inefficient farmers was in the use of chemicals, diesel fuel and water for irrigation. Efficient farmers' use of chemicals such as insecticides, herbicides and fungicides was considerably lower than that of inefficient ones. The results revealed that the more educated farmers are more energy efficient in comparison with their less educated counterparts. - Highlights: • The focus of this paper is to identify excessive use of energy and optimize energy consumption in grape production. • We measure the efficiency as a function of labor/machinery/chemicals/farmyard manure/diesel-fuel/electricity/water. • Data were obtained from 41 grape

  1. Automatic computation of transfer functions

    Atcitty, Stanley; Watson, Luke Dale

    2015-04-14

    Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
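
    The patent's netlist-matrix machinery is not reproduced here, but the end product - a transfer function computed from element values - can be sketched for a single RC low-pass with hypothetical component values.

```python
from scipy import signal

# Hypothetical one-pole RC low-pass described by its element values:
# H(s) = 1 / (R*C*s + 1)
R, C = 1e3, 1e-6                 # 1 kOhm, 1 uF -> cutoff near 159 Hz
H = signal.TransferFunction([1.0], [R * C, 1.0])

# Evaluate the frequency response at the pole frequency w = 1/(R*C)
w, mag, phase = signal.bode(H, w=[1 / (R * C)])
print(f"gain at cutoff: {mag[0]:.2f} dB")   # about -3 dB
```

In a netlist-driven tool, the numerator and denominator coefficients would be assembled automatically from the element values and circuit structure instead of being typed in, which is what the record describes.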

  2. Estimation of the limit of detection with a bootstrap-derived standard error by a partly non-parametric approach. Application to HPLC drug assays

    Linnet, Kristian

    2005-01-01

    Bootstrap, HPLC, limit of blank, limit of detection, non-parametric statistics, type I and II errors.
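
    A sketch of the partly non-parametric idea suggested by these keywords, on synthetic 'HPLC' responses: the limit of blank is taken non-parametrically as a blank percentile, the detection limit adds a parametric Gaussian term, and its standard error comes from the bootstrap. The percentile choice and z = 1.645 follow a common convention and may differ from the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical responses: blank samples and a low-concentration sample
blanks = rng.normal(0.0, 1.0, size=40)
low_conc = rng.normal(4.0, 1.2, size=40)

def lod(blanks, low_conc):
    """Limit of blank: 95th percentile of blanks (non-parametric part).
    Limit of detection: LoB + 1.645 * SD of low sample (parametric part)."""
    lob = np.percentile(blanks, 95)
    return lob + 1.645 * np.std(low_conc, ddof=1)

# Bootstrap: resample both groups with replacement, re-estimate the LoD
estimates = np.array([
    lod(rng.choice(blanks, size=blanks.size, replace=True),
        rng.choice(low_conc, size=low_conc.size, replace=True))
    for _ in range(2000)
])
point = lod(blanks, low_conc)
se = estimates.std(ddof=1)
print(f"LoD = {point:.2f}, bootstrap SE = {se:.2f}")
```

The bootstrap sidesteps any closed-form variance formula for the percentile-plus-SD estimator, which is what makes a standard error available for this mixed estimator at all.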

  3. Functional Programming in Computer Science

    Anderson, Loren James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Davis, Marion Kei [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-01-19

    We explore functional programming through a 16-week internship at Los Alamos National Laboratory. Functional programming is a branch of computer science that has exploded in popularity over the past decade due to its high-level syntax, ease of parallelization, and abundant applications. First, we summarize functional programming by listing the advantages of functional programming languages over the usual imperative languages, and we introduce the concept of parsing. Second, we discuss the importance of lambda calculus in the theory of functional programming. Lambda calculus was invented by Alonzo Church in the 1930s to formalize the concept of effective computability, and every functional language is essentially some implementation of lambda calculus. Finally, we display the lasting products of the internship: additions to a compiler and runtime system for the pure functional language STG, including both a set of tests that indicate the validity of updates to the compiler and a compiler pass that checks for illegal instances of duplicate names.

  4. Comparative Study of Parametric and Non-parametric Approaches in Fault Detection and Isolation

    Katebi, S.D.; Blanke, M.; Katebi, M.R.

    This report describes a comparative study between two approaches to fault detection and isolation in dynamic systems. The first approach uses a parametric model of the system. The main components of such techniques are residual and signature generation for processing and analyzing. The second... approach is non-parametric in the sense that the signature analysis is only dependent on the frequency or time domain information extracted directly from the input-output signals. Based on these approaches, two different fault monitoring schemes are developed where the feature extraction and fault decision...

  5. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test.

    Kerschbamer, Rudolf

    2015-05-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.

  6. Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.

    Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben

    2017-06-06

    Modeling complex time-course patterns is a challenging issue in microarray studies due to the complex gene expression patterns arising in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis could be a useful and efficient tool for analyzing microarray time-course data and for exploring the complex relationships in the omics data for studying their association with disease and health.

  7. Computation of hyperspherical Bessel functions

    Tram, Thomas

    2013-01-01

    In this paper we present a fast and accurate numerical algorithm for the computation of hyperspherical Bessel functions of large order and real arguments. For the hyperspherical Bessel functions of closed type, no stable algorithm existed so far due to the lack of a backwards recurrence. We solved this problem by establishing a relation to Gegenbauer polynomials. All our algorithms are written in C and are publicly available at Github [https://github.com/lesgourg/class_public]. A Python wrapp...

  8. Deterministic computation of functional integrals

    Lobanov, Yu.Yu.

    1995-09-01

    A new method of numerical integration in functional spaces is described. This method is based on the rigorous definition of a functional integral in complete separable metric space and on the use of approximation formulas which we constructed for this kind of integral. The method is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required in this method, and no simplifying assumptions like semi-classical or mean-field approximations, collective excitations, or the introduction of 'short-time' propagators are necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by the computation of an 'ordinary' (Riemannian) integral of low dimension, thus allowing the use of the preferable deterministic algorithms (normally Gaussian quadratures) in computations, rather than the traditional stochastic (Monte Carlo) methods commonly used for the problem under consideration. The results of applying the method to the computation of the Green function of the Schroedinger equation in imaginary time, as well as the study of some models of Euclidean quantum mechanics, are presented. The comparison with results of other authors shows that our method gives significant (by an order of magnitude) economy of computer time and memory versus other known methods, while providing results with the same or better accuracy. The functional measure of the Gaussian type is considered and some of its particular cases, namely the conditional Wiener measure in quantum statistical mechanics and the functional measure in a Schwartz distribution space in two-dimensional quantum field theory, are studied in detail. Numerical examples demonstrating the

  9. Accurate computation of Mathieu functions

    Bibby, Malcolm M

    2013-01-01

    This lecture presents a modern approach for the computation of Mathieu functions. These functions find application in boundary value analysis such as electromagnetic scattering from elliptic cylinders and flat strips, as well as the analogous acoustic and optical problems, and many other applications in science and engineering. The authors review the traditional approach used for these functions, show its limitations, and provide an alternative "tuned" approach enabling improved accuracy and convergence. The performance of this approach is investigated for a wide range of parameters and mach...

  10. Measuring energy performance with sectoral heterogeneity: A non-parametric frontier approach

    Wang, H.; Ang, B.W.; Wang, Q.W.; Zhou, P.

    2017-01-01

    Evaluating economy-wide energy performance is an integral part of assessing the effectiveness of a country's energy efficiency policy. The non-parametric frontier approach has been widely used by researchers for this purpose. This paper proposes an extended non-parametric frontier approach to studying economy-wide energy efficiency and productivity performance by accounting for sectoral heterogeneity. Relevant techniques in index number theory are incorporated to quantify the driving forces behind changes in the economy-wide energy productivity index. The proposed approach facilitates flexible modelling of different sectors' production processes, and helps to examine sectors' impact on the aggregate energy performance. A case study of China's economy-wide energy efficiency and productivity performance in its 11th five-year plan period (2006–2010) is presented. It is found that sectoral heterogeneities in terms of energy performance are significant in China. Meanwhile, China's economy-wide energy productivity increased slightly during the study period, mainly driven by improvements in technical efficiency. A number of other findings are also reported. - Highlights: • We model economy-wide energy performance by considering sectoral heterogeneity. • The proposed approach can identify sectors' impact on the aggregate energy performance. • Obvious sectoral heterogeneities are identified in evaluating China's energy performance.

  11. MEASURING DARK MATTER PROFILES NON-PARAMETRICALLY IN DWARF SPHEROIDALS: AN APPLICATION TO DRACO

    Jardel, John R.; Gebhardt, Karl; Fabricius, Maximilian H.; Williams, Michael J.; Drory, Niv

    2013-01-01

    We introduce a novel implementation of orbit-based (or Schwarzschild) modeling that allows dark matter density profiles to be calculated non-parametrically in nearby galaxies. Our models require no assumptions to be made about velocity anisotropy or the dark matter profile. The technique can be applied to any dispersion-supported stellar system, and we demonstrate its use by studying the Local Group dwarf spheroidal galaxy (dSph) Draco. We use existing kinematic data at larger radii and also present 12 new radial velocities within the central 13 pc obtained with the VIRUS-W integral field spectrograph on the 2.7 m telescope at McDonald Observatory. Our non-parametric Schwarzschild models find strong evidence that the dark matter profile in Draco is cuspy for 20 ≤ r ≤ 700 pc. The profile for r ≥ 20 pc is well fit by a power law with slope α = –1.0 ± 0.2, consistent with predictions from cold dark matter simulations. Our models confirm that, despite its low baryon content relative to other dSphs, Draco lives in a massive halo.

  12. Non-parametric reconstruction of an inflaton potential from Einstein–Cartan–Sciama–Kibble gravity with particle production

    Shantanu Desai

    2016-04-01

    The coupling between spin and torsion in the Einstein–Cartan–Sciama–Kibble theory of gravity generates gravitational repulsion at very high densities, which prevents a singularity in a black hole and may create there a new universe. We show that quantum particle production in such a universe near the last bounce, which represents the Big Bang, gives the dynamics that solves the horizon, flatness, and homogeneity problems in cosmology. For a particular range of the particle production coefficient, we obtain a nearly constant Hubble parameter that gives an exponential expansion of the universe with more than 60 e-folds, which lasts about ∼10^−42 s. This scenario can thus explain cosmic inflation without requiring a fundamental scalar field and reheating. From the obtained time dependence of the scale factor, we follow the prescription of Ellis and Madsen to reconstruct in a non-parametric way a scalar field potential which gives the same dynamics of the early universe. This potential gives the slow-roll parameters of cosmic inflation, from which we calculate the tensor-to-scalar ratio, the scalar spectral index of density perturbations, and its running as functions of the production coefficient. We find that these quantities do not significantly depend on the scale factor at the Big Bounce. Our predictions for these quantities are consistent with the Planck 2015 observations.

  13. Computational complexity of Boolean functions

    Korshunov, Aleksei D [Sobolev Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk (Russian Federation)

    2012-02-28

    Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.

  14. Non-parametric adaptive importance sampling for the probability estimation of a launcher impact position

    Morio, Jerome

    2011-01-01

    Importance sampling (IS) is a useful simulation technique for estimating critical probabilities with better accuracy than Monte Carlo methods. It consists of generating random weighted samples from an auxiliary distribution rather than from the distribution of interest. The crucial part of this algorithm is the choice of an efficient auxiliary PDF, which must be able to generate the rare random events of interest more frequently. In practice, the optimisation of this auxiliary distribution is often very difficult. In this article, we propose to approximate the optimal IS auxiliary density with non-parametric adaptive importance sampling (NAIS). We apply this technique to the probability estimation of spatial launcher impact position, which has become an increasingly important issue in the field of aeronautics.
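The reweighting step at the heart of importance sampling can be sketched as follows. This toy example uses a fixed Gaussian auxiliary density rather than the paper's non-parametric adaptive one, and estimates a standard normal tail probability; it is an illustration of the IS idea, not the NAIS algorithm itself.

```python
import math
import random

random.seed(0)

def norm_pdf(x, mu=0.0, sigma=1.0):
    # Gaussian probability density
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_sampling(n=100_000, threshold=3.0, mu_aux=3.0):
    # Sample from the auxiliary N(mu_aux, 1), centred on the rare region,
    # and reweight each hit by the likelihood ratio target/auxiliary.
    total = 0.0
    for _ in range(n):
        x = random.gauss(mu_aux, 1.0)
        if x > threshold:
            total += norm_pdf(x) / norm_pdf(x, mu_aux)
    return total / n

p_is = importance_sampling()
print(p_is)  # true tail probability P(N(0,1) > 3) is about 1.35e-3
```

Because most auxiliary samples land in the rare region, the weighted estimator reaches a relative error here that plain Monte Carlo would need orders of magnitude more samples to match.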

  15. Assessing T cell clonal size distribution: a non-parametric approach.

    Olesya V Bolkhovskaya

    The clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation and cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences like autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T cell clonal size distributions from recent next generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis who underwent treatment, we invariably find power law scaling over several decades and for the first time calculate quantitatively meaningful values of the decay exponent. It proves to be much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converging towards a typical value afterwards. We discuss implications of the findings for the theoretical understanding and mathematical modeling of adaptive immunity.
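For intuition about what "calculating the decay exponent" involves, here is a minimal sketch using the standard continuous maximum-likelihood (Hill-type) estimator on synthetic Pareto-distributed clone sizes; the paper's actual non-parametric procedure differs, and the data here are simulated, not the sequencing data of the study.

```python
import math
import random

random.seed(1)

def powerlaw_mle(xs, xmin=1.0):
    # Continuous maximum-likelihood estimate of the power-law exponent:
    # alpha = 1 + n / sum(ln(x_i / xmin)), for samples x_i >= xmin
    n = len(xs)
    return 1.0 + n / sum(math.log(x / xmin) for x in xs)

# Synthetic "clone sizes": paretovariate(a) has density a / x^(a+1),
# i.e. a power law with exponent alpha = a + 1 = 2.5 here.
samples = [random.paretovariate(1.5) for _ in range(50_000)]
alpha_hat = powerlaw_mle(samples)
print(alpha_hat)  # close to the true exponent 2.5
```

With 50,000 samples the standard error of this estimator is below 0.01, so the recovered exponent is tight; real repertoire data additionally require choosing xmin and handling discreteness.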

  17. The application of non-parametric statistical method for an ALARA implementation

    Cho, Young Ho; Herr, Young Hoi

    2003-01-01

    The cost-effective reduction of Occupational Radiation Dose (ORD) at a nuclear power plant cannot be achieved without an extensive analysis of the accumulated ORD data of existing plants. Through such data analysis, it is necessary to identify which jobs at the plant repeatedly incur high ORD. In this study, the Percentile Rank Sum Method (PRSM), which is based on non-parametric statistical theory, is proposed to identify repetitive high-ORD jobs. As a case study, the method is applied to ORD data from maintenance and repair jobs at Kori units 3 and 4, pressurized water reactors with 950 MWe capacity that have been operated in Korea since 1986 and 1987, respectively. The results were verified and validated, and PRSM has been demonstrated to be an efficient method of analyzing the data
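The abstract does not give the PRSM formula, but the percentile-rank-sum idea can be sketched roughly as follows: rank each job's dose within each outage as a percentile, then sum the percentiles across outages so that repetitive high-ORD jobs accumulate the largest rank sums. Job names and dose figures below are entirely hypothetical.

```python
def percentile_ranks(values):
    # Percentile rank (0-100) of each value within its own list
    n = len(values)
    return [100.0 * sum(v2 <= v for v2 in values) / n for v in values]

# Hypothetical collective dose records (person-mSv) per job over three outages
jobs = ["insulation", "scaffolding", "ISI inspection", "valve overhaul"]
doses_by_outage = [
    [120.0, 45.0, 200.0, 80.0],
    [110.0, 50.0, 180.0, 95.0],
    [130.0, 40.0, 210.0, 85.0],
]

# Sum each job's percentile rank across outages; a job that is high-dose
# in every outage accumulates the largest rank sum.
rank_sums = [0.0] * len(jobs)
for outage in doses_by_outage:
    for i, pr in enumerate(percentile_ranks(outage)):
        rank_sums[i] += pr

ranked = sorted(zip(jobs, rank_sums), key=lambda t: -t[1])
print(ranked)  # the repetitively highest-dose job comes first
```

Because only ranks enter the sum, the screening is insensitive to the dose distribution's shape, which is the non-parametric appeal of the approach.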

  18. Performance of non-parametric algorithms for spatial mapping of tropical forest structure

    Liang Xu

    2016-08-01

    Background: Mapping tropical forest structure is a critical requirement for accurate estimation of emissions and removals from land use activities. With the availability of a wide range of remote sensing imagery of vegetation characteristics from space, development of finer resolution and more accurate maps has advanced in recent years. However, the mapping accuracy relies heavily on the quality of input layers, the algorithm chosen, and the size and quality of inventory samples for calibration and validation. Results: By using airborne lidar data as the “truth” and focusing on the mean canopy height (MCH) as a key structural parameter, we test two commonly-used non-parametric techniques, maximum entropy (ME) and random forest (RF), for developing maps over a study site in Central Gabon. Results of mapping show that both approaches have improved accuracy with more input layers in mapping canopy height at 100 m (1-ha) pixels. The bias-corrected spatial models further improve estimates for small and large trees across the tails of height distributions, with a trade-off of increasing overall mean squared error that can be readily compensated by increasing the sample size. Conclusions: A significant improvement in tropical forest mapping can be achieved by weighting the number of inventory samples against the choice of image layers and non-parametric algorithms. Without future satellite observations with better sensitivity to forest biomass, maps based on existing data will remain slightly biased towards the mean of the distribution, underestimating the upper tail and overestimating the lower tail.

  19. Normal Functions As A New Way Of Defining Computable Functions

    Leszek Dubiel

    2004-01-01

    This report sets out a new method of defining computable functions. It formalizes traditional function descriptions, and so allows functions to be defined in a very intuitive way. The discovery of the Ackermann function proved that not all functions that can be easily computed can be as easily described with Hilbert’s system of recursive functions. Normal functions lack this disadvantage.

  1. The relationship between multilevel models and non-parametric multilevel mixture models: Discrete approximation of intraclass correlation, random coefficient distributions, and residual heteroscedasticity.

    Rights, Jason D; Sterba, Sonya K

    2016-11-01

    Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.

  2. Further Empirical Results on Parametric Versus Non-Parametric IRT Modeling of Likert-Type Personality Data

    Maydeu-Olivares, Albert

    2005-01-01

    Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…

  3. The development of a new algorithm to calculate a survival function in non-parametric ways

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    In this study, a generalized formula of the Kaplan-Meier method is developed. The idea of this algorithm is that the result of the Kaplan-Meier estimator is the same as that of the redistribute-to-the-right algorithm; hence, the Kaplan-Meier estimate can be obtained by redistributing mass to the right. This can be described in the following steps. First, the same mass is assigned to all the points. Second, on reaching a censored point, its mass must be redistributed to the right according to the following rule: normalize the masses located to the right of the censored point, and redistribute the mass of the censored point to the right in proportion to the normalized masses. This is the main idea of the algorithm. The idea is more efficient than the PL-estimator in the sense that it decreases the mass in the region after the censored point. Just like the redistribute-to-the-right algorithm, this method is consistent with probability theory
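The redistribute-to-the-right steps above can be sketched directly; a minimal illustration of the equivalence with the product-limit (Kaplan-Meier) estimator on classic toy data, not the authors' generalized formula.

```python
def km_redistribute(times, events):
    # times sorted ascending; events[i] = 1 for a death, 0 for censoring.
    # Start with equal mass everywhere; on a censored point, pass its mass
    # to the points strictly to its right, in proportion to their masses.
    n = len(times)
    mass = [1.0 / n] * n
    for i in range(n):
        if events[i] == 0 and i < n - 1:
            right = sum(mass[i + 1:])
            if right > 0:
                for j in range(i + 1, n):
                    mass[j] += mass[i] * mass[j] / right
                mass[i] = 0.0
    # Survival estimate S(t_i) = mass remaining strictly after point i
    surv, remaining = [], 1.0
    for m in mass:
        remaining -= m
        surv.append(remaining)
    return surv

# Toy data: deaths at times 1, 3, 4; one censored observation at time 2
surv = km_redistribute([1, 2, 3, 4], [1, 0, 1, 1])
print(surv)  # [0.75, 0.75, 0.375, 0.0]
```

The product-limit computation for the same data gives S(1) = 3/4, S(3) = 3/4 · 1/2 = 0.375, S(4) = 0, matching the redistributed masses exactly.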

  4. Trend Analysis of Pahang River Using Non-Parametric Analysis: Mann-Kendall Trend Test

    Nur Hishaam Sulaiman; Mohd Khairul Amri Kamarudin; Mohd Khairul Amri Kamarudin; Ahmad Dasuki Mustafa; Muhammad Azizi Amran; Fazureen Azaman; Ismail Zainal Abidin; Norsyuhada Hairoma

    2015-01-01

    Flooding is common in Pahang, especially during the northeast monsoon season from November to February. Three river cross-stations, Lubuk Paku, Sg. Yap and Temerloh, were selected as the study area. The stream flow and water level data were gathered from DID records. The data sets for this study were analysed using a non-parametric analysis, the Mann-Kendall trend test. The results obtained from the stream flow and water level analysis indicate significantly positive trends for Lubuk Paku (p = 0.001) and Sg. Yap (p < 0.0001) from 1972-2011, with p-values < 0.05. The Temerloh data (p = 0.178) from 1963-2011 recorded no trend in stream flow but a negative trend in water level. Hydrological patterns and trends are strongly affected by external factors such as the northeast monsoon, which develops over the South China Sea and affects Pahang from November to March. Other factors, such as the development and management of the areas, may also have affected the data and results. Hydrological patterns are important indicators of river trends such as stream flow and water level, and can be used by local authorities for flood mitigation. (author)
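The Mann-Kendall test itself is standard and short enough to sketch; this minimal version omits the ties correction for brevity and uses synthetic data, not the Pahang series.

```python
import math

def mann_kendall(xs):
    # S statistic: sum of sign(x_j - x_i) over all pairs i < j
    n = len(xs)
    s = sum(
        (xs[j] > xs[i]) - (xs[j] < xs[i])
        for i in range(n) for j in range(i + 1, n)
    )
    # Variance of S under the no-trend null (no ties correction here)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # Two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return s, z, p

# A mostly increasing series: a significant positive trend is expected
s, z, p = mann_kendall([2.0, 3.1, 3.0, 4.2, 5.5, 5.9, 7.0, 8.1])
print(s, round(z, 2), round(p, 4))  # S = 26, z ≈ 3.09, p ≈ 0.002
```

A positive S with a small p-value corresponds to the "significantly positive trend" reported for Lubuk Paku and Sg. Yap; real applications should add the ties correction and, for autocorrelated series, a pre-whitening step.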

  5. Design Automation Using Script Languages. High-Level CAD Templates in Non-Parametric Programs

    Moreno, R.; Bazán, A. M.

    2017-10-01

    The main purpose of this work is to study the advantages offered by the application of traditional techniques of technical drawing in processes for automation of the design, with non-parametric CAD programs provided with scripting languages. Given that an example drawing can be solved with traditional step-by-step detailed procedures, it is possible to do the same with CAD applications and to generalize it later, incorporating references. In today’s modern CAD applications, there are striking absences of solutions for building engineering: oblique projections (military and cavalier), 3D modelling of complex stairs, roofs, furniture, and so on. The use of geometric references (using variables in script languages) and their incorporation into high-level CAD templates allows the automation of processes. Instead of repeatedly creating similar designs or modifying their data, users should be able to use these templates to generate future variations of the same design. This paper presents the automation process of several complex drawing examples based on CAD script files aided with parametric geometry calculation tools. The proposed method allows us to solve complex geometry designs not currently incorporated in CAD applications and to subsequently create other new derivatives without user intervention. Automation in the generation of complex designs not only saves time but also increases the quality of the presentations and reduces the possibility of human errors.

  6. A Non-Parametric Delphi Approach to Foster Innovation Policy Debate in Spain

    Juan Carlos Salazar-Elena

    2016-05-01

    The aim of this paper is to identify changes needed in Spain’s innovation policy to close the gap between its innovation results and those of other European countries in pursuit of sustainable leadership. To do this we apply the Delphi methodology to experts from academia, business, and government. To overcome the shortcomings of traditional descriptive methods, we develop an inferential analysis by following a non-parametric bootstrap method which enables us to identify important changes that should be implemented. Particularly interesting is the support found for improving the interconnections among the relevant agents of the innovation system (instead of focusing exclusively on the provision of knowledge and technological inputs through R&D activities), or the support found for “soft” policy instruments aimed at providing a homogeneous framework to assess the innovation capabilities of firms (e.g., for funding purposes). Attention to potential innovators among small and medium enterprises (SMEs) and traditional industries is particularly encouraged by experts.
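The non-parametric bootstrap behind this kind of inference can be sketched in a few lines: resample the expert responses with replacement and read a confidence interval off the percentiles of the resampled statistic. The Likert-scale ratings below are hypothetical, not the study's survey data.

```python
import random

random.seed(42)

def bootstrap_ci(data, stat=lambda xs: sum(xs) / len(xs),
                 n_boot=5000, alpha=0.05):
    # Percentile bootstrap: resample with replacement, recompute the
    # statistic, and take the alpha/2 and 1 - alpha/2 empirical quantiles.
    n = len(data)
    stats = sorted(
        stat([random.choice(data) for _ in range(n)]) for _ in range(n_boot)
    )
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2))]

# Hypothetical expert agreement ratings (1-5 Likert) for one policy item
ratings = [4, 5, 3, 4, 4, 5, 2, 4, 5, 3, 4, 4]
lo, hi = bootstrap_ci(ratings)
print(lo, hi)
```

If the whole interval sits above the scale midpoint, the panel's support for the item is significant without assuming any distribution for the ratings, which is the appeal over a purely descriptive Delphi summary.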

  7. An artificial neural network architecture for non-parametric visual odometry in wireless capsule endoscopy

    Dimas, George; Iakovidis, Dimitris K; Karargyris, Alexandros; Ciuti, Gastone; Koulaouzidis, Anastasios

    2017-01-01

    Wireless capsule endoscopy is a non-invasive screening procedure of the gastrointestinal (GI) tract performed with an ingestible capsule endoscope (CE) of the size of a large vitamin pill. Such endoscopes are equipped with a usually low-frame-rate color camera which enables the visualization of the GI lumen and the detection of pathologies. The localization of the commercially available CEs is performed in the 3D abdominal space using radio-frequency (RF) triangulation from external sensor arrays, in combination with transit time estimation. State-of-the-art approaches, such as magnetic localization, which have been experimentally proved more accurate than the RF approach, are still at an early stage. Recently, we have demonstrated that CE localization is feasible using solely visual cues and geometric models. However, such approaches depend on camera parameters, many of which are unknown. In this paper the authors propose a novel non-parametric visual odometry (VO) approach to CE localization based on a feed-forward neural network architecture. The effectiveness of this approach in comparison to state-of-the-art geometric VO approaches is validated using a robotic-assisted in vitro experimental setup. (paper)

  9. Two non-parametric methods for derivation of constraints from radiotherapy dose–histogram data

    Ebert, M A; Kennedy, A; Joseph, D J; Gulliford, S L; Buettner, F; Foo, K; Haworth, A; Denham, J W

    2014-01-01

    Dose constraints based on histograms provide a convenient and widely-used method for informing and guiding radiotherapy treatment planning. Methods of derivation of such constraints are often poorly described. Two non-parametric methods for derivation of constraints are described and investigated in the context of determination of dose-specific cut-points—values of the free parameter (e.g., percentage volume of the irradiated organ) which best reflect resulting changes in complication incidence. A method based on receiver operating characteristic (ROC) analysis and one based on a maximally-selected standardized rank sum are described and compared using rectal toxicity data from a prostate radiotherapy trial. Multiple test corrections are applied using a free step-down resampling algorithm, which accounts for the large number of tests undertaken to search for optimal cut-points and the inherent correlation between dose–histogram points. Both methods provide consistent significant cut-point values, with the rank sum method displaying some sensitivity to the underlying data. The ROC method is simple to implement and can utilize a complication atlas, though an advantage of the rank sum method is the ability to incorporate all complication grades without the need for grade dichotomization. (note)
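The ROC-based cut-point search described above can be sketched via the Youden index: scan every candidate dose-histogram value and keep the one maximizing sensitivity + specificity − 1. The percentage-volume values and toxicity labels below are hypothetical, and this sketch omits the multiple-test correction the paper applies.

```python
def youden_cutpoint(values, labels):
    # Scan candidate cut-points; "value >= cut" predicts a complication.
    # Keep the cut maximizing J = sensitivity + specificity - 1.
    pos = sum(labels)
    neg = len(labels) - pos
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < cut and y == 0)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical percentage-volume values with toxicity labels (1 = complication)
v = [10, 15, 20, 25, 30, 35, 40, 45, 50, 55]
y = [0,  0,  0,  0,  1,  0,  1,  1,  1,  1]
cut, j = youden_cutpoint(v, y)
print(cut, round(j, 2))  # cut-point 30 with J = 0.8
```

Searching many candidate cut-points this way inflates the false-positive rate, which is exactly why the paper resamples (free step-down) to correct the significance of the selected cut-point.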

  10. Performances of non-parametric statistics in sensitivity analysis and parameter ranking

    Saltelli, A.

    1987-01-01

    Twelve parametric and non-parametric sensitivity analysis techniques are compared in the case of non-linear model responses. The test models used are taken from the long-term risk analysis for the disposal of high level radioactive waste in a geological formation. They describe the transport of radionuclides through a set of engineered and natural barriers from the repository to the biosphere and to man. The output data from these models are the dose rates affecting the maximum exposed individual of a critical group at a given point in time. All the techniques are applied to the output from the same Monte Carlo simulations, where a modified version of the Latin Hypercube method is used for the sample selection. Hypothesis testing is systematically applied to quantify the degree of confidence in the results given by the various sensitivity estimators. The estimators are ranked according to their robustness and stability on the basis of two test cases. The conclusion is that no estimator can be considered the best from all points of view, and the use of more than one estimator in sensitivity analysis is recommended

  11. Impulse response identification with deterministic inputs using non-parametric methods

    Bhargava, U.K.; Kashyap, R.L.; Goodman, D.M.

    1985-01-01

    This paper addresses the problem of impulse response identification using non-parametric methods. Although the techniques developed herein apply to the truncated, untruncated, and circulant models, we focus on the truncated model, which is useful in certain applications. Two methods of impulse response identification are presented. The first is based on the minimization of the C_L statistic, which is an estimate of the mean-square prediction error; the second is a Bayesian approach. For both of these methods, we consider the effects of using both the identity matrix and the Laplacian matrix as weights on the energy in the impulse response. In addition, we present a method for estimating the effective length of the impulse response. Estimating the length is particularly important in the truncated case. Finally, we develop a method for estimating the noise variance at the output. Often, prior information on the noise variance is not available, and a good estimate is crucial to the success of estimating the impulse response with a non-parametric technique

  12. Energy-saving and emission-abatement potential of Chinese coal-fired power enterprise: A non-parametric analysis

    Wei, Chu; Löschel, Andreas; Liu, Bing

    2015-01-01

    In the context of soaring demand for electricity, mitigating and controlling greenhouse gas emissions is a great challenge for China's power sector. Increasing attention has been placed on the evaluation of energy efficiency and CO2 abatement potential in the power sector. However, studies at the micro-level are relatively rare due to serious data limitations. This study uses the 2004 and 2008 Census data of Zhejiang province to construct a non-parametric frontier in order to assess the abatement space of energy and associated CO2 emissions from China's coal-fired power enterprises. A Weighted Russell Directional Distance Function (WRDDF) is applied to construct an energy-saving potential index and a CO2 emission-abatement potential index. Both indicators depict the inefficiency level in terms of energy utilization and CO2 emissions of electric power plants. Our results show a substantial variation of energy-saving potential and CO2 abatement potential among enterprises. We find that large power enterprises are less efficient in 2004, but become more efficient than smaller enterprises in 2008. State-owned enterprises (SOEs) do not differ significantly in 2008 from 2004, but perform better than their non-SOE counterparts in 2008. This change in performance for large enterprises and SOEs might be driven by the “top-1000 Enterprise Energy Conservation Action” that was implemented in 2006. - Highlights: • Energy-saving potential and CO2 abatement potential for Chinese power enterprises are evaluated. • The potential to curb energy use and emissions shows great variation and dynamic changes. • Large enterprises are less efficient than small enterprises in 2004, but more efficient in 2008. • State-owned enterprises perform better than non-state-owned enterprises in 2008

  13. A Non-Parametric Surrogate-based Test of Significance for T-Wave Alternans Detection

    Nemati, Shamim; Abdala, Omar; Bazán, Violeta; Yim-Yeh, Susie; Malhotra, Atul; Clifford, Gari

    2010-01-01

    We present a non-parametric adaptive surrogate test that allows for the differentiation of statistically significant T-Wave Alternans (TWA) from alternating patterns that can be solely explained by the statistics of noise. The proposed test is based on estimating the distribution of noise-induced alternating patterns in a beat sequence from a set of surrogate data derived from repeated reshuffling of the original beat sequence. Thus, in assessing the significance of the observed alternating patterns in the data, no assumptions are made about the underlying noise distribution. In addition, since the distribution of noise-induced alternans magnitudes is calculated separately for each sequence of beats within the analysis window, the method is robust to data non-stationarities in both noise and TWA. The proposed surrogate method for rejecting noise was compared to the standard noise rejection methods used with the Spectral Method (SM) and the Modified Moving Average (MMA) techniques. Using a previously described realistic multi-lead model of TWA, and real physiological noise, we demonstrate that the proposed approach reduces false TWA detections while maintaining a lower missed-TWA-detection rate than all the other methods tested. A simple averaging-based TWA estimation algorithm was coupled with the surrogate significance testing and was evaluated on three public databases: the Normal Sinus Rhythm Database (NSRDB), the Chronic Heart Failure Database (CHFDB) and the Sudden Cardiac Death Database (SCDDB). Differences in TWA amplitudes between each database were evaluated at matched heart rate (HR) intervals from 40 to 120 beats per minute (BPM). Using the two-sample Kolmogorov-Smirnov test, we found that significant differences in TWA levels exist between each patient group at all decades of heart rates. The most marked difference was generally found at higher heart rates, and the new technique resulted in a larger margin of separability between patient populations than
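The reshuffling idea can be sketched as follows, with a deliberately simplified alternans statistic (the difference between even- and odd-beat averages); the estimators and beat features used in the paper are more elaborate, and the beat amplitudes below are synthetic.

```python
import random

random.seed(7)

def alternans_magnitude(beats):
    # Simplified TWA statistic: |mean(even beats) - mean(odd beats)|
    even = [b for i, b in enumerate(beats) if i % 2 == 0]
    odd = [b for i, b in enumerate(beats) if i % 2 == 1]
    return abs(sum(even) / len(even) - sum(odd) / len(odd))

def surrogate_p_value(beats, n_surrogates=2000):
    # Reshuffling the beat order destroys any ABAB pattern while preserving
    # the amplitude distribution, so the shuffled magnitudes estimate the
    # noise-only null; p = fraction of surrogates at least as large as observed.
    observed = alternans_magnitude(beats)
    count = 0
    for _ in range(n_surrogates):
        s = beats[:]
        random.shuffle(s)
        if alternans_magnitude(s) >= observed:
            count += 1
    return observed, count / n_surrogates

# Synthetic T-wave amplitudes with a genuine ABAB alternation plus noise
beats = [1.0 + (0.2 if i % 2 == 0 else -0.2) + random.gauss(0, 0.05)
         for i in range(64)]
obs, p = surrogate_p_value(beats)
print(round(obs, 3), p)
```

Because the null distribution is built from the data themselves, no parametric noise model is assumed, which is the core of the surrogate test; a noise-only beat sequence run through the same code yields a large p-value.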

  14. Parametric and non-parametric approach for sensory RATA (Rate-All-That-Apply) method of ledre profile attributes

    Hastuti, S.; Harijono; Murtini, E. S.; Fibrianto, K.

    2018-03-01

    This study aims to investigate the use of parametric and non-parametric approaches for the sensory RATA (Rate-All-That-Apply) method. Ledre, a unique local food product of Bojonegoro, was used as the point of interest, and 319 panelists were involved in the study. The results showed that ledre is characterized by an easily crushed texture, stickiness in the mouth, a stingy sensation and ease of swallowing. It also has a strong banana flavour and a brown colour. Compared to eggroll and semprong, ledre shows more variation in taste as well as in roll length. As the RATA questionnaire is designed to collect categorical data, a non-parametric approach is the common statistical procedure. However, the parametric approach yielded similar results, despite the non-normally distributed data. This suggests that a parametric approach can be applicable to consumer studies with large numbers of respondents, even though the data may not satisfy the assumptions of ANOVA (Analysis of Variance).

  15. Sensitivity of Technical Efficiency Estimates to Estimation Methods: An Empirical Comparison of Parametric and Non-Parametric Approaches

    de-Graft Acquah, Henry

    2014-01-01

    This paper highlights the sensitivity of technical efficiency estimates to the estimation approach, using empirical data. Firm-specific technical efficiency and mean technical efficiency are estimated using the non-parametric Data Envelopment Analysis (DEA) and the parametric Corrected Ordinary Least Squares (COLS) and Stochastic Frontier Analysis (SFA) approaches. Mean technical efficiency is found to be sensitive to the choice of estimation technique. Analysis of variance and Tukey’s test sugge...

  16. Estimating technical efficiency in the hospital sector with panel data: a comparison of parametric and non-parametric techniques.

    Siciliani, Luigi

    2006-01-01

    Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. Highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.

  17. Evaluation of world's largest social welfare scheme: An assessment using non-parametric approach.

    Singh, Sanjeet

    2016-08-01

    Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) is the world's largest social welfare scheme, implemented in India for poverty alleviation through rural employment generation. This paper aims to evaluate and rank the performance of the Indian states under the MGNREGA scheme. A non-parametric approach, Data Envelopment Analysis (DEA), is used to calculate the overall technical, pure technical, and scale efficiencies of the states. The sample data are drawn from the annual official reports published by the Ministry of Rural Development, Government of India. Based on three selected input parameters (expenditure indicators) and five output parameters (employment generation indicators), I apply both input- and output-oriented DEA models to estimate how well the states utilized their resources and generated outputs during the financial year 2013-14. The relative performance evaluation has been made under the assumption of constant returns to scale and also under variable returns to scale to assess the impact of scale on performance. The results indicate that the main sources of inefficiency are both the technical and the managerial practices adopted. Eleven states are overall technically efficient and operate at the optimum scale, whereas 18 states are pure technically (managerially) efficient. It has been found that some states need to alter their scheme size to perform at par with the best performing states. For inefficient states, optimal input and output targets along with the resource savings and output gains are calculated. The analysis shows that if all inefficient states operated at optimal input and output levels, on average 17.89% of total expenditure, a total amount of $780 million, could have been saved in a single year. Most of the inefficient states perform poorly when it comes to the participation of women and disadvantaged sections (SC&ST) in the scheme.
In order to catch up with the performance of best performing states, inefficient states on an average need to enhance
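
    The input-oriented, constant-returns DEA model used in studies like this one can be sketched as a small linear program. The code below is a generic CCR formulation with made-up single-input/single-output data, not the MGNREGA dataset, and `dea_ccr_input` is a hypothetical helper name:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR (constant returns to scale) efficiency of unit j0.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Minimize theta subject to
        sum_j lam_j * x_j <= theta * x_{j0}   (inputs, scaled by theta)
        sum_j lam_j * y_j >= y_{j0}           (outputs at least unit j0's)
        lam >= 0.
    Decision variables: [theta, lam_1, ..., lam_n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                      # minimize theta
    A_in = np.hstack([-X[j0].reshape(m, 1), X.T])   # X^T lam - theta x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])     # -Y^T lam <= -y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[j0]]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun
```

    With one input and one output, a unit producing the same output from twice the input of the best unit scores 0.5, matching the intuition of input-oriented efficiency.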

  18. RATGRAPH: Computer Graphing of Rational Functions.

    Minch, Bradley A.

    1987-01-01

    Presents an easy-to-use Applesoft BASIC program that graphs rational functions and any asymptotes that the functions might have. Discusses the nature of rational functions, graphing them manually, employing a computer to graph rational functions, and describes how the program works. (TW)
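
    The asymptote computation such a program performs can be sketched numerically: vertical asymptotes sit at real roots of the denominator, and the polynomial quotient of numerator by denominator gives the horizontal or oblique asymptote. This sketch ignores removable singularities (shared roots of numerator and denominator):

```python
import numpy as np

def asymptotes(num, den):
    """Vertical asymptotes and polynomial asymptote of num(x)/den(x).
    num, den: coefficient lists, highest degree first."""
    # vertical asymptotes: real roots of the denominator
    vertical = [r.real for r in np.roots(den) if abs(r.imag) < 1e-9]
    # end behaviour: polynomial quotient of num / den
    quotient, _ = np.polydiv(np.array(num, float), np.array(den, float))
    return sorted(vertical), quotient

v, q = asymptotes([1, 0, 1], [1, -1])   # (x^2 + 1) / (x - 1)
# vertical asymptote at x = 1; oblique asymptote y = x + 1
```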

  19. Computational network design from functional specifications

    Peng, Chi Han; Yang, Yong Liang; Bao, Fan; Fink, Daniel; Yan, Dongming; Wonka, Peter; Mitra, Niloy J.

    2016-01-01

    of people in a workspace. Designing such networks from scratch is challenging as even local network changes can have large global effects. We investigate how to computationally create networks starting from only high-level functional specifications

  20. Computing the zeros of analytic functions

    Kravanja, Peter

    2000-01-01

    Computing all the zeros of an analytic function and their respective multiplicities, locating clusters of zeros of analytic functions, computing zeros and poles of meromorphic functions, and solving systems of analytic equations are problems in computational complex analysis that lead to a rich blend of mathematics and numerical analysis. This book treats these four problems in a unified way. It contains not only theoretical results (based on formal orthogonal polynomials or rational interpolation) but also numerical analysis and algorithmic aspects, implementation heuristics, and polished software (the package ZEAL) that is available via the CPC Program Library. Graduate students and researchers in numerical mathematics will find this book very readable.

  1. Notes on the Implementation of Non-Parametric Statistics within the Westinghouse Realistic Large Break LOCA Evaluation Model (ASTRUM)

    Frepoli, Cesare; Oriani, Luca

    2006-01-01

    In recent years, non-parametric or order statistics methods have been widely used to assess the impact of the uncertainties within Best-Estimate LOCA evaluation models. The bounding of the uncertainties is achieved with a direct Monte Carlo sampling of the uncertainty attributes, with the minimum trial number selected to 'stabilize' the estimation of the critical output values (peak cladding temperature (PCT), local maximum oxidation (LMO), and core-wide oxidation (CWO)). A non-parametric order statistics uncertainty analysis was recently implemented within the Westinghouse Realistic Large Break LOCA evaluation model, also referred to as the 'Automated Statistical Treatment of Uncertainty Method' (ASTRUM). The implementation and interpretation of order statistics in safety analysis are not fully consistent within the industry. This has led to an extensive public debate among regulators and researchers which can be found in the open literature. The USNRC-approved Westinghouse method follows a rigorous implementation of order statistics theory, which leads to the execution of 124 simulations within a Large Break LOCA analysis. This is a solid approach which guarantees that a bounding value (at 95% probability) of the 95th percentile for each of the three 10 CFR 50.46 ECCS design acceptance criteria (PCT, LMO and CWO) is obtained. The objective of this paper is to provide additional insights into the ASTRUM statistical approach, with a more in-depth analysis of the pros and cons of order statistics and of the Westinghouse approach to implementing this statistical methodology. (authors)
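
    The 124-run figure comes from the standard order-statistics sample-size computation (the Wilks formula and its multi-output extension): the smallest N such that the N sample maxima bound the 95th percentiles of all m outputs simultaneously with 95% confidence. A sketch of that computation, as I understand it:

```python
from math import comb

def min_runs(m, gamma=0.95, beta=0.95):
    """Smallest N such that the sample maxima of m outputs bound their
    gamma-quantiles simultaneously with confidence at least beta:
        beta <= sum_{j=0}^{N-m} C(N,j) gamma^j (1-gamma)^(N-j).
    For m = 1 this reduces to Wilks' condition 1 - gamma^N >= beta."""
    N = m
    while True:
        conf = sum(comb(N, j) * gamma**j * (1 - gamma)**(N - j)
                   for j in range(N - m + 1))
        if conf >= beta:
            return N
        N += 1

print(min_runs(1))   # classic single-output 95/95 result
print(min_runs(3))   # three criteria (PCT, LMO, CWO)
```

    With one output the condition gives 59 runs; with three outputs bounded simultaneously it gives 124, the ASTRUM run count quoted above.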

  2. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the use of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms performed better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
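
    As a rough illustration of the shared ingredients, a minimum-statistics-style noise tracker combined with a Wiener-type gain can be sketched as follows. The smoothing constant, window length, and gain floor are illustrative choices, not the algorithms compared in the study:

```python
import numpy as np

def wiener_gains(power_frames, win=8, floor=0.1):
    """Per-frame Wiener-type gain with a minimum-statistics-style
    noise estimate (running minimum of the smoothed frame power)."""
    power_frames = np.asarray(power_frames, float)
    smoothed = np.empty_like(power_frames)
    s = power_frames[0]
    for i, p in enumerate(power_frames):
        s = 0.8 * s + 0.2 * p              # recursive smoothing
        smoothed[i] = s
    gains = np.empty_like(power_frames)
    for i in range(len(power_frames)):
        lo = max(0, i - win + 1)
        noise = smoothed[lo:i + 1].min()   # minimum over sliding window
        gains[i] = max(1.0 - noise / max(smoothed[i], 1e-12), floor)
    return gains
```

    During noise-only frames the gain sits at the floor; when the frame power rises well above the tracked noise minimum, the gain approaches one and the frame is passed through.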

  3. CADDIS Volume 4. Data Analysis: PECBO Appendix - R Scripts for Non-Parametric Regressions

    Script for computing nonparametric regression analysis. Overview of using scripts to infer environmental conditions from biological observations, statistically estimating species-environment relationships, statistical scripts.

  4. Function Package for Computing Quantum Resource Measures

    Huang, Zhiming

    2018-05-01

    In this paper, we present a function package for calculating quantum resource measures and the dynamics of open systems. Our package includes common operators and operator lists, and frequently-used functions for computing quantum entanglement, quantum correlation, quantum coherence, quantum Fisher information, and dynamics in noisy environments. We briefly explain the functions of the package and illustrate how to use it with several typical examples. We expect this package to be a useful tool for future research and education.

  5. On computing special functions in marine engineering

    Constantinescu, E.; Bogdan, M.

    2015-11-01

    Important modeling applications in marine engineering lead us to a special class of solutions of difficult differential equations with variable coefficients. In order to solve and implement such models (in wave theory, in acoustics, in hydrodynamics, in electromagnetic waves, but also in many other engineering fields), it is necessary to compute so-called special functions: Bessel functions, modified Bessel functions, spherical Bessel functions, and Hankel functions. The aim of this paper is to develop numerical solutions in Matlab for the above-mentioned special functions. Taking into account the main properties of Bessel and modified Bessel functions, we briefly present analytical solutions (where possible) in the form of series. In particular, the behavior of these special functions is studied using Matlab facilities: numerical solutions and plotting. Finally, the behavior of the special functions is compared and other directions for investigating properties of Bessel and spherical Bessel functions are pointed out. The asymptotic forms of Bessel functions and modified Bessel functions allow the determination of important properties of these functions. The modified Bessel functions tend to look more like decaying and growing exponentials.
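
    Although the paper works in Matlab, the series solutions it mentions are easy to sketch in any language. Below are the standard power series for J0 and I0; the unsigned terms of I0 are what produce the exponential-like growth noted above:

```python
from math import factorial

def bessel_j0(x, terms=30):
    """J_0(x) = sum_{k>=0} (-1)^k (x/2)^(2k) / (k!)^2"""
    return sum((-1)**k * (x / 2)**(2 * k) / factorial(k)**2
               for k in range(terms))

def bessel_i0(x, terms=30):
    """I_0(x) = sum_{k>=0} (x/2)^(2k) / (k!)^2
    (same series without the alternating sign)."""
    return sum((x / 2)**(2 * k) / factorial(k)**2 for k in range(terms))
```

    For moderate arguments the truncated series matches tabulated values to machine precision; for large arguments one switches to the asymptotic forms the abstract mentions.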

  6. Computer assisted functional analysis. Computer gestuetzte funktionelle Analyse

    Schmidt, H A.E.; Roesler, H

    1982-01-01

    The latest developments in computer-assisted functional analysis (CFA) in nuclear medicine are presented in about 250 papers of the 19th international annual meeting of the Society of Nuclear Medicine (Bern, September 1981). Apart from the mathematical and instrumental aspects of CFA, computerized emission tomography is given particular attention. Advances in nuclear medical diagnosis in the fields of radiopharmaceuticals, cardiology, angiology, neurology, ophthalmology, pulmonology, gastroenterology, nephrology, endocrinology, oncology and osteology are discussed.

  7. Dynamics and computation in functional shifts

    Namikawa, Jun; Hashimoto, Takashi

    2004-07-01

    We introduce a new type of shift dynamics as an extended model of symbolic dynamics, and investigate the characteristics of shift spaces from the viewpoints of both dynamics and computation. This shift dynamics is called a functional shift, which is defined by a set of bi-infinite sequences of some functions on a set of symbols. To analyse the complexity of functional shifts, we measure them in terms of topological entropy, and locate their languages in the Chomsky hierarchy. Through this study, we argue that considering functional shifts from the viewpoints of both dynamics and computation gives us opposite results about the complexity of systems. We also describe a new class of shift spaces whose languages are not recursively enumerable.

  8. BLUES function method in computational physics

    Indekeu, Joseph O.; Müller-Nedebock, Kristian K.

    2018-04-01

    We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.

  9. Computing complex Airy functions by numerical quadrature

    A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)

    2001-01-01

    Integral representations are considered of solutions of the Airy differential equation w'' - z w = 0 for computing Airy functions for complex values of z. In a first method, contour integral representations of the Airy functions are written as non-oscillating

  10. Computer Games Functioning as Motivation Stimulants

    Lin, Grace Hui Chin; Tsai, Tony Kung Wan; Chien, Paul Shih Chieh

    2011-01-01

    Numerous scholars have suggested that computer games can function as influential motivation stimulants for English learning, showing benefits as learning tools (Clarke and Dede, 2007; Dede, 2009; Klopfer and Squire, 2009; Liu and Chu, 2010; Mitchell, Dede & Dunleavy, 2009). This study aimed to further test and verify the above suggestion,…

  11. Function Follows Performance in Evolutionary Computational Processing

    Pasold, Anke; Foged, Isak Worre

    2011-01-01

    As the title ‘Function Follows Performance in Evolutionary Computational Processing’ suggests, this paper explores the potentials of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied...

  12. rSeqNP: a non-parametric approach for detecting differential expression and splicing from RNA-Seq data.

    Shi, Yang; Chinnaiyan, Arul M; Jiang, Hui

    2015-07-01

    High-throughput sequencing of transcriptomes (RNA-Seq) has become a powerful tool to study gene expression. Here we present an R package, rSeqNP, which implements a non-parametric approach to test for differential expression and splicing from RNA-Seq data. rSeqNP uses permutation tests to assess statistical significance and can be applied to a variety of experimental designs. By combining information across isoforms, rSeqNP is able to detect more differentially expressed or spliced genes from RNA-Seq data. The R package with its source code and documentation is freely available at http://www-personal.umich.edu/∼jianghui/rseqnp/. Contact: jianghui@umich.edu. Supplementary data are available at Bioinformatics online.
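
    The permutation-testing idea rSeqNP relies on can be sketched generically. The statistic below (difference in group means, with exact enumeration of label assignments) is an illustration only, not the package's gene-level statistic:

```python
from itertools import combinations

def exact_perm_test(group1, group2):
    """Two-sided exact permutation p-value for the difference in means."""
    pooled = group1 + group2
    n1 = len(group1)
    obs = abs(sum(group1) / n1 - sum(group2) / len(group2))
    total = sum(pooled)
    count = hits = 0
    # enumerate every way of assigning n1 of the pooled values to group 1
    for idx in combinations(range(len(pooled)), n1):
        s1 = sum(pooled[i] for i in idx)
        m1 = s1 / n1
        m2 = (total - s1) / (len(pooled) - n1)
        count += 1
        if abs(m1 - m2) >= obs - 1e-12:
            hits += 1
    return hits / count

p = exact_perm_test([10.0, 11.0, 12.0, 13.0], [0.0, 1.0, 2.0, 3.0])
```

    With fully separated groups of four, only the observed assignment and its mirror image reach the observed difference, giving p = 2/70.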

  13. A non-parametric conditional bivariate reference region with an application to height/weight measurements on normal girls

    Petersen, Jørgen Holm

    2009-01-01

    A conceptually simple two-dimensional conditional reference curve is described. The curve gives a decision basis for determining whether a bivariate response from an individual is "normal" or "abnormal" when taking into account that a third (conditioning) variable may influence the bivariate response. The reference curve is not only characterized analytically but also by geometric properties that are easily communicated to medical doctors - the users of such curves. The reference curve estimator is completely non-parametric, so no distributional assumptions are needed about the two-dimensional response. An example that will serve to motivate and illustrate the reference is the study of the height/weight distribution of 7-8-year-old Danish school girls born in 1930, 1950, or 1970.

  14. Computation of the Complex Probability Function

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ledwith, Patrick John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-22

    The complex probability function is important in many areas of physics and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements on the Gauss-Hermite quadrature for the complex probability function.
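
    The Gauss-Hermite approach can be sketched directly: with nodes t_k and weights w_k of the n-point rule, the complex probability (Faddeeva) function is approximated as w(z) ≈ (i/π) Σ w_k/(z − t_k), valid for Im(z) > 0. A minimal sketch:

```python
import numpy as np

def faddeeva_gh(z, n=64):
    """Gauss-Hermite approximation of the complex probability
    (Faddeeva) function
        w(z) = (i/pi) * int exp(-t^2) / (z - t) dt,   Im(z) > 0."""
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    return 1j / np.pi * np.sum(weights / (z - nodes))
```

    The approximation degrades as z approaches the real axis (the pole nears the quadrature nodes), which is one of the shortcomings the report discusses; on the imaginary axis it can be checked against the identity w(iy) = exp(y^2) erfc(y).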

  15. Technical Topic 3.2.2.d Bayesian and Non-Parametric Statistics: Integration of Neural Networks with Bayesian Networks for Data Fusion and Predictive Modeling

    2016-05-31

    Final report covering 15 April 2014 to 14 January 2015. Distribution unlimited.

  16. Non-parametric co-clustering of large scale sparse bipartite networks on the GPU

    Hansen, Toke Jansen; Mørup, Morten; Hansen, Lars Kai

    2011-01-01

    of row and column clusters from a hypothesis space of an infinite number of clusters. To reach large scale applications of co-clustering we exploit that parameter inference for co-clustering is well suited for parallel computing. We develop a generic GPU framework for efficient inference on large scale sparse bipartite networks and achieve a speedup of two orders of magnitude compared to estimation based on conventional CPUs. In terms of scalability we find for networks with more than 100 million links that reliable inference can be achieved in less than an hour on a single GPU. To efficiently manage

  17. Non-parametric estimation of the availability in a general repairable system

    Gamiz, M.L.; Roman, Y.

    2008-01-01

    This work deals with repairable systems with unknown failure and repair time distributions. We focus on the estimation of the instantaneous availability, that is, the probability that the system is functioning at a given time, which we consider as the most significant measure for evaluating the effectiveness of a repairable system. The estimation of the availability function is not, in general, an easy task, i.e., analytical techniques are difficult to apply. We propose a smooth estimation of the availability based on kernel estimator of the cumulative distribution functions (CDF) of the failure and repair times, for which the bandwidth parameters are obtained by bootstrap procedures. The consistency properties of the availability estimator are established by using techniques based on the Laplace transform

  18. Non-parametric estimation of the availability in a general repairable system

    Gamiz, M.L. [Departamento de Estadistica e I.O., Facultad de Ciencias, Universidad de Granada, Granada 18071 (Spain)], E-mail: mgamiz@ugr.es; Roman, Y. [Departamento de Estadistica e I.O., Facultad de Ciencias, Universidad de Granada, Granada 18071 (Spain)

    2008-08-15

    This work deals with repairable systems with unknown failure and repair time distributions. We focus on the estimation of the instantaneous availability, that is, the probability that the system is functioning at a given time, which we consider as the most significant measure for evaluating the effectiveness of a repairable system. The estimation of the availability function is not, in general, an easy task, i.e., analytical techniques are difficult to apply. We propose a smooth estimation of the availability based on kernel estimator of the cumulative distribution functions (CDF) of the failure and repair times, for which the bandwidth parameters are obtained by bootstrap procedures. The consistency properties of the availability estimator are established by using techniques based on the Laplace transform.
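
    The kernel estimator of the CDF described above can be sketched as follows; the Gaussian kernel and the fixed bandwidth h = 0.1 are illustrative choices, not the authors' bootstrap-selected bandwidth:

```python
import math
import random

def kernel_cdf(data, t, h):
    """Gaussian-kernel (smooth) estimate of the CDF at t:
        F_hat(t) = (1/n) * sum_i Phi((t - x_i) / h),
    where Phi is the standard normal CDF."""
    phi = lambda u: 0.5 * (1.0 + math.erf(u / math.sqrt(2)))
    return sum(phi((t - x) / h) for x in data) / len(data)

# smoothed CDF of simulated unit-rate exponential repair times
rng = random.Random(1)
times = [rng.expovariate(1.0) for _ in range(2000)]
est = kernel_cdf(times, math.log(2), h=0.1)   # true CDF at the median is 0.5
```

    Unlike the empirical CDF, the kernel estimate is smooth in t, which is what makes the subsequent availability computation tractable.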

  19. Functions of the computer management games

    Kočí, Josef

    2016-01-01

    This thesis discusses the possibilities of using managerial games, their purpose, meaning, and functions, focusing specifically on management computer games: how they differ from classic games and what their advantages and disadvantages are. The theoretical part of the thesis also addresses why these games are discussed, why they are accepted or sometimes rejected, and why they have become so popular with some managers and public gamers. This is supported by a survey conducted on 11 April 20...

  20. Versatile Density Functionals for Computational Surface Science

    Wellendorff, Jess

    Density functional theory (DFT) emerged almost 50 years ago. Since then DFT has established itself as the central electronic structure methodology for simulating atomic-scale systems from a few atoms to a few hundred atoms. This success of DFT is due to a very favorable accuracy-to-computational-cost ratio. ... resampling techniques, thereby systematically avoiding problems with overfitting. The first ever density functional presenting both reliable accuracy and convincing error estimation is generated. The methodology is general enough to be applied to more complex functional forms with higher-dimensional fitting...

  1. New Computer Simulations of Macular Neural Functioning

    Ross, Muriel D.; Doshay, D.; Linton, S.; Parnas, B.; Montgomery, K.; Chimento, T.

    1994-01-01

    We use high performance graphics workstations and supercomputers to study the functional significance of the three-dimensional (3-D) organization of gravity sensors. These sensors have a prototypic architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, 3-D versions run on a Cray Y-MP supercomputer. A semi-automated method of reconstruction of neural tissue from serial sections studied in a transmission electron microscope has been developed to eliminate tedious conventional photography. The reconstructions use a mesh as a step in generating a neural surface for visualization. Two meshes are required to model calyx surfaces. The meshes are connected and the resulting prisms represent the cytoplasm and the bounding membranes. A finite volume analysis method is employed to simulate voltage changes along the calyx in response to synapse activation on the calyx or on calyceal processes. The finite volume method insures that charge is conserved at the calyx-process junction. These and other models indicate that efferent processes act as voltage followers, and that the morphology of some afferent processes affects their functioning. In a final application, morphological information is symbolically represented in three dimensions in a computer. The possible functioning of the connectivities is tested using mathematical interpretations of physiological parameters taken from the literature. Symbolic, 3-D simulations are in progress to probe the functional significance of the connectivities. This research is expected to advance computer-based studies of macular functioning and of synaptic plasticity.

  2. Computer network defense through radial wave functions

    Malloy, Ian J.

    The purpose of this research is to synthesize basic and fundamental findings in quantum computing, as applied to the attack and defense of conventional computer networks. The concept focuses on the use of radio waves as a shield for, and an attack against, traditional computers. A logic bomb is analogous to a landmine in a computer network, and if one were to implement it as a non-trivial mitigation, it would aid computer network defense. As has been seen in kinetic warfare, the use of landmines has been devastating to geopolitical regions in that they are severely difficult for a civilian to avoid triggering given the unknown position of a landmine. Thus, the importance of understanding a logic bomb is relevant and has corollaries to quantum mechanics as well. The research synthesizes quantum logic phase shifts in certain respects using the Dynamic Data Exchange protocol in software written for this work, as well as a C-NOT gate applied to a virtual quantum circuit environment by implementing a Quantum Fourier Transform. The research focus applies the principles of coherence and entanglement from quantum physics, the concept of expert systems in artificial intelligence, principles of prime-number-based cryptography with trapdoor functions, and modeling radio wave propagation against an event from unknown parameters. This comes as a program relying on the artificial intelligence concept of an expert system in conjunction with trigger events for a trapdoor function relying on infinite recursion, as well as system mechanics for elliptic curve cryptography along orbital angular momenta. Here trapdoor denotes both the form of cipher and the implied relationship to logic bombs.

  3. Non parametric, self organizing, scalable modeling of spatiotemporal inputs: the sign language paradigm.

    Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S

    2012-12-01

    Modeling and recognizing spatiotemporal, as opposed to static input, is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupling manner. Self Organizing Maps (SOM) model the spatial aspect of the problem and Markov models its temporal counterpart. Incorporation of adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs, performed by different, native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations both in terms of classification performance and computational cost.

  4. Non-parametric classification of esophagus motility by means of neural networks

    Thøgersen, C; Rasmussen, C; Rutz, K

    1997-01-01

    Automatic long-term recording of esophageal pressures by means of intraluminal transducers is used increasingly for evaluation of esophageal function. Most automatic analysis techniques are based on detection of derived parameters from the time series by means of arbitrary rule-based criteria. The aim of the present work has been to test the ability of neural networks to identify abnormal contraction patterns in patients with non-obstructive dysphagia (NOBD). Nineteen volunteers and 22 patients with NOBD underwent simultaneous recordings of four pressures in the esophagus for at least 23 hours...

  5. Non-parametric Bayesian graph models reveal community structure in resting state fMRI

    Andersen, Kasper Winther; Madsen, Kristoffer H.; Siebner, Hartwig Roman

    2014-01-01

    Modeling of resting state functional magnetic resonance imaging (rs-fMRI) data using network models is of increasing interest. It is often desirable to group nodes into clusters to interpret the communication patterns between nodes. In this study we consider three different nonparametric Bayesian models for node clustering in complex networks. In particular, we test their ability to predict unseen data and their ability to reproduce clustering across datasets. The three generative models considered are the Infinite Relational Model (IRM), Bayesian Community Detection (BCD), and the Infinite Diagonal Model (IDM). ... between clusters. BCD restricts the between-cluster link probabilities to be strictly lower than within-cluster link probabilities to conform to the community structure typically seen in social networks. IDM only models a single between-cluster link probability, which can be interpreted as a background

  6. Computational network design from functional specifications

    Peng, Chi Han

    2016-07-11

    Connectivity and layout of underlying networks largely determine agent behavior and usage in many environments. For example, transportation networks determine the flow of traffic in a neighborhood, whereas building floorplans determine the flow of people in a workspace. Designing such networks from scratch is challenging as even local network changes can have large global effects. We investigate how to computationally create networks starting from only high-level functional specifications. Such specifications can be in the form of network density, travel time versus network length, traffic type, destination location, etc. We propose an integer programming-based approach that guarantees that the resultant networks are valid by fulfilling all the specified hard constraints and that they score favorably in terms of the objective function. We evaluate our algorithm in two different design settings, street layout and floorplans to demonstrate that diverse networks can emerge purely from high-level functional specifications.

  7. Discrete Wigner functions and quantum computation

    Galvao, E.

    2005-01-01

    Full text: Gibbons et al. have recently defined a class of discrete Wigner functions W to represent quantum states in a finite Hilbert space dimension d. I characterize the set C_d of states having non-negative W simultaneously in all definitions of W in this class. I then argue that states in this set behave classically in a well-defined computational sense. I show that one-qubit states in C_2 do not provide for universal computation in a recent model proposed by Bravyi and Kitaev [quant-ph/0403025]. More generally, I show that the only pure states in C_d are stabilizer states, which have an efficient description using the stabilizer formalism. This result shows that two different notions of 'classical' states coincide: states with non-negative Wigner functions are those which have an efficient description. This suggests that negativity of W may be necessary for exponential speed-up in pure-state quantum computation. (author)

  8. A new non-parametric stationarity test of time series in the time domain

    Jin, Lei

    2014-11-07

    © 2015 The Royal Statistical Society and Blackwell Publishing Ltd. We propose a new double-order selection test for checking second-order stationarity of a time series. To develop the test, a sequence of systematic samples is defined via Walsh functions. Then the deviations of the autocovariances based on these systematic samples from the corresponding autocovariances of the whole time series are calculated and the uniform asymptotic joint normality of these deviations over different systematic samples is obtained. With a double-order selection scheme, our test statistic is constructed by combining the deviations at different lags in the systematic samples. The null asymptotic distribution of the statistic proposed is derived and the consistency of the test is shown under fixed and local alternatives. Simulation studies demonstrate well-behaved finite sample properties of the method proposed. Comparisons with some existing tests in terms of power are given both analytically and empirically. In addition, the method proposed is applied to check the stationarity assumption of a chemical process viscosity readings data set.
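
    The core ingredient of the test, deviations between subsample autocovariances and whole-series autocovariances, can be sketched simply. The paper defines its systematic samples via Walsh functions; the illustration below substitutes plain every-k-th subsampling, so it shows the flavor of the statistic rather than the actual construction.

```python
import numpy as np

# Flavor of the test's building block: under second-order stationarity, the
# autocovariances of systematic subsamples should not deviate much from those
# of the full series.  The paper builds its samples from Walsh functions;
# plain every-k-th subsampling is used here purely for illustration.
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)          # a stationary series

def autocov(y, lag):
    y = y - y.mean()
    return float(np.mean(y[:len(y) - lag] * y[lag:]))

k = 4                                   # number of systematic samples
deviations = {lag: [autocov(x[start::k], lag) - autocov(x, lag)
                    for start in range(k)]
              for lag in (0, 1)}
print(deviations)  # all entries close to 0 for a stationary series
```

    The actual test combines such deviations across lags and samples into a single statistic with a double-order selection scheme, and calibrates it against its asymptotic null distribution.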

  9. Numerical computation of generalized importance functions

    Gomit, J.M.; Nasr, M.; Ngyuen van Chi, G.; Pasquet, J.P.; Planchard, J.

    1981-01-01

    Thus far, an important effort has been devoted to developing and applying generalized perturbation theory in reactor physics analysis. In this work we are interested in the calculation of importance functions by the method of A. Gandini. We have noted that the convergence of the iterative procedure adopted in this method is not rapid. Hence, to accelerate this convergence, we have used the semi-iterative technique. Two computer codes have been developed for one- and two-dimensional calculations (SPHINX-1D and SPHINX-2D). The advantage of our calculation was confirmed by comparative tests in which the iteration number and the computing time were greatly reduced with respect to the classical calculation (CIAP-1D and CIAP-2D). (orig.)

  10. A new measure for gene expression biclustering based on non-parametric correlation.

    Flores, Jose L; Inza, Iñaki; Larrañaga, Pedro; Calvo, Borja

    2013-12-01

    One of the emerging techniques for analysing DNA microarray data, known as biclustering, is the search for subsets of genes and conditions which are coherently expressed. These subgroups provide clues about the main biological processes. Until now, different approaches to this problem have been proposed. Most of them use the mean squared residue as quality measure, but relevant and interesting patterns, such as shifting or scaling patterns, cannot be detected. Furthermore, recent papers show that there exist new coherence patterns involved in different kinds of cancers and tumors, such as inverse relationships between genes, which cannot be captured. The proposed measure, called Spearman's biclustering measure (SBM), estimates the quality of a bicluster based on the non-linear correlation among genes and conditions simultaneously. The search for biclusters is performed by an evolutionary technique called estimation of distribution algorithms, which uses the SBM measure as fitness function. This approach has been examined from different points of view using artificial and real microarrays. The assessment process involved the use of quality indexes, a set of reference bicluster patterns including new patterns, and a set of statistical tests. The performance was also examined on real microarrays, comparing against different algorithmic approaches such as Bimax, CC, OPSM, Plaid and xMotifs. SBM shows several advantages, such as the ability to recognize more complex coherence patterns (shifting, scaling and inversion) and the capability to selectively marginalize genes and conditions depending on statistical significance. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
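
    The reason rank correlation catches shifting, scaling and inversion patterns is that it only depends on the ordering of expression values. The sketch below shows just that core: Spearman's rho computed from ranks, averaged as |rho| over gene pairs of a toy bicluster. The full SBM aggregates such terms over genes and conditions simultaneously; the aggregation here is a simplification for illustration.

```python
import numpy as np

# Rank (Spearman) correlation, the core of the SBM measure: it scores monotone
# gene-gene relationships, so shifted, scaled, and inverted profiles all count
# as coherent.  The real SBM combines gene- and condition-wise terms; this
# sketch only averages pairwise |rho| over the genes of a candidate bicluster.
def spearman(a, b):
    ra = np.argsort(np.argsort(a)).astype(float)   # ranks (no ties here)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra, rb = ra - ra.mean(), rb - rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

bicluster = np.array([
    [1.0, 2.0, 3.0, 4.0],    # base expression profile
    [2.0, 4.0, 6.0, 8.0],    # scaled copy    -> rho = +1
    [5.0, 4.0, 3.0, 2.0],    # inverted copy  -> rho = -1, |rho| = 1
])
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]
score = float(np.mean([abs(spearman(bicluster[i], bicluster[j]))
                       for i, j in pairs]))
print(score)  # 1.0: perfectly coherent under monotone patterns
```

    A mean-squared-residue measure would penalize the scaled and inverted rows heavily, which is exactly the limitation the abstract points out.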

  11. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. A multi-instrument non-parametric reconstruction of the electron pressure profile in the galaxy cluster CLJ1226.9+3332

    Romero, C.; McWilliam, M.; Macías-Pérez, J.-F.; Adam, R.; Ade, P.; André, P.; Aussel, H.; Beelen, A.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Catalano, A.; Coiffard, G.; Comis, B.; de Petris, M.; Désert, F.-X.; Doyle, S.; Goupy, J.; Kramer, C.; Lagache, G.; Leclercq, S.; Lestrade, J.-F.; Mauskopf, P.; Mayet, F.; Monfardini, A.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Revéret, V.; Ritacco, A.; Roussel, H.; Ruppin, F.; Schuster, K.; Sievers, A.; Triqueneaux, S.; Tucker, C.; Zylka, R.

    2018-04-01

    Context: In the past decade, sensitive, resolved Sunyaev-Zel'dovich (SZ) studies of galaxy clusters have become common. Whereas many previous SZ studies have parameterized the pressure profiles of galaxy clusters, non-parametric reconstructions can provide insights into the thermodynamic state of the intracluster medium. Aims: We seek to recover the non-parametric pressure profile of the high-redshift (z = 0.89) galaxy cluster CLJ 1226.9+3332 as inferred from SZ data from the MUSTANG, NIKA, Bolocam, and Planck instruments, which all probe different angular scales. Methods: Our non-parametric algorithm makes use of logarithmic interpolation, which under the assumption of ellipsoidal symmetry is analytically integrable. For MUSTANG, NIKA, and Bolocam we derive a non-parametric pressure profile independently and find good agreement among the instruments. In particular, we find that the non-parametric profiles are consistent with a fitted generalized Navarro-Frenk-White (gNFW) profile. Given the ability of Planck to constrain the total signal, we include a prior on the integrated Compton Y parameter as determined by Planck. Results: For a given instrument, constraints on the pressure profile diminish rapidly beyond the field of view. The overlap in spatial scales probed by these four datasets is therefore critical in checking for consistency between instruments. By using multiple instruments, our analysis of CLJ 1226.9+3332 covers a large radial range, from the central regions to the cluster outskirts: 0.05 R500 generation of SZ instruments such as NIKA2 and MUSTANG2.

  13. Prediction intervals for future BMI values of individual children - a non-parametric approach by quantile boosting

    Mayr Andreas

    2012-01-01

    Full Text Available Background: The construction of prediction intervals (PIs) for future body mass index (BMI) values of individual children based on a recent German birth cohort study with n = 2007 children is problematic for standard parametric approaches, as the BMI distribution in childhood is typically skewed depending on age. Methods: We avoid distributional assumptions by directly modelling the borders of PIs by additive quantile regression, estimated by boosting. We point out the concept of conditional coverage to prove the accuracy of PIs. As conditional coverage can hardly be evaluated in practical applications, we conduct a simulation study before fitting child- and covariate-specific PIs for future BMI values and BMI patterns for the present data. Results: The results of our simulation study suggest that PIs fitted by quantile boosting cover future observations with the predefined coverage probability and outperform the benchmark approach. For the prediction of future BMI values, quantile boosting automatically selects informative covariates and adapts to the age-specific skewness of the BMI distribution. The lengths of the estimated PIs are child-specific and increase, as expected, with the age of the child. Conclusions: Quantile boosting is a promising approach to construct PIs with correct conditional coverage in a non-parametric way. It is particularly suitable for the prediction of BMI patterns depending on covariates, since it provides an interpretable predictor structure, inherent variable selection properties and can even account for longitudinal data structures.
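
    Quantile regression of any flavor, boosted or not, rests on the pinball (check) loss: for level tau, its minimizer over constant predictions is the empirical tau-quantile. The toy check below illustrates that on skewed synthetic data (a gamma sample standing in for skewed BMI values); the 5% and 95% fits then bound a distribution-free 90% prediction interval. This is the loss-function principle only, not the covariate-specific boosting of the paper.

```python
import numpy as np

# Pinball (check) loss underlying quantile regression: for level tau, the
# minimising constant is (close to) the empirical tau-quantile, so no
# distributional assumption is needed.  Gamma data mimic a skewed BMI-like
# distribution; this sketches the loss only, not the boosting machinery.
def pinball(y, pred, tau):
    r = y - pred
    return float(np.mean(np.where(r >= 0, tau * r, (tau - 1) * r)))

rng = np.random.default_rng(1)
y = rng.gamma(shape=2.0, scale=1.5, size=5000)       # skewed outcome
grid = np.linspace(y.min(), y.max(), 2001)           # candidate constants

bests = {tau: float(grid[np.argmin([pinball(y, c, tau) for c in grid])])
         for tau in (0.05, 0.95)}                    # PI borders
print(bests, np.quantile(y, [0.05, 0.95]))           # match up to grid spacing
```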

  14. Cliff's Delta Calculator: A non-parametric effect size program for two groups of observations

    Guillermo Macbeth

    2011-05-01

    Full Text Available The Cliff's Delta statistic is an effect size measure that quantifies the amount of difference between two non-parametric variables beyond the interpretation of p-values. This measure can be understood as a useful complementary analysis for the corresponding hypothesis testing. During the last two decades the use of effect size measures has been strongly encouraged by methodologists and leading institutions of behavioral sciences. The aim of this contribution is to introduce the Cliff's Delta Calculator software, which performs such analysis and offers some interpretation tips. Differences and similarities with the parametric case are analysed and illustrated. The implementation of this free program is fully described and compared with other calculators. Alternative algorithmic approaches are mathematically analysed and a basic linear algebra proof of their equivalence is formally presented. Two worked examples in cognitive psychology are discussed. A visual interpretation of Cliff's Delta is suggested. Availability, installation and applications of the program are presented and discussed.
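
    Cliff's delta itself is easy to state: the probability that a value drawn from one group exceeds a value from the other, minus the reverse probability. A direct dominance-count implementation follows; the algebraic reformulations the abstract mentions compute the same quantity more efficiently.

```python
# Cliff's delta for two independent groups: delta = P(X > Y) - P(X < Y),
# estimated by counting over all m*n pairs.  This is the direct O(m*n)
# dominance-matrix definition; faster algebraic equivalents exist.
def cliffs_delta(xs, ys):
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

print(cliffs_delta([1, 2, 3], [1, 2, 3]))   # 0.0  (complete overlap)
print(cliffs_delta([4, 5, 6], [1, 2, 3]))   # 1.0  (complete dominance)
print(cliffs_delta([1, 2, 4], [2, 3, 5]))   # -4/9 (partial overlap)
```

    Delta ranges from -1 to +1, with 0 indicating complete overlap between the two groups, which is what makes it a readable complement to a p-value.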

  15. Convergence in energy consumption per capita across the US states, 1970–2013: An exploration through selected parametric and non-parametric methods

    Mohammadi, Hassan; Ram, Rati

    2017-01-01

    Noting the paucity of studies of convergence in energy consumption across the US states, and the usefulness of a study that shares the spirit of the enormous research on convergence in energy-related variables in cross-country contexts, this paper explores convergence in per-capita energy consumption across the US states over the 44-year period 1970–2013. Several well-known parametric and non-parametric approaches are explored partly to shed light on the substantive question and partly to provide a comparative methodological perspective on these approaches. Several statements summarize the outcome of our explorations. First, the widely-used Barro-type regressions do not indicate beta-convergence during the entire period or any of several sub-periods. Second, lack of sigma-convergence is also noted in terms of standard deviation of logarithms and coefficient of variation, which do not show a decline between 1970 and 2013, but show slight upward trends. Third, kernel density function plots indicate some flattening of the distribution, which is consistent with the sigma-convergence results. Fourth, intra-distribution mobility ("gamma convergence") in terms of an index of rank concordance suggests a slow decline in the index. Fifth, the general impression from several types of panel and time-series unit-root tests is that of non-stationarity of the series and thus the lack of stochastic convergence during the period. Sixth, therefore, the overall impression seems to be that of the lack of convergence across states in per-capita energy consumption. The present interstate inequality in per-capita energy consumption may, therefore, reflect variations in structural factors and might not be expected to diminish.
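
    Sigma-convergence, one of the notions examined above, is simply a decline over time in cross-sectional dispersion. The sketch below computes the two dispersion measures the paper uses (standard deviation of logarithms and coefficient of variation) on a synthetic state-by-year panel built so that state differences persist, so the dispersion stays roughly flat, mirroring the paper's "no sigma-convergence" finding. The data are invented for illustration.

```python
import numpy as np

# Sigma-convergence check on a synthetic 50-state x 44-year panel of log
# per-capita energy consumption.  Persistent state effects dominate, so the
# cross-state dispersion stays flat: no sigma-convergence.  Data are
# illustrative, not the paper's.
rng = np.random.default_rng(2)
n_states, n_years = 50, 44
base = rng.normal(loc=5.0, scale=0.3, size=n_states)               # persistent differences
panel = base[:, None] + rng.normal(0.0, 0.05, (n_states, n_years)) # log consumption

sigma = panel.std(axis=0, ddof=1)        # SD of logs, per year
cv = sigma / panel.mean(axis=0)          # coefficient of variation, per year
print(round(float(sigma[0]), 3), round(float(sigma[-1]), 3))  # roughly equal
```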

  16. Use of NON-PARAMETRIC Item Response Theory to develop a shortened version of the Positive and Negative Syndrome Scale (PANSS)

    2011-01-01

    Background: Nonparametric item response theory (IRT) was used to examine (a) the performance of the 30 Positive and Negative Syndrome Scale (PANSS) items and their options (levels of severity), (b) the effectiveness of various subscales to discriminate among differences in symptom severity, and (c) the development of an abbreviated PANSS (Mini-PANSS) based on IRT and a method to link scores to the original PANSS. Methods: Baseline PANSS scores from 7,187 patients with Schizophrenia or Schizoaffective disorder who were enrolled between 1995 and 2005 in psychopharmacology trials were obtained. Option characteristic curves (OCCs) and Item Characteristic Curves (ICCs) were constructed to examine the probability of rating each of seven options within each of 30 PANSS items as a function of subscale severity, and summed-score linking was applied to items selected for the Mini-PANSS. Results: The majority of items forming the Positive and Negative subscales (i.e. 19 items) performed very well and discriminated better along symptom severity compared to the General Psychopathology subscale. Six of the seven Positive Symptom items, six of the seven Negative Symptom items, and seven out of the 16 General Psychopathology items were retained for inclusion in the Mini-PANSS. Summed score linking and linear interpolation was able to produce a translation table for comparing total subscale scores of the Mini-PANSS to total subscale scores on the original PANSS. Results show scores on the subscales of the Mini-PANSS can be linked to scores on the original PANSS subscales, with very little bias. Conclusions: The study demonstrated the utility of non-parametric IRT in examining the item properties of the PANSS and to allow selection of items for an abbreviated PANSS scale. The comparisons between the 30-item PANSS and the Mini-PANSS revealed that the shorter version is comparable to the 30-item PANSS, but when applying IRT, the Mini-PANSS is also a good indicator of illness severity.

  17. Use of non-parametric item response theory to develop a shortened version of the Positive and Negative Syndrome Scale (PANSS).

    Khan, Anzalee; Lewis, Charles; Lindenmayer, Jean-Pierre

    2011-11-16

    Nonparametric item response theory (IRT) was used to examine (a) the performance of the 30 Positive and Negative Syndrome Scale (PANSS) items and their options (levels of severity), (b) the effectiveness of various subscales to discriminate among differences in symptom severity, and (c) the development of an abbreviated PANSS (Mini-PANSS) based on IRT and a method to link scores to the original PANSS. Baseline PANSS scores from 7,187 patients with Schizophrenia or Schizoaffective disorder who were enrolled between 1995 and 2005 in psychopharmacology trials were obtained. Option characteristic curves (OCCs) and Item Characteristic Curves (ICCs) were constructed to examine the probability of rating each of seven options within each of 30 PANSS items as a function of subscale severity, and summed-score linking was applied to items selected for the Mini-PANSS. The majority of items forming the Positive and Negative subscales (i.e. 19 items) performed very well and discriminated better along symptom severity compared to the General Psychopathology subscale. Six of the seven Positive Symptom items, six of the seven Negative Symptom items, and seven out of the 16 General Psychopathology items were retained for inclusion in the Mini-PANSS. Summed score linking and linear interpolation was able to produce a translation table for comparing total subscale scores of the Mini-PANSS to total subscale scores on the original PANSS. Results show scores on the subscales of the Mini-PANSS can be linked to scores on the original PANSS subscales, with very little bias. The study demonstrated the utility of non-parametric IRT in examining the item properties of the PANSS and to allow selection of items for an abbreviated PANSS scale. The comparisons between the 30-item PANSS and the Mini-PANSS revealed that the shorter version is comparable to the 30-item PANSS, but when applying IRT, the Mini-PANSS is also a good indicator of illness severity.

  18. Transit Timing Observations from Kepler: II. Confirmation of Two Multiplanet Systems via a Non-parametric Correlation Analysis

    Ford, Eric B.; /Florida U.; Fabrycky, Daniel C.; /Lick Observ.; Steffen, Jason H.; /Fermilab; Carter, Joshua A.; /Harvard-Smithsonian Ctr. Astrophys.; Fressin, Francois; /Harvard-Smithsonian Ctr. Astrophys.; Holman, Matthew J.; /Harvard-Smithsonian Ctr. Astrophys.; Lissauer, Jack J.; /NASA, Ames; Moorhead, Althea V.; /Florida U.; Morehead, Robert C.; /Florida U.; Ragozzine, Darin; /Harvard-Smithsonian Ctr. Astrophys.; Rowe, Jason F.; /NASA, Ames /SETI Inst., Mtn. View /San Diego State U., Astron. Dept.

    2012-01-01

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the transit timing variations of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
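
    The flavor of such a non-parametric significance test can be sketched with a permutation calibration: correlate two TTV series, then compare the observed correlation against correlations obtained after randomly shuffling one series, which assumes no particular noise distribution. The paper's actual statistic differs in detail, and the anti-correlated synthetic TTVs below merely stand in for Kepler measurements.

```python
import numpy as np

# Permutation-calibrated correlation of two TTV series: no distributional
# assumption is made about the timing noise.  Synthetic anti-correlated TTVs
# (as gravitational interaction between a planet pair tends to produce)
# stand in for real Kepler data; this is not the paper's exact statistic.
rng = np.random.default_rng(3)
n = 40
signal = np.sin(np.linspace(0, 6 * np.pi, n))
ttv_inner = signal + 0.2 * rng.standard_normal(n)
ttv_outer = -signal + 0.2 * rng.standard_normal(n)   # anti-correlated partner

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

observed = corr(ttv_inner, ttv_outer)
perms = [corr(ttv_inner, rng.permutation(ttv_outer)) for _ in range(2000)]
p_value = float(np.mean([abs(c) >= abs(observed) for c in perms]))
print(observed, p_value)   # strong anti-correlation, tiny p-value
```

    A small p-value says the two candidates' timing wiggles are unlikely to line up by chance, i.e. the bodies plausibly perturb each other within one system; dynamical stability then bounds their masses.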

  19. TRANSIT TIMING OBSERVATIONS FROM KEPLER. II. CONFIRMATION OF TWO MULTIPLANET SYSTEMS VIA A NON-PARAMETRIC CORRELATION ANALYSIS

    Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C.; Fabrycky, Daniel C.; Steffen, Jason H.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David; Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A.; Welsh, William F.; Allen, Christopher; Batalha, Natalie M.; Buchhave, Lars A.

    2012-01-01

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.

  20. Non-parametric trend analysis of the aridity index for three large arid and semi-arid basins in Iran

    Ahani, Hossien; Kherad, Mehrzad; Kousari, Mohammad Reza; van Roosmalen, Lieke; Aryanfar, Ramin; Hosseini, Seyyed Mashaallah

    2013-05-01

    Currently, an important scientific challenge that researchers are facing is to gain a better understanding of climate change at the regional scale, which can be especially challenging in an area with low and highly variable precipitation amounts such as Iran. Trend analysis of the medium-term change using ground station observations of meteorological variables can enhance our knowledge of the dominant processes in an area and contribute to the analysis of future climate projections. Generally, studies focus on the long-term variability of temperature and precipitation and to a lesser extent on other important parameters such as moisture indices. In this study the recent 50-year trends (1955-2005) of precipitation (P), potential evapotranspiration (PET), and aridity index (AI) on a monthly time scale were studied over 14 synoptic stations in three large Iranian basins using the Mann-Kendall non-parametric test. Additionally, an analysis of the monthly, seasonal and annual trend of each parameter was performed. Results showed no significant trends in the monthly time series. However, PET showed significant, mostly decreasing trends, for the seasonal values, which resulted in a significant negative trend in annual PET at five stations. Significant negative trends in seasonal P values were only found at a number of stations in spring and summer and no station showed significant negative trends in annual P. Due to the varied positive and negative trends in annual P and to a lesser extent PET, almost as many stations with negative as positive trends in annual AI were found, indicating that both drying and wetting trends occurred in Iran. Overall, the northern part of the study area showed an increasing trend in annual AI, which meant that the region became wetter, while the south showed decreasing trends in AI.
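
    The Mann-Kendall test used above is rank-based and needs no distributional assumption: its S statistic counts concordant minus discordant pairs over time, and for moderate sample sizes S is approximately normal under the null of no trend. A minimal no-ties implementation (ties would require a variance correction, omitted here):

```python
import math

# Mann-Kendall non-parametric trend test (no-ties form).
# S = sum over pairs of sign(x_j - x_i), Var(S) = n(n-1)(2n+5)/18, and the
# continuity-corrected z = (S -/+ 1)/sqrt(Var(S)) is ~N(0,1) under no trend.
def mann_kendall(x):
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    z = 0.0 if s == 0 else (s - math.copysign(1, s)) / math.sqrt(var_s)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return s, z, p

rising = list(range(20))           # perfectly monotone series
s, z, p = mann_kendall(rising)
print(s, round(z, 2), p)           # S = 190 (all pairs concordant), p ~ 0
```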

  1. TRANSIT TIMING OBSERVATIONS FROM KEPLER. II. CONFIRMATION OF TWO MULTIPLANET SYSTEMS VIA A NON-PARAMETRIC CORRELATION ANALYSIS

    Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C. [Astronomy Department, University of Florida, 211 Bryant Space Sciences Center, Gainesville, FL 32611 (United States); Fabrycky, Daniel C. [UCO/Lick Observatory, University of California, Santa Cruz, CA 95064 (United States); Steffen, Jason H. [Fermilab Center for Particle Astrophysics, P.O. Box 500, MS 127, Batavia, IL 60510 (United States); Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A. [NASA Ames Research Center, Moffett Field, CA 94035 (United States); Welsh, William F. [Astronomy Department, San Diego State University, San Diego, CA 92182-1221 (United States); Allen, Christopher [Orbital Sciences Corporation/NASA Ames Research Center, Moffett Field, CA 94035 (United States); Batalha, Natalie M. [Department of Physics and Astronomy, San Jose State University, San Jose, CA 95192 (United States); Buchhave, Lars A., E-mail: eford@astro.ufl.edu [Niels Bohr Institute, Copenhagen University, DK-2100 Copenhagen (Denmark); Collaboration: Kepler Science Team; and others

    2012-05-10

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.

  2. Proposing a framework for airline service quality evaluation using Type-2 Fuzzy TOPSIS and non-parametric analysis

    Navid Haghighat

    2017-12-01

    Full Text Available This paper focuses on evaluating airline service quality from the perspective of passengers' views. Much research on airline service quality evaluation has been performed worldwide, but little has yet been conducted in Iran. In this study, a framework for measuring airline service quality in Iran is proposed. After reviewing airline service quality criteria, the SSQAI model was selected because of its comprehensiveness in covering airline service quality dimensions. SSQAI questionnaire items were redesigned to fit the requirements of Iranian airlines and the environmental circumstances of Iran's economic and cultural context. This study applies fuzzy decision-making theory, considering the possible fuzzy subjective judgment of the evaluators during airline service quality evaluation. Fuzzy TOPSIS has been applied for ranking airlines' service quality performance. Three major Iranian airlines with the largest passenger transfer volumes in domestic and foreign flights were chosen for evaluation in this research. Results demonstrated that Mahan airline achieved the best service quality performance rank in gaining passengers' satisfaction with the delivery of high-quality services to its passengers, among the three major Iranian airlines. IranAir and Aseman airlines placed second and third, respectively, according to passengers' evaluations. Statistical analysis has been used in analyzing passenger responses. Due to the non-normality of the data, non-parametric tests were applied. To demonstrate airline ranks on every criterion separately, the Friedman test was performed. Variance analysis and the Tukey test were applied to study the influence of increases in age and educational level of passengers on their degree of satisfaction with airlines' service quality. Results showed that age has no significant relation to passenger satisfaction with airlines; however, increasing educational level demonstrated a negative impact on
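
    The Friedman test mentioned above compares k related samples (here, the same passengers rating k airlines) by ranking within each respondent and testing whether rank sums differ. A minimal sketch on invented ratings, not the paper's data; ties are ignored for simplicity:

```python
# Friedman test sketch: rank the k airlines within each passenger, then
# chi2 = 12/(n k (k+1)) * sum(R_j^2) - 3 n (k+1)  on the column rank sums R_j.
# Toy data (4 passengers x 3 airlines, higher = better); no ties handled.
def rank_row(row):
    order = sorted(range(len(row)), key=lambda i: row[i])
    ranks = [0.0] * len(row)
    for r, i in enumerate(order, start=1):
        ranks[i] = float(r)                 # rank 1 = worst rating
    return ranks

scores = [
    [7, 5, 3],
    [8, 6, 4],
    [9, 5, 2],
    [6, 4, 1],
]
n, k = len(scores), len(scores[0])
rank_sums = [sum(col) for col in zip(*map(rank_row, scores))]
chi2 = 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)
print(rank_sums, chi2)   # [12.0, 8.0, 4.0], chi2 = 8.0 (maximal for n=4, k=3)
```

    Here every passenger ranks the airlines identically, so the statistic reaches its maximum n(k-1) = 8; it would be referred to a chi-squared distribution with k-1 degrees of freedom.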

  3. International assessment of functional computer abilities

    Anderson, Ronald E.; Collis, Betty

    1993-01-01

    After delineating the major rationale for computer education, data are presented from Stage 1 of the IEA Computers in Education Study showing international comparisons that may reflect differential priorities. Rapid technological change and the lack of consensus on goals of computer education impedes the establishment of stable curricula for ¿general computer education¿ or computer literacy. In this context the construction of instruments for student assessment remains a challenge. Seeking to...

  4. Power of non-parametric linkage analysis in mapping genes contributing to human longevity in long-lived sib-pairs

    Tan, Qihua; Zhao, J H; Iachine, I

    2004-01-01

    This report investigates the power issue in applying the non-parametric linkage analysis of affected sib-pairs (ASP) [Kruglyak and Lander, 1995: Am J Hum Genet 57:439-454] to localize genes that contribute to human longevity using long-lived sib-pairs. Data were simulated by introducing a recently developed statistical model for measuring marker-longevity associations [Yashin et al., 1999: Am J Hum Genet 65:1178-1193], enabling direct power comparison between linkage and association approaches. The non-parametric linkage (NPL) scores estimated in the region harboring the causal allele are evaluated in case of a dominant effect. Although the power issue may depend heavily on the true genetic nature in maintaining survival, our study suggests that results from small-scale sib-pair investigations should be interpreted with caution, given the complexity of human longevity.

  5. Dependence between fusion temperatures and chemical components of a certain type of coal using classical, non-parametric and bootstrap techniques

    Gonzalez-Manteiga, W.; Prada-Sanchez, J.M.; Fiestras-Janeiro, M.G.; Garcia-Jurado, I. (Universidad de Santiago de Compostela, Santiago de Compostela (Spain). Dept. de Estadistica e Investigacion Operativa)

    1990-11-01

    A statistical study of the dependence between various critical fusion temperatures of a certain kind of coal and its chemical components is carried out. As well as using classical dependence techniques (multiple, stepwise and PLS regression, principal components, canonical correlation, etc.) together with the corresponding inference on the parameters of interest, non-parametric regression and bootstrap inference are also performed. 11 refs., 3 figs., 8 tabs.
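
    Bootstrap inference of the kind this study applies alongside the classical techniques can be sketched for a single regression coefficient: resample (x, y) pairs with replacement and read a confidence interval off the percentiles of the resampled slopes, with no normality assumption on the errors. The synthetic "component percentage vs fusion temperature" data below are purely illustrative.

```python
import numpy as np

# Pairs bootstrap for a regression slope: no parametric assumption about the
# error distribution is needed.  Synthetic data stand in for the coal
# measurements (true slope set to 15 for illustration).
rng = np.random.default_rng(4)
n = 60
x = rng.uniform(0, 10, n)                               # e.g. % of a chemical component
y = 1200 + 15 * x + 20 * rng.standard_normal(n)         # fusion temperature (deg C)

def slope(xs, ys):
    xs = xs - xs.mean()
    return float(xs @ (ys - ys.mean()) / (xs @ xs))     # least-squares slope

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                         # resample pairs
    boot.append(slope(x[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(slope(x, y), (float(lo), float(hi)))              # point estimate and 95% CI
```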

  6. Functional requirements for gas characterization system computer software

    Tate, D.D.

    1996-01-01

    This document provides the Functional Requirements for the Computer Software operating the Gas Characterization System (GCS), which monitors the combustible gases in the vapor space of selected tanks. Necessary computer functions are defined to support design, testing, operation, and change control. The GCS requires several individual computers to address the control and data acquisition functions of instruments and sensors. These computers are networked for communication, and must multi-task to accommodate operation in parallel.

  7. International assessment of functional computer abilities

    Anderson, Ronald E.; Collis, Betty

    1993-01-01

    After delineating the major rationale for computer education, data are presented from Stage 1 of the IEA Computers in Education Study showing international comparisons that may reflect differential priorities. Rapid technological change and the lack of consensus on goals of computer education

  8. A summary of numerical computation for special functions

    Zhang Shanjie

    1992-01-01

    In the paper, special functions frequently encountered in science and engineering calculations are introduced. The computation of the values of Bessel functions and elliptic integrals is taken as an example, and some common algorithms for computing most special functions, such as series expansion for small argument, asymptotic approximations for large argument, polynomial approximations, recurrence formulas and iteration methods, are discussed. In addition, the determination of zeros of some special functions and other questions related to numerical computation are also discussed.

  9. Non-parametric data-based approach for the quantification and communication of uncertainties in river flood forecasts

    Van Steenbergen, N.; Willems, P.

    2012-04-01

    Reliable flood forecasts are the most important non-structural measures to reduce the impact of floods. However, flood forecasting systems are subject to uncertainty originating from the input data, model structure and model parameters of the different hydraulic and hydrological submodels. To quantify this uncertainty a non-parametric data-based approach has been developed. This approach analyses the historical forecast residuals (differences between the predictions and the observations at river gauging stations) without using a predefined statistical error distribution. Because the residuals are correlated with the value of the forecasted water level and the lead time, the residuals are split up into discrete classes of simulated water levels and lead times. For each class, percentile values of the model residuals are calculated and stored in a 'three-dimensional error' matrix. By 3D interpolation in this error matrix, the uncertainty in new forecasted water levels can be quantified. In addition to the quantification of the uncertainty, the communication of this uncertainty is equally important. The communication has to be done in a consistent way, reducing the chance of misinterpretation. Also, the communication needs to be adapted to the audience; most of the general public are not interested in in-depth information on the uncertainty of the predicted water levels, but only in the likelihood of exceedance of certain alarm levels. Water managers need more information, e.g. time-dependent uncertainty information, because they rely on this information to undertake the appropriate flood mitigation action. There are various ways of presenting uncertainty information (numerical, linguistic, graphical, time (in)dependent, etc.), each with their advantages and disadvantages for a specific audience. A useful method to communicate uncertainty of flood forecasts is probabilistic flood mapping. These maps give a representation of the
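
A rough sketch of the residual-percentile idea described above, with hypothetical simulated data, and a nearest-class lookup standing in for the paper's 3D interpolation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forecast archive: forecasted level (m), lead time (h), residual (m).
level = rng.uniform(0.0, 5.0, 5000)
lead = rng.integers(1, 49, 5000).astype(float)
# Residual spread grows with level and lead time (assumed, for illustration).
resid = rng.normal(0.0, 0.05 + 0.02 * level + 0.002 * lead)

level_edges = np.linspace(0.0, 5.0, 6)    # 5 water-level classes
lead_edges = np.linspace(0.0, 48.0, 5)    # 4 lead-time classes
pcts = [5.0, 50.0, 95.0]

# "Three-dimensional error" matrix: residual percentiles per (level, lead) class.
err = np.full((len(level_edges) - 1, len(lead_edges) - 1, len(pcts)), np.nan)
for i in range(len(level_edges) - 1):
    for j in range(len(lead_edges) - 1):
        mask = ((level >= level_edges[i]) & (level < level_edges[i + 1]) &
                (lead >= lead_edges[j]) & (lead < lead_edges[j + 1]))
        if mask.any():
            err[i, j] = np.percentile(resid[mask], pcts)

# Uncertainty band for a new forecast (level 3.2 m, lead 24 h): class lookup.
i = np.clip(np.searchsorted(level_edges, 3.2) - 1, 0, err.shape[0] - 1)
j = np.clip(np.searchsorted(lead_edges, 24.0) - 1, 0, err.shape[1] - 1)
p5, p50, p95 = err[i, j]
print(f"90% band around forecast: [{3.2 + p5:.2f}, {3.2 + p95:.2f}] m")
```

Because only empirical percentiles are stored, no error distribution is assumed, which is the non-parametric aspect the abstract emphasizes.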

  10. Accurate Traffic Flow Prediction in Heterogeneous Vehicular Networks in an Intelligent Transport System Using a Supervised Non-Parametric Classifier

    Hesham El-Sayed

    2018-05-01

    Full Text Available Heterogeneous vehicular networks (HETVNETs) evolve from vehicular ad hoc networks (VANETs), which allow vehicles to always be connected so as to obtain safety services within intelligent transportation systems (ITSs). The services and data provided by HETVNETs should be neither interrupted nor delayed. Therefore, Quality of Service (QoS) improvement of HETVNETs is one of the topics attracting the attention of researchers and the manufacturing community. Several methodologies and frameworks have been devised by researchers to address QoS-prediction service issues. In this paper, to improve QoS, we evaluate various traffic characteristics of HETVNETs and propose a new supervised learning model to capture knowledge on all possible traffic patterns. This model is a refinement of support vector machine (SVM) kernels with a radial basis function (RBF). The proposed model produces better results than SVMs, and outperforms other prediction methods used in a traffic context, as it has lower computational complexity and higher prediction accuracy.

  11. Computer program for Bessel and Hankel functions

    Kreider, Kevin L.; Saule, Arthur V.; Rice, Edward J.; Clark, Bruce J.

    1991-01-01

    A set of FORTRAN subroutines for calculating Bessel and Hankel functions is presented. The routines calculate Bessel and Hankel functions of the first and second kinds, as well as their derivatives, for wide ranges of integer order and real or complex argument in single or double precision. Depending on the order and argument, one of three evaluation methods is used: the power series definition, an Airy function expansion, or an asymptotic expansion. Routines to calculate Airy functions and their derivatives are also included.
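
The FORTRAN routines themselves are not reproduced here, but the same quantities can be cross-checked against a modern library. A small sketch (assuming SciPy is available) verifies two standard Bessel/Hankel identities that any correct implementation must satisfy:

```python
import numpy as np
from scipy import special

x = 1.5
for order in range(4):
    J = special.jv(order, x)        # Bessel function of the first kind
    Y = special.yv(order, x)        # Bessel function of the second kind
    H1 = special.hankel1(order, x)  # Hankel function of the first kind
    dJ = special.jvp(order, x)      # derivative of J
    dY = special.yvp(order, x)      # derivative of Y
    # H1 is, by definition, J + iY.
    assert np.isclose(H1, J + 1j * Y)
    # Wronskian identity: J*Y' - J'*Y = 2/(pi*x).
    assert np.isclose(J * dY - dJ * Y, 2.0 / (np.pi * x))
print("Bessel/Hankel identities verified at x =", x)
```

Identity checks like these are a cheap regression test when switching evaluation methods (power series, Airy expansion, asymptotic expansion) across order/argument regimes.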

  12. Special software for computing the special functions of wave catastrophes

    Andrey S. Kryukovsky

    2015-01-01

    Full Text Available The method of ordinary differential equations in the context of calculating the special functions of wave catastrophes is considered. Complementary numerical methods and algorithms are described. The paper shows approaches to accelerate such calculations using capabilities of modern computing systems. Methods for calculating the special functions of wave catastrophes are considered in the framework of parallel computing and distributed systems. The paper covers the development process of special software for calculating special functions, and questions of portability, extensibility and interoperability.

  13. Accurate and efficient computation of synchrotron radiation functions

    MacLeod, Allan J.

    2000-01-01

    We consider the computation of three functions which appear in the theory of synchrotron radiation. These are F(x) = x∫_x^∞ K_{5/3}(y) dy, F_p(x) = x K_{2/3}(x), and G_p(x) = x^{1/3} K_{1/3}(x), where K_ν denotes a modified Bessel function. Chebyshev series coefficients are given which enable the functions to be computed with an accuracy of up to 15 significant figures.
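
These three functions are straightforward to evaluate by direct quadrature, which makes a useful cross-check for any Chebyshev-series implementation. A sketch using SciPy (not the paper's method):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def F(x):
    # Synchrotron function F(x) = x * integral_x^inf K_{5/3}(y) dy.
    integral, _ = quad(lambda y: kv(5.0 / 3.0, y), x, np.inf)
    return x * integral

def Fp(x):
    # F_p(x) = x * K_{2/3}(x).
    return x * kv(2.0 / 3.0, x)

def Gp(x):
    # G_p(x) = x^(1/3) * K_{1/3}(x).
    return x ** (1.0 / 3.0) * kv(1.0 / 3.0, x)

for x in (0.1, 0.29, 1.0, 5.0):
    print(f"x={x:5.2f}  F={F(x):.6f}  Fp={Fp(x):.6f}  Gp={Gp(x):.6f}")
```

Quadrature of the exponentially decaying K_{5/3} converges quickly, but a series implementation like the paper's avoids the integral entirely and is far faster in hot loops.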

  14. On Rigorous Drought Assessment Using Daily Time Scale: Non-Stationary Frequency Analyses, Revisited Concepts, and a New Method to Yield Non-Parametric Indices

    Charles Onyutha

    2017-10-01

    Full Text Available Some of the problems in drought assessments are that analyses tend to focus on coarse temporal scales, many of the methods yield skewed indices, a few terminologies are ambiguously used, and analyses comprise an implicit assumption that the observations come from a stationary process. To solve these problems, this paper introduces non-stationary frequency analyses of quantiles. How to use non-parametric rescaling to obtain robust indices that are not (or minimally) skewed is also introduced. To avoid ambiguity, some concepts on, e.g., incidence, extremity, etc., were revisited through a shift from monthly to daily time scale. Demonstrations of the introduced methods were made using daily flow and precipitation insufficiency (precipitation minus potential evapotranspiration) from the Blue Nile basin in Africa. Results show that, when a significant trend exists in extreme events, stationarity-based quantiles can be far different from those when non-stationarity is considered. The introduced non-parametric indices were found to closely agree with the well-known standardized precipitation evapotranspiration indices in many aspects but skewness. Apart from revisiting some concepts, the advantages of using fine instead of coarse time scales in drought assessment are given. Links to freely downloadable tools implementing the introduced methods are provided.
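
The non-parametric rescaling idea can be sketched as an empirical-probability (rank-based) transform to normal scores. This is an illustration with simulated data, not the paper's exact procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical daily precipitation-minus-PET series (skewed by construction).
deficit = rng.gamma(2.0, 10.0, size=3650) - 25.0

# Non-parametric rescaling: empirical probabilities via a plotting position,
# then mapped through the standard-normal quantile function.
n = len(deficit)
ranks = stats.rankdata(deficit)
prob = ranks / (n + 1.0)          # Weibull plotting position, avoids 0 and 1
index = stats.norm.ppf(prob)      # rescaled, (near-)unskewed drought index

print(f"raw skewness      = {stats.skew(deficit):+.3f}")
print(f"rescaled skewness = {stats.skew(index):+.3f}")
```

Because the transform uses only ranks, no parametric distribution is fitted to the deficits, and the resulting index is essentially unskewed by construction.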

  15. A large-scale evaluation of computational protein function prediction

    Radivojac, P.; Clark, W.T.; Oron, T.R.; Schnoes, A.M.; Wittkop, T.; Kourmpetis, Y.A.I.; Dijk, van A.D.J.; Friedberg, I.

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be

  16. Numerical computation of special functions with applications to physics

    Motsepe, K

    2008-09-01

    Full Text Available Students of mathematical physics, engineering, natural and biological sciences sometimes need to use special functions that are not found in ordinary mathematical software. In this paper a simple universal numerical algorithm is developed to compute...

  17. Supporting executive functions during children's preliteracy learning with the computer

    Sande, E. van de; Segers, P.C.J.; Verhoeven, L.T.W.

    2016-01-01

    The present study examined how embedded activities to support executive functions helped children to benefit from a computer intervention that targeted preliteracy skills. Three intervention groups were compared on their preliteracy gains in a randomized controlled trial design: an experimental

  18. Computer Forensic Function Testing: Media Preparation, Write Protection And Verification

    Yinghua (David) Guo

    2010-06-01

    Full Text Available The growth in the computer forensic field has created a demand for new software (or increased functionality in existing software) and a means to verify that this software is truly forensic, i.e., capable of meeting the requirements of the trier of fact. In this work, we review our previous work: a function-oriented testing framework for validation and verification of computer forensic tools. This framework consists of three parts: function mapping, requirements specification and reference set development. Through function mapping, we give a scientific and systematic description of the fundamentals of the computer forensic discipline, i.e., what functions are needed in the computer forensic investigation process. We focus this paper on the functions of media preparation, write protection and verification. Specifically, we complete the function mapping of these functions and specify their requirements. Based on this work, future work can be conducted to develop corresponding reference sets to test any tools that possess these functions.

  19. Short-term monitoring of benzene air concentration in an urban area: a preliminary study of application of Kruskal-Wallis non-parametric test to assess pollutant impact on global environment and indoor.

    Mura, Maria Chiara; De Felice, Marco; Morlino, Roberta; Fuselli, Sergio

    2010-01-01

    In step with the need to develop statistical procedures to manage small-size environmental samples, in this work we have used concentration values of benzene (C6H6), concurrently detected by seven outdoor and indoor monitoring stations over 12 000 minutes, in order to assess the representativeness of collected data and the impact of the pollutant on indoor environment. Clearly, the former issue is strictly connected to sampling-site geometry, which proves critical to correctly retrieving information from analysis of pollutants of sanitary interest. Therefore, according to current criteria for network-planning, single stations have been interpreted as nodes of a set of adjoining triangles; then, a) node pairs have been taken into account in order to estimate pollutant stationarity on triangle sides, as well as b) node triplets, to statistically associate data from air-monitoring with the corresponding territory area, and c) node sextuplets, to assess the impact probability of the outdoor pollutant on indoor environment for each area. Distributions from the various node combinations are all non-Gaussian; consequently, Kruskal-Wallis (KW) non-parametric statistics has been exploited to test variability on the continuous density function from each pair, triplet and sextuplet. Results from the above-mentioned statistical analysis have shown randomness of site selection, which has not allowed a reliable generalization of monitoring data to the entire selected territory, except for a single "forced" case (70%); most important, they suggest a possible procedure to optimize network design.
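
The Kruskal-Wallis test itself is available in standard libraries; a minimal illustration with simulated (hypothetical) station data, assuming SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical benzene concentrations (ug/m3) at three monitoring stations.
station_a = rng.lognormal(mean=1.0, sigma=0.4, size=60)
station_b = rng.lognormal(mean=1.0, sigma=0.4, size=60)
station_c = rng.lognormal(mean=1.6, sigma=0.4, size=60)   # elevated site

# Kruskal-Wallis H-test: do the samples come from the same distribution?
# It is rank-based, so no Gaussian assumption is needed.
h_same, p_same = stats.kruskal(station_a, station_b)
h_diff, p_diff = stats.kruskal(station_a, station_b, station_c)

print(f"A vs B:      H={h_same:.2f}, p={p_same:.3f}")
print(f"A vs B vs C: H={h_diff:.2f}, p={p_diff:.3g}")
```

The rank basis of the test is exactly why it suits the non-Gaussian node-combination distributions described in the abstract.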

  20. Short-term monitoring of benzene air concentration in an urban area: a preliminary study of application of Kruskal-Wallis non-parametric test to assess pollutant impact on global environment and indoor

    Maria Chiara Mura

    2010-12-01

    Full Text Available In step with the need to develop statistical procedures to manage small-size environmental samples, in this work we have used concentration values of benzene (C6H6), concurrently detected by seven outdoor and indoor monitoring stations over 12 000 minutes, in order to assess the representativeness of collected data and the impact of the pollutant on indoor environment. Clearly, the former issue is strictly connected to sampling-site geometry, which proves critical to correctly retrieving information from analysis of pollutants of sanitary interest. Therefore, according to current criteria for network-planning, single stations have been interpreted as nodes of a set of adjoining triangles; then, a) node pairs have been taken into account in order to estimate pollutant stationarity on triangle sides, as well as b) node triplets, to statistically associate data from air-monitoring with the corresponding territory area, and c) node sextuplets, to assess the impact probability of the outdoor pollutant on indoor environment for each area. Distributions from the various node combinations are all non-Gaussian; consequently, Kruskal-Wallis (KW) non-parametric statistics has been exploited to test variability on the continuous density function from each pair, triplet and sextuplet. Results from the above-mentioned statistical analysis have shown randomness of site selection, which has not allowed a reliable generalization of monitoring data to the entire selected territory, except for a single "forced" case (70%); most important, they suggest a possible procedure to optimize network design.

  1. Search of significant features in a direct non parametric pattern recognition method. Application to the classification of a multiwire spark chamber picture

    Buccheri, R.; Coffaro, P.; Di Gesu, V.; Salemi, S.; Colomba, G.

    1975-01-01

    Preliminary results are given of the application of a direct non-parametric pattern recognition method to the classification of the pictures of a multiwire spark chamber. The method, developed in an earlier work for an optical spark chamber, looks promising. The picture sample used has, with respect to the previous one, the following characteristics: a) the event pictures have a more complicated structure; b) the amount of background sparks in an event is greater; c) there exists a kind of noise which is almost always present in some structured way (double sparking, bursts...). New features have been used to characterize the event pictures; the results show that the method could also be used as a super filter to reduce the cost of further analysis. (Auth.)

  2. Algebraic Functions, Computer Programming, and the Challenge of Transfer

    Schanzer, Emmanuel Tanenbaum

    2015-01-01

    Students' struggles with algebra are well documented. Prior to the introduction of functions, mathematics is typically focused on applying a set of arithmetic operations to compute an answer. The introduction of functions, however, marks the point at which mathematics begins to focus on building up abstractions as a way to solve complex problems.…

  3. Numerical computation of aeroacoustic transfer functions for realistic airfoils

    De Santana, Leandro Dantas; Miotto, Renato Fuzaro; Wolf, William Roberto

    2017-01-01

    Based on Amiet's theory formalism, we propose a numerical framework to compute the aeroacoustic transfer function of realistic airfoil geometries. The aeroacoustic transfer function relates the amplitude and phase of an incoming periodic gust to the respective unsteady lift response permitting,

  4. Geometric optical transfer function and its computation method

    Wang Qi

    1992-01-01

    The geometric optical transfer function formula is derived after expounding some easily overlooked points, and its computation method is given, using the zero-order Bessel function, numerical integration and spline interpolation. The method is advantageous for ensuring accuracy and saving computation.

  5. A computer program for the pointwise functions generation

    Caldeira, Alexandre D.

    1995-01-01

    A computer program that was developed with the objective of generating pointwise functions, by a combination of tabulated values and/or mathematical expressions, to be used as weighting functions for nuclear data is presented. This simple program can be an important tool for researchers involved in group constants generation. (author). 5 refs, 2 figs

  6. Positive Wigner functions render classical simulation of quantum computation efficient.

    Mari, A; Eisert, J

    2012-12-07

    We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable and discrete-variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.

  7. Inferring biological functions of guanylyl cyclases with computational methods

    Alquraishi, May Majed; Meier, Stuart Kurt

    2013-01-01

    A number of studies have shown that functionally related genes are often co-expressed and that computationally based co-expression analysis can be used to accurately identify functional relationships between genes and, by inference, their encoded proteins. Here we describe how a computationally based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.

  8. Inferring biological functions of guanylyl cyclases with computational methods

    Alquraishi, May Majed

    2013-09-03

    A number of studies have shown that functionally related genes are often co-expressed and that computationally based co-expression analysis can be used to accurately identify functional relationships between genes and, by inference, their encoded proteins. Here we describe how a computationally based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.

  9. Non-parametric identification of multivariable systems : a local rational modeling approach with application to a vibration isolation benchmark

    Voorhoeve, R.J.; van der Maas, A.; Oomen, T.A.J.

    2018-01-01

    Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF

  10. The Computational Processing of Intonational Prominence: A Functional Prosody Perspective

    Nakatani, Christine Hisayo

    1997-01-01

    Intonational prominence, or accent, is a fundamental prosodic feature that is said to contribute to discourse meaning. This thesis outlines a new, computational theory of the discourse interpretation of prominence, from a FUNCTIONAL PROSODY perspective. Functional prosody makes the following two important assumptions: first, there is an aspect of prominence interpretation that centrally concerns discourse processes, namely the discourse focusing nature of prominence; and second, the role of p...

  11. Quantum computing without wavefunctions: time-dependent density functional theory for universal quantum computation.

    Tempel, David G; Aspuru-Guzik, Alán

    2012-01-01

    We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.

  12. Computational design of proteins with novel structure and functions

    Yang Wei; Lai Lu-Hua

    2016-01-01

    Computational design of proteins is a relatively new field, where scientists search the enormous sequence space for sequences that can fold into desired structure and perform desired functions. With the computational approach, proteins can be designed, for example, as regulators of biological processes, novel enzymes, or as biotherapeutics. These approaches not only provide valuable information for understanding of sequence–structure–function relations in proteins, but also hold promise for applications to protein engineering and biomedical research. In this review, we briefly introduce the rationale for computational protein design, then summarize the recent progress in this field, including de novo protein design, enzyme design, and design of protein–protein interactions. Challenges and future prospects of this field are also discussed. (topical review)

  13. Fast computation of complete elliptic integrals and Jacobian elliptic functions

    Fukushima, Toshio

    2009-12-01

    As a preparation step to compute Jacobian elliptic functions efficiently, we created a fast method to calculate the complete elliptic integral of the first and second kinds, K(m) and E(m), for the standard domain of the elliptic parameter, 0 < m < 1. We then developed a procedure to compute simultaneously three Jacobian elliptic functions, sn(u|m), cn(u|m), and dn(u|m), by repeated usage of the double argument formulae starting from the Maclaurin series expansions with respect to the elliptic argument, u, after its domain is reduced to the standard range, 0 ≤ u < K(m). The new procedure is 25-70% faster than the methods based on the Gauss transformation such as Bulirsch's algorithm, sncndn, quoted in the Numerical Recipes, even if the acceleration of computation of K(m) is not taken into account.
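
For comparison, SciPy exposes the same quantities (via algorithms of its own, not the paper's); a short sketch verifying standard identities that any implementation must satisfy:

```python
import numpy as np
from scipy import special

m = 0.7   # elliptic parameter, 0 <= m < 1

# Complete elliptic integrals of the first and second kinds.
K = special.ellipk(m)
E = special.ellipe(m)

# Legendre's relation ties K and E at m and 1-m together.
K1, E1 = special.ellipk(1.0 - m), special.ellipe(1.0 - m)
assert np.isclose(E * K1 + E1 * K - K * K1, np.pi / 2.0)

# Jacobian elliptic functions sn, cn, dn at a point in 0 <= u < K(m).
u = 0.4 * K
sn, cn, dn, _ = special.ellipj(u, m)
assert np.isclose(sn**2 + cn**2, 1.0)            # fundamental identity
assert np.isclose(dn**2, 1.0 - m * sn**2)        # second identity
print(f"K({m})={K:.6f}, E({m})={E:.6f}, sn={sn:.6f}, cn={cn:.6f}, dn={dn:.6f}")
```

Identity checks like these make convenient unit tests when benchmarking a faster routine (such as the paper's double-argument scheme) against a reference library.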

  14. Computing exact bundle compliance control charts via probability generating functions.

    Chen, Binchao; Matis, Timothy; Benneyan, James

    2016-06-01

    Compliance with evidence-based practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as for risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
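
The probability-generating-function approach can be sketched for the simplest case: the number of compliant items in one bundle is Poisson-binomial, and multiplying the per-item PGFs (a polynomial convolution) yields its exact distribution. The probabilities below are hypothetical:

```python
import numpy as np

# Hypothetical per-item compliance probabilities for a 4-item care bundle.
p = [0.95, 0.90, 0.85, 0.80]

# PGF of each item's indicator: (1 - p_i) + p_i * z.
# The PGF of the bundle score is the product; its coefficients give the exact
# (Poisson-binomial) distribution of the number of compliant items.
pmf = np.array([1.0])                       # PGF of the constant 0
for pi in p:
    pmf = np.convolve(pmf, [1.0 - pi, pi])  # multiply in one item's PGF

# pmf[k] = P(exactly k of the 4 items are compliant)
for k, prob in enumerate(pmf):
    print(f"P(k={k}) = {prob:.6f}")
print(f"P(all items met) = {pmf[-1]:.6f}")
```

Each convolution is exact and cheap, so tail probabilities needed for control-chart limits come out without the accuracy loss of series approximations.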

  15. A cross-country non parametric estimation of the returns to factors of production and the elasticity of scale

    Adalmir Marquetti

    2009-06-01

    and 1995. The results support the hypotheses of constant returns to scale to factors and decreasing returns to accumulable factors. The low capital-labor ratio countries have important differences in factor elasticities in relation to other countries. The augmentation of the production function by human capital did not reduce the elasticity of physical capital as suggested by Mankiw, Romer and Weil (1992). Moreover, it is investigated whether the factor shares are really equal to their output elasticities. The wage share rises with the capital-labor ratio, and the sum of the output elasticities of labor and human capital is below the wage share for high capital-labor ratio countries, with the inverse holding for low capital-labor ratio countries. This indicates the presence of externalities, or imperfect competition, or that the marginal theory of distribution is inaccurate.

  16. Fast and accurate computation of projected two-point functions

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto the configuration space, ξ_ℓ^ν(r), or spherical harmonic space, C_ℓ(χ, χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.

  17. Computing the hadronic vacuum polarization function by analytic continuation

    Feng, Xu [KEK National High Energy Physics, Tsukuba (Japan); Hashimoto, Shoji [KEK National High Energy Physics, Tsukuba (Japan); The Graduate Univ. for Advanced Studies, Tsukuba (Japan). School of High Energy Accelerator Science; Hotzel, Grit [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Petschlies, Marcus [The Cyprus Institute, Nicosia (Cyprus); Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States)

    2013-07-15

    We propose a method to compute the hadronic vacuum polarization function on the lattice at continuous values of photon momenta bridging between the space-like and time-like regions. We provide two independent derivations of this method showing that it leads to the desired hadronic vacuum polarization function in Minkowski space-time. We show with the example of the leading-order QCD correction to the muon anomalous magnetic moment that this approach can provide a valuable alternative method for calculations of physical quantities where the hadronic vacuum polarization function enters.

  18. Variance computations for functional of absolute risk estimates.

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence-function-based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence-function-based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
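
A minimal sketch of the influence-function idea, using the sample mean (whose influence function is x_i minus the mean) rather than the paper's absolute-risk functional:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=500)

# Influence-function-based variance of a plug-in estimator, illustrated with
# the sample mean: the influence function of the mean is IF_i = x_i - mean.
mean = x.mean()
infl = x - mean
var_if = np.sum(infl**2) / len(x) ** 2     # variance estimate for the mean

# For a smooth function g of the estimate (e.g. a log-transformed risk),
# the delta method multiplies the influence function by g'(mean).
gprime = 1.0 / mean                        # derivative of g = log at the mean
var_g = np.sum((gprime * infl) ** 2) / len(x) ** 2

print(f"mean={mean:.4f}, IF-based SE={np.sqrt(var_if):.4f}, "
      f"SE of log(mean)={np.sqrt(var_g):.4f}")
```

For the mean this reproduces the usual s²/n formula; the appeal of the approach is that the same recipe extends to more complicated functionals, such as absolute risk, without bootstrap resampling.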

  19. HOMOGENEOUS UGRIZ PHOTOMETRY FOR ACS VIRGO CLUSTER SURVEY GALAXIES: A NON-PARAMETRIC ANALYSIS FROM SDSS IMAGING

    Chen, Chin-Wei; Cote, Patrick; Ferrarese, Laura; West, Andrew A.; Peng, Eric W.

    2010-01-01

    We present photometric and structural parameters for 100 ACS Virgo Cluster Survey (ACSVCS) galaxies based on homogeneous, multi-wavelength (ugriz), wide-field SDSS (DR5) imaging. These early-type galaxies, which trace out the red sequence in the Virgo Cluster, span a factor of nearly ∼10³ in g-band luminosity. We describe an automated pipeline that generates background-subtracted mosaic images, masks field sources and measures mean shapes, total magnitudes, effective radii, and effective surface brightnesses using a model-independent approach. A parametric analysis of the surface brightness profiles is also carried out to obtain Sersic-based structural parameters and mean galaxy colors. We compare the galaxy parameters to those in the literature, including those from the ACSVCS, finding good agreement in most cases, although the sizes of the brightest, and most extended, galaxies are found to be most uncertain and model dependent. Our photometry provides an external measurement of the random errors on total magnitudes from the widely used Virgo Cluster Catalog, which we estimate to be σ(B_T) ∼ 0.13 mag for the brightest galaxies, rising to ∼0.3 mag for galaxies at the faint end of our sample (B_T ∼ 16). The distribution of axial ratios of low-mass ("dwarf") galaxies bears a strong resemblance to the one observed for the higher-mass ("giant") galaxies. The global structural parameters for the full galaxy sample (profile shape, effective radius, and mean surface brightness) are found to vary smoothly and systematically as a function of luminosity, with unmistakable evidence for changes in structural homology along the red sequence. As noted in previous studies, the ugriz galaxy colors show a nonlinear but smooth variation over a ∼7 mag range in absolute magnitude, with an enhanced scatter for the faintest systems that is likely the signature of their more diverse star formation histories.

  1. Computing three-point functions for short operators

    Bargheer, Till [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Institute for Advanced Study, Princeton, NJ (United States). School of Natural Sciences; Minahan, Joseph A.; Pereira, Raul [Uppsala Univ. (Sweden). Dept. of Physics and Astronomy

    2013-11-15

We compute the three-point structure constants for short primary operators of N=4 super Yang-Mills theory to leading order in 1/√(λ) by mapping the problem to a flat-space string theory calculation. We check the validity of our procedure by comparing to known results for three chiral primaries. We then compute the three-point functions for any combination of chiral and non-chiral primaries, with the non-chiral primaries all dual to string states at the first massive level. Along the way we find many cancellations that leave us with simple expressions, suggesting that integrability is playing an important role.

  2. Structure, function, and behaviour of computational models in systems biology.

    Knüpfer, Christian; Beckstein, Clemens; Dittrich, Peter; Le Novère, Nicolas

    2013-05-31

Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such "bio-models" necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But even if computational bio-models themselves are represented precisely in terms of mathematical expressions, their full meaning is not yet formally specified; it is only described in natural language. We present a conceptual framework - the meaning facets - which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: on the one hand, it is a mathematical expression which can be used in computational simulations (intrinsic meaning); on the other hand, the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model's components (structure), the meaning of the model's intended use (function), and the meaning of the model's dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework, we apply it to two semantically related models of the cell cycle. In doing so, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. The meaning facets framework provides a systematic, in-depth approach to the semantics of bio-models. It can serve two important purposes: first, it specifies and structures the information which biologists have to take into account when they build, use and exchange models; secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research.

  4. Image reconstruction of computed tomograms using functional algebra

    Bradaczek, M.; Bradaczek, H.

    1997-01-01

A detailed presentation of the process for calculating computed tomograms from the measured data by means of functional algebra is given, and an attempt is made to make the relationships clear to readers inexperienced in mathematics. Suggestions are also made to the manufacturers for improving tomography software, although the authors cannot exclude the possibility that some of the recommendations may have already been realized. An interpolation in Fourier space to right-angled coordinates was not employed, so that additional computer time and errors resulting from the interpolation are avoided. The savings in calculation time can only be estimated but should amount to about 25%. The error-correction calculation is merely a suggestion, since it depends considerably on the apparatus used. Functional algebra is introduced here because it is not so well known but does provide appreciable simplifications in comparison to an explicit presentation. Didactic reasons, as well as the possibility of reducing calculation time, provided the motivation for this work. (orig.) [de

  5. FCJ-131 Pervasive Computing and Prosopopoietic Modelling – Notes on computed function and creative action

    Anders Michelsen

    2011-12-01

This article treats the philosophical underpinnings of the notions of ubiquity and pervasive computing from a historical perspective. The current focus on these notions reflects the ever-increasing impact of new media and the underlying complexity of computed function in the broad sense of ICT that has spread vertiginously since Mark Weiser coined the term 'pervasive', e.g., digitalised sensoring, monitoring, effectuation, intelligence, and display. Whereas Weiser's original perspective may seem fulfilled since computing is everywhere, in his and Seely Brown's (1997) terms, 'invisible', on the horizon, 'calm', it also points to a much more important and slightly different perspective: that of creative action upon novel forms of artifice. Most importantly for this article, ubiquity and pervasive computing are seen to point to the continuous existence throughout the computational heritage since the mid-20th century of a paradoxical distinction/complicity between the technical organisation of computed function and the human Being, in the sense of creative action upon such function. This paradoxical distinction/complicity promotes a chiastic (Merleau-Ponty) relationship of extension of one into the other. It also indicates a generative creation that itself points to important issues of ontology with methodological implications for the design of computing. In this article these implications will be conceptualised as prosopopoietic modeling on the basis of Bernward Joerges' introduction of the classical rhetoric term 'prosopopoeia' into the debate on large technological systems. First, the paper introduces the paradoxical distinction/complicity by debating Gilbert Simondon's notion of a 'margin of indeterminacy' vis-a-vis computing. Second, it debates the idea of prosopopoietic modeling, pointing to a principal role of the paradoxical distinction/complicity within the computational heritage in three cases: a. Prosopopoietic

  6. Using computational models to relate structural and functional brain connectivity

    Hlinka, Jaroslav; Coombes, S.

    2012-01-01

Vol. 36, No. 2 (2012), pp. 2137-2145. ISSN 0953-816X. R&D Projects: GA MŠk 7E08027. EU Projects: European Commission (XE) 200728 - BRAINSYNC. Institutional research plan: CEZ:AV0Z10300504. Keywords: brain disease * computational modelling * functional connectivity * graph theory * structural connectivity. Subject RIV: FH - Neurology. Impact factor: 3.753, year: 2012

  7. Brookhaven Reactor Experiment Control Facility, a distributed function computer network

    Dimmler, D.G.; Greenlaw, N.; Kelley, M.A.; Potter, D.W.; Rankowitz, S.; Stubblefield, F.W.

    1975-11-01

    A computer network for real-time data acquisition, monitoring and control of a series of experiments at the Brookhaven High Flux Beam Reactor has been developed and has been set into routine operation. This reactor experiment control facility presently services nine neutron spectrometers and one x-ray diffractometer. Several additional experiment connections are in progress. The architecture of the facility is based on a distributed function network concept. A statement of implementation and results is presented

  8. Computation of bessel functions in light scattering studies.

    Ross, W D

    1972-09-01

    Computations of light scattering require finding Bessel functions of a series of orders. These are found most easily by recurrence, but excessive rounding errors may accumulate. Satisfactory procedures for cylinder and sphere functions are described. If argument z is real, find Y(n)(z) by recurrence to high orders. From two high orders of Y(n)(z) estimate J(n)(z). Use backward recurrence to maximum J(n)(z). Correct by forward recurrence to maximum. If z is complex, estimate high orders of J(n)(z) without Y(n)(z) and use backward recurrence.
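The backward-recurrence procedure described in this record is essentially Miller's algorithm. A hedged sketch for real argument follows (the function name, seed value and starting-order offset are illustrative choices, not taken from the paper):

```python
import numpy as np

def bessel_j(nmax, z, start_offset=20):
    """Estimate J_0(z) .. J_nmax(z) by Miller's backward recurrence (real z != 0).

    The three-term recurrence J_{k-1}(z) = (2k/z) J_k(z) - J_{k+1}(z) is run
    downward from a safely high order with arbitrary tiny seed values; the
    result is then normalised with the identity J_0 + 2*(J_2 + J_4 + ...) = 1.
    """
    m = nmax + start_offset + int(abs(z))   # start well above nmax
    j = np.zeros(m + 2)
    j[m + 1] = 0.0
    j[m] = 1e-30                            # arbitrary seed; scale cancels
    for k in range(m, 0, -1):               # backward recurrence
        j[k - 1] = (2.0 * k / z) * j[k] - j[k + 1]
    norm = j[0] + 2.0 * j[2:m:2].sum()      # J_0 + 2*(J_2 + J_4 + ...)
    return j[: nmax + 1] / norm
```

Backward recurrence is stable for J_n (the minimal solution), whereas forward recurrence amplifies rounding errors, which is the accumulation problem the abstract refers to.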

  9. Computations of nuclear response functions with MACK-IV

    Abdou, M.A.; Gohar, Y.

    1978-01-01

    The MACK computer program calculates energy pointwise and multigroup nuclear response functions from basic nuclear data in ENDF/B format. The new version of the program, MACK-IV, incorporates major developments and improvements aimed at maximizing the utilization of available nuclear data and ensuring energy conservation in nuclear heating calculations. A new library, MACKLIB-IV, of nuclear response functions was generated in the CTR energy group structure of 171 neutron groups and 36 gamma groups. The library was prepared using MACK-IV and ENDF/B-IV and is suitable for fusion, fusion-fission hybrids, and fission applications

  10. Efficient quantum algorithm for computing n-time correlation functions.

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
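As an illustration of the ancilla idea, here is a hedged single-qubit statevector sketch: Hadamard on the ancilla, controlled-A, free evolution, controlled-B, then reading the correlation off the ancilla Pauli expectations. The Hamiltonian H = σ_x and operators A = B = σ_z are assumptions chosen for the example, not the paper's protocol in detail:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def controlled(op):
    """Apply op to the system only when the ancilla (first qubit) is |1>."""
    return np.kron(np.diag([1.0, 0.0]), I2) + np.kron(np.diag([0.0, 1.0]), op)

def two_time_correlation(t, psi, A=sz, B=sz):
    """<B(t) A(0)> = <psi| U(t)^dag B U(t) A |psi> via one ancilla qubit:
    Hadamard, controlled-A, evolution U(t) = exp(-i*sx*t) on the system,
    controlled-B, then <sigma_x> + i*<sigma_y> of the ancilla."""
    U = np.cos(t) * I2 - 1j * np.sin(t) * sx         # exp(-i*sx*t) for H = sx
    Had = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    state = np.kron(np.array([1, 0], dtype=complex), psi)  # |0>_anc |psi>_sys
    state = np.kron(Had, I2) @ state
    state = controlled(A) @ state
    state = np.kron(I2, U) @ state                   # evolution on both branches
    state = controlled(B) @ state
    ex = (state.conj() @ np.kron(sx, I2) @ state).real
    ey = (state.conj() @ np.kron(sy, I2) @ state).real
    return ex + 1j * ey
```

For psi = |0> with these choices the correlation should reduce analytically to cos(2t), which the statevector simulation reproduces.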

  12. Optimized Kaiser-Bessel Window Functions for Computed Tomography.

    Nilchian, Masih; Ward, John Paul; Vonesch, Cedric; Unser, Michael

    2015-11-01

    Kaiser-Bessel window functions are frequently used to discretize tomographic problems because they have two desirable properties: 1) their short support leads to a low computational cost and 2) their rotational symmetry makes their imaging transform independent of the direction. In this paper, we aim at optimizing the parameters of these basis functions. We present a formalism based on the theory of approximation and point out the importance of the partition-of-unity condition. While we prove that, for compact-support functions, this condition is incompatible with isotropy, we show that minimizing the deviation from the partition of unity condition is highly beneficial. The numerical results confirm that the proposed tuning of the Kaiser-Bessel window functions yields the best performance.
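A small numerical sketch of the partition-of-unity deviation discussed in this record, for a 1-D Kaiser-Bessel profile. The function names and the parameter values (support a = 2, shape α = 10.4) are illustrative assumptions, not the paper's optimized values:

```python
import numpy as np

def i0(x):
    # modified Bessel function I_0 via its power series (fine for moderate x)
    x = np.asarray(x, dtype=float)
    term = np.ones_like(x)
    total = np.ones_like(x)
    for k in range(1, 30):
        term = term * (x / (2 * k)) ** 2   # term_k / term_{k-1} = (x/2k)^2
        total += term
    return total

def kb_window(r, a=2.0, alpha=10.4):
    """1-D Kaiser-Bessel profile of order m = 0 with support |r| <= a."""
    s = np.clip(1.0 - (r / a) ** 2, 0.0, None)
    return np.where(np.abs(r) <= a, i0(alpha * np.sqrt(s)) / i0(alpha), 0.0)

def pou_deviation(a=2.0, alpha=10.4):
    """Relative deviation of sum_k b(x - k) from a constant on a unit grid."""
    x = np.linspace(0.0, 1.0, 201)
    s = sum(kb_window(x - k, a, alpha) for k in range(-6, 7))
    return (s.max() - s.min()) / s.mean()
```

Scanning `pou_deviation` over α shows how the deviation from the partition of unity depends on the window parameters, which is the quantity the paper proposes to minimize.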

  13. Global sensitivity analysis of computer models with functional inputs

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU time, which require a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
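For reference, plain variance-based Sobol indices can be estimated by Monte Carlo without any metamodel. A hedged sketch on the Ishigami function (a standard analytical test case) using a Saltelli-style pick-freeze estimator follows; function names are illustrative, and this shows only first-order indices, not the paper's joint GLM/GAM approach for functional inputs:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # standard analytical test function on [-pi, pi]^3
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

def sobol_first_order(model, dim, n=200_000, seed=0):
    """First-order Sobol indices S_i by pick-freeze Monte Carlo,
    inputs i.i.d. uniform on [-pi, pi]."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, dim))
    B = rng.uniform(-np.pi, np.pi, (n, dim))
    yA, yB = model(A), model(B)
    var = np.concatenate([yA, yB]).var()
    s = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # A with column i taken from B
        s[i] = np.mean(yB * (model(ABi) - yA)) / var
    return s
```

For the Ishigami function with a = 7, b = 0.1, the analytic first-order indices are approximately S1 = 0.314, S2 = 0.442, S3 = 0, which the estimator should reproduce up to Monte Carlo noise.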

  14. A hybrid method for the parallel computation of Green's functions

    Petersen, Dan Erik; Li Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric

    2009-01-01

    Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
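The serial recurrence that the paper's parallel Schur-complement/cyclic-reduction scheme is designed to replace can be sketched as follows. This is a minimal numpy illustration of the classical recursive sweep for the diagonal blocks of the inverse of a block-tridiagonal matrix (names are illustrative, and it is not the paper's parallel algorithm):

```python
import numpy as np

def rgf_diagonal_blocks(Ad, Au, Al):
    """Diagonal blocks of inv(A) for a block-tridiagonal matrix A.

    Ad[i] is the i-th diagonal block, Au[i] = A[i, i+1], Al[i] = A[i+1, i].
    Classical serial two-sweep recurrence: it never forms the full inverse,
    only the n diagonal blocks.
    """
    n = len(Ad)
    gL = [np.linalg.inv(Ad[0])]                       # left-connected blocks
    for i in range(1, n):                             # forward sweep
        gL.append(np.linalg.inv(Ad[i] - Al[i - 1] @ gL[i - 1] @ Au[i - 1]))
    G = [None] * n
    G[n - 1] = gL[n - 1]
    for i in range(n - 2, -1, -1):                    # backward sweep
        G[i] = gL[i] + gL[i] @ Au[i] @ G[i + 1] @ Al[i] @ gL[i]
    return G
```

Because each step depends on the previous one, the two sweeps are inherently sequential in the block index, which is exactly the parallelization obstacle the abstract describes.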

  15. Estimation from PET data of transient changes in dopamine concentration induced by alcohol: support for a non-parametric signal estimation method

    Constantinescu, C C; Yoder, K K; Normandin, M D; Morris, E D [Department of Radiology, Indiana University School of Medicine, Indianapolis, IN (United States); Kareken, D A [Department of Neurology, Indiana University School of Medicine, Indianapolis, IN (United States); Bouman, C A [Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN (United States); O' Connor, S J [Department of Psychiatry, Indiana University School of Medicine, Indianapolis, IN (United States)], E-mail: emorris@iupui.edu

    2008-03-07

We previously developed a model-independent technique (non-parametric ntPET) for extracting the transient changes in neurotransmitter concentration from paired (rest and activation) PET studies with a receptor ligand. To provide support for our method, we introduced three hypotheses of validation based on work by Endres and Carson (1998 J. Cereb. Blood Flow Metab. 18 1196-210) and Yoder et al (2004 J. Nucl. Med. 45 903-11), and tested them on experimental data. All three hypotheses describe relationships between the estimated free (synaptic) dopamine curves (F^DA(t)) and the change in binding potential (ΔBP). The veracity of the F^DA(t) curves recovered by non-parametric ntPET is supported when the data adhere to the following hypothesized behaviors: (1) ΔBP should decline with increasing DA peak time, (2) ΔBP should increase as the strength of the temporal correlation between F^DA(t) and the free raclopride (F^RAC(t)) curve increases, (3) ΔBP should decline linearly with the effective weighted availability of the receptor sites. We analyzed regional brain data from 8 healthy subjects who received two [¹¹C]raclopride scans: one at rest, and one during which unanticipated IV alcohol was administered to stimulate dopamine release. For several striatal regions, non-parametric ntPET was applied to recover F^DA(t), and binding potential values were determined. Kendall rank-correlation analysis confirmed that the F^DA(t) data followed the expected trends for all three validation hypotheses. Our findings lend credence to our model-independent estimates of F^DA(t). Application of non-parametric ntPET may yield important insights into how alterations in timing of dopaminergic neurotransmission are involved in the pathologies of addiction and other psychiatric disorders.

  16. A pilot single-blind multicentre randomized controlled trial to evaluate the potential benefits of computer-assisted arm rehabilitation gaming technology on the arm function of children with spastic cerebral palsy.

    Preston, Nick; Weightman, Andrew; Gallagher, Justin; Levesley, Martin; Mon-Williams, Mark; Clarke, Mike; O'Connor, Rory J

    2016-10-01

To evaluate the potential benefits of computer-assisted arm rehabilitation gaming technology on the arm function of children with spastic cerebral palsy. A single-blind randomized controlled trial design. Power calculations indicated that 58 children would be required to demonstrate a clinically important difference. The intervention was home-based; recruitment took place in regional spasticity clinics. A total of 15 children with cerebral palsy aged five to 12 years were recruited, eight to the device group. Both study groups received 'usual follow-up treatment' following spasticity treatment with botulinum toxin; the intervention group also received a rehabilitation gaming device. ABILHAND-kids and the Canadian Occupational Performance Measure were administered by blinded assessors at baseline, six and 12 weeks. An analysis of covariance showed no group differences in mean ABILHAND-kids scores between time points. A non-parametric analysis of variance on Canadian Occupational Performance Measure scores showed a statistically significant improvement across time points (χ²(2,15) = 6.778, p = 0.031), but this improvement did not reach the minimal clinically important difference. Mean daily device use was seven minutes. Recruitment did not reach its target owing to unanticipated staff shortages in clinical services. Feedback from children and their families indicated that the games were not engaging enough to promote the level of use likely to yield functional benefits. This study suggests that computer-assisted arm rehabilitation gaming does not benefit arm function, but a Type II error cannot be ruled out. © The Author(s) 2015.

  17. Symbolic Computation, Number Theory, Special Functions, Physics and Combinatorics

    Ismail, Mourad

    2001-01-01

These are the proceedings of the conference "Symbolic Computation, Number Theory, Special Functions, Physics and Combinatorics" held at the Department of Mathematics, University of Florida, Gainesville, from November 11 to 13, 1999. The main emphasis of the conference was Computer Algebra (i.e. symbolic computation) and how it related to the fields of Number Theory, Special Functions, Physics and Combinatorics. A subject that is common to all of these fields is q-series. We brought together those who do symbolic computation with q-series and those who need q-series, including workers in Physics and Combinatorics. The goal of the conference was to inform mathematicians and physicists who use q-series of the latest developments in the field of q-series and especially how symbolic computation has aided these developments. Over 60 people were invited to participate in the conference. We ended up having 45 participants at the conference, including six one hour plenary speakers and 28 half hour speakers. T...

  18. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...

  19. Computer functions in overall plant control of candu generating stations

    Chou, Q.B.; Stokes, H.W.

    1976-01-01

System Planning Specifications form the basic requirements for the performance of the plant, including its response to abnormal situations. The rules for the computer control programs are derived from these, taking into account limitations imposed by the reactor, heat transport and turbine-generator systems. The paper outlines these specifications and the limitations imposed by the major items of plant equipment. It describes the functions of each of the main programs, their interactions and the control modes used in Ontario Hydro's existing nuclear stations or proposed for future stations. Some simulation results showing the performance of the overall unit control system, and plans for future studies, are discussed. (orig.) [de

  20. Computing the effective action with the functional renormalization group

    Codello, Alessandro [CP3-Origins and the Danish IAS University of Southern Denmark, Odense (Denmark); Percacci, Roberto [SISSA, Trieste (Italy); INFN, Sezione di Trieste, Trieste (Italy); Rachwal, Leslaw [Fudan University, Department of Physics, Center for Field Theory and Particle Physics, Shanghai (China); Tonero, Alberto [ICTP-SAIFR and IFT, Sao Paulo (Brazil)

    2016-04-15

The 'exact' or 'functional' renormalization group equation describes the renormalization group flow of the effective average action Γ_k. The ordinary effective action Γ_0 can be obtained by integrating the flow equation from an ultraviolet scale k = Λ down to k = 0. We give several examples of such calculations at one loop, both in renormalizable and in effective field theories. We reproduce the four-point scattering amplitude in the case of a real scalar field theory with quartic potential and in the case of the pion chiral Lagrangian. In the case of gauge theories, we reproduce the vacuum polarization of QED and of Yang-Mills theory. We also compute the two-point functions for scalars and gravitons in the effective field theory of scalar fields minimally coupled to gravity. (orig.)
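For reference, the flow equation for the effective average action referred to in this record is the standard Wetterich equation:

```latex
\partial_k \Gamma_k[\phi]
  = \frac{1}{2}\,\mathrm{Tr}\!\left[\left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1}\partial_k R_k\right],
```

where Γ_k^{(2)} is the second functional derivative of Γ_k and R_k is an infrared regulator that vanishes at k = 0; integrating the flow from k = Λ down to k = 0 recovers the ordinary effective action Γ_0.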

  1. Validation and psychometric properties of the Somatic and Psychological HEalth REport (SPHERE) in a young Australian-based population sample using non-parametric item response theory.

    Couvy-Duchesne, Baptiste; Davenport, Tracey A; Martin, Nicholas G; Wright, Margaret J; Hickie, Ian B

    2017-08-01

The Somatic and Psychological HEalth REport (SPHERE) is a 34-item self-report questionnaire that assesses symptoms of mental distress and persistent fatigue. As it was developed as a screening instrument for use mainly in primary care-based clinical settings, its validity and psychometric properties have not been studied extensively in population-based samples. We used non-parametric Item Response Theory to assess scale validity and item properties of the SPHERE-34 scales, collected through four waves of the Brisbane Longitudinal Twin Study (N = 1707, mean age = 12, 51% females; N = 1273, mean age = 14, 50% females; N = 1513, mean age = 16, 54% females; N = 1263, mean age = 18, 56% females). We estimated the heritability of the new scores, their genetic correlation, and their predictive ability in a sub-sample (N = 1993) who completed the Composite International Diagnostic Interview. After excluding items most responsible for noise, sex or wave bias, the SPHERE-34 questionnaire was reduced to 21 items (SPHERE-21), comprising a 14-item scale for anxiety-depression and a 10-item scale for chronic fatigue (3 items overlapping). These new scores showed high internal consistency (alpha > 0.78), moderate three-month reliability (ICC = 0.47-0.58) and item scalability (Hi > 0.23), and were positively correlated (phenotypic correlations r = 0.57-0.70; rG = 0.77-1.00). Heritability estimates ranged from 0.27 to 0.51. In addition, both scores were associated with later DSM-IV diagnoses of MDD, social anxiety and alcohol dependence (ORs in the range 1.23-1.47). Finally, a post-hoc comparison showed that several psychometric properties of the SPHERE-21 were similar to those of the Beck Depression Inventory. The scales of SPHERE-21 measure valid and comparable constructs across sex and age groups (from 9 to 28 years). SPHERE-21 scores are heritable, genetically correlated and show good predictive ability of mental health in an Australian-based population

  3. Mokken scale analysis of mental health and well-being questionnaire item responses: a non-parametric IRT method in empirical research for applied health researchers.

    Stochl, Jan; Jones, Peter B; Croudace, Tim J

    2012-06-11

Mokken scaling techniques are a useful tool for researchers who wish to construct unidimensional tests or use questionnaires that comprise multiple binary or polytomous items. The stochastic cumulative scaling model offered by this approach is ideally suited when the intention is to score an underlying latent trait by simple addition of the item response values. In our experience, the Mokken model appears to be less well-known than, for example, the (related) Rasch model, but is seeing increasing use in contemporary clinical research and public health. Mokken's method is a generalisation of Guttman scaling that can assist in the determination of the dimensionality of tests or scales, and enables consideration of reliability, without reliance on Cronbach's alpha. This paper provides a practical guide to the application and interpretation of this non-parametric item response theory method in empirical research with health and well-being questionnaires. Scalability of data from 1) a cross-sectional health survey (the Scottish Health Education Population Survey) and 2) a general population birth cohort study (the National Child Development Study) illustrates the method and modeling steps for dichotomous and polytomous items respectively. The questionnaire data analyzed comprise responses to the 12-item General Health Questionnaire, under the binary recoding recommended for screening applications, and the ordinal/polytomous responses to the Warwick-Edinburgh Mental Well-being Scale. After an initial analysis example in which we select items by phrasing (six positive versus six negatively worded items), we show that all items from the 12-item General Health Questionnaire (GHQ-12), when binary scored, were scalable according to the double monotonicity model, in two short scales comprising six items each (Bech's "well-being" and "distress" clinical scales). An illustration of ordinal item analysis confirmed that all 14 positively worded items of the Warwick-Edinburgh Mental
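At the core of Mokken scale analysis is Loevinger's scalability coefficient H. A minimal sketch for dichotomous items follows; this illustrates only the H coefficient itself (real Mokken software additionally provides item-level coefficients, standard errors, automated item selection and monotonicity checks), and the function name is illustrative:

```python
import numpy as np

def loevinger_h(X):
    """Scalability coefficient H for a binary item-response matrix X
    (rows = persons, columns = items): H = 1 - sum(F) / sum(E), where
    F counts observed Guttman errors per item pair and E is their
    expectation under marginal independence.  H = 1 is a perfect
    Guttman scale; Mokken's rule of thumb requires H > 0.3."""
    n, m = X.shape
    p = X.mean(axis=0)                      # item popularities
    F = E = 0.0
    for i in range(m):
        for j in range(i + 1, m):
            easy, hard = (i, j) if p[i] >= p[j] else (j, i)
            # Guttman error: failing the easier item but passing the harder
            F += np.sum((X[:, easy] == 0) & (X[:, hard] == 1))
            E += n * (1 - p[easy]) * p[hard]
    return 1.0 - F / E
```

Perfectly cumulative (Guttman) data give H = 1, while items answered independently of the latent trait give H near 0.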

  4. Computational Models for Calcium-Mediated Astrocyte Functions

    Tiina Manninen

    2018-04-01

    Full Text Available The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. 
Furthermore, only a few of the models are available online, which makes it difficult to reproduce the simulation results and further develop

  5. Computational Models for Calcium-Mediated Astrocyte Functions.

    Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena

    2018-01-01

    The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online, which makes it difficult to reproduce the simulation results and further develop the models. 
Thus
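The models surveyed above center on astrocytic calcium dynamics. As a heavily hedged caricature of what a minimal single-astrocyte calcium model looks like (not any specific published model), the sketch below integrates a two-variable ODE, cytosolic calcium exchanging with an ER store, using forward Euler. All rate constants are illustrative, not fitted.

```python
# A minimal caricature of a single-astrocyte calcium model: cytosolic Ca2+ (c)
# exchanges with an ER store (e) via a CICR-like release term, a SERCA-like
# pump, and a passive leak. Hypothetical parameters; forward Euler integration.

def simulate_calcium(c0=0.1, e0=1.5, k_rel=0.8, k_pump=0.5, k_leak=0.02,
                     dt=0.001, steps=5000):
    """Return the cytosolic Ca2+ trace (arbitrary units)."""
    c, e = c0, e0
    trace = []
    for _ in range(steps):
        j_release = k_rel * (c ** 2 / (c ** 2 + 0.25)) * (e - c)  # CICR-like term
        j_pump = k_pump * c           # SERCA-like reuptake into the ER
        j_leak = k_leak * (e - c)     # passive leak from the ER
        dc = j_release + j_leak - j_pump
        c += dt * dc
        e -= dt * dc                  # total calcium is conserved in this sketch
        trace.append(c)
    return trace

trace = simulate_calcium()
```

Published models (e.g. IP3-receptor-based formulations) add gating variables and stimulus-dependent IP3 production; this sketch only shows the skeleton that the surveyed model categories share.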

  6. An evolutionary computation approach to examine functional brain plasticity

    Arnab eRoy

    2016-04-01

    Full Text Available One common research goal in systems neuroscience is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost by averaging heterogeneous voxels; therefore, a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. 
Group-level analyses using these plasticity estimates showed an increase in
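The core idea, searching over voxel subsets for the pair whose averaged signals show the largest change in correlation between sessions, can be sketched as a toy evolutionary loop. This is not the authors' code: the data are synthetic, and the fitness and mutation scheme are deliberately simplified (no spatial-connectivity constraint, no significance testing).

```python
# Toy re-creation of the EC idea: mutate voxel subsets of two ROIs, keeping
# candidates whose averaged signals show the largest gain/loss of correlation
# between session 1 and session 2. All names and data are illustrative.
import random

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def mean_signal(roi, subset):
    t_len = len(roi[0])
    return [sum(roi[v][t] for v in subset) / len(subset) for t in range(t_len)]

def fitness(sess1, sess2, sub_a, sub_b):
    (a1, b1), (a2, b2) = sess1, sess2
    c1 = corr(mean_signal(a1, sub_a), mean_signal(b1, sub_b))
    c2 = corr(mean_signal(a2, sub_a), mean_signal(b2, sub_b))
    return abs(c2 - c1)  # gain/loss in functional relationship strength

def mutate(subset, n_vox, rng):
    child = set(subset)
    v = rng.randrange(n_vox)
    child.symmetric_difference_update({v})  # flip one voxel in/out
    return child or {v}                      # keep the subset non-empty

def evolve(sess1, sess2, generations=300, seed=0):
    rng = random.Random(seed)
    n_a, n_b = len(sess1[0]), len(sess1[1])
    sub_a, sub_b = set(range(n_a)), set(range(n_b))
    best = fitness(sess1, sess2, sub_a, sub_b)
    for _ in range(generations):
        cand_a, cand_b = mutate(sub_a, n_a, rng), mutate(sub_b, n_b, rng)
        f = fitness(sess1, sess2, cand_a, cand_b)
        if f >= best:
            best, sub_a, sub_b = f, cand_a, cand_b
    return best, sub_a, sub_b

rng = random.Random(1)
make_roi = lambda: [[rng.random() for _ in range(30)] for _ in range(5)]
session1 = (make_roi(), make_roi())  # (ROI A, ROI B) voxel time series, time 1
session2 = (make_roi(), make_roi())  # same ROIs at time 2
best, sub_a, sub_b = evolve(session1, session2)
```

The published procedure additionally enforces spatial connectivity of each sub-region and tests significance before recursing; the loop above only shows the search-by-mutation skeleton.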

  7. Computer Modeling of the Earliest Cellular Structures and Functions

    Pohorille, Andrew; Chipot, Christophe; Schweighofer, Karl

    2000-01-01

    In the absence of an extinct or extant record of protocells (the earliest ancestors of contemporary cells), the most direct way to test our understanding of the origin of cellular life is to construct laboratory models of protocells. Such efforts are currently underway in the NASA Astrobiology Program. They are accompanied by computational studies aimed at explaining the self-organization of simple molecules into ordered structures and developing designs for molecules that perform proto-cellular functions. Many of these functions, such as import of nutrients, capture and storage of energy, and response to changes in the environment, are carried out by proteins bound to membranes. The simulations address (a) how peptides organize into ordered structures at water-membrane interfaces and insert into membranes, (b) how these peptides aggregate to form membrane-spanning structures (e.g. channels), and (c) by what mechanisms such aggregates perform essential proto-cellular functions, such as the transport of protons across cell walls, a key step in cellular bioenergetics. The simulations were performed using the molecular dynamics method, in which Newton's equations of motion for each atom in the system are solved iteratively. The problems of interest required simulations on multi-nanosecond time scales, corresponding to 10^6-10^8 time steps.
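The iterative solution of Newton's equations that the record describes is typically done with a symplectic integrator. A minimal sketch, using velocity Verlet on a one-dimensional harmonic oscillator with unit mass and spring constant (values chosen only for illustration):

```python
# Sketch of the molecular-dynamics integration loop: velocity Verlet for a
# single degree of freedom with mass m = 1. Real MD applies the same update
# to every atom with forces from an empirical potential.

def velocity_verlet(x, v, force, dt, steps):
    """Integrate m = 1 dynamics; returns the trajectory of (x, v) pairs."""
    traj = []
    a = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x)                      # force at the new position
        v = v + 0.5 * (a + a_new) * dt        # velocity update (averaged force)
        a = a_new
        traj.append((x, v))
    return traj

# harmonic force F = -x; total energy 0.5*v**2 + 0.5*x**2 starts at 0.5
traj = velocity_verlet(1.0, 0.0, lambda x: -x, dt=0.01, steps=10000)
```

Velocity Verlet is favored in MD because it is time-reversible and keeps the energy error bounded over long runs, which matters at the 10^6-10^8 step counts mentioned above.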

  8. Computing the Partition Function for Kinetically Trapped RNA Secondary Structures

    Lorenz, William A.; Clote, Peter

    2011-01-01

    An RNA secondary structure is locally optimal if there is no lower-energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in O(n^3) time and O(n^2) space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far fewer than the total number of structures; indeed, the number of locally optimal structures is approximately equal to the square root of the number of all structures, (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA, and (3) the (modified) maximum expected accuracy structure, computed by taking into account base pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in our study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, previously impossible with methods that involved exhaustive enumeration. Use of locally optimal structures leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected accuracy. Web server

  9. Computing the partition function for kinetically trapped RNA secondary structures.

    William A Lorenz

    Full Text Available An RNA secondary structure is locally optimal if there is no lower-energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in O(n^3) time and O(n^2) space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far fewer than the total number of structures; indeed, the number of locally optimal structures is approximately equal to the square root of the number of all structures, (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA, and (3) the (modified) maximum expected accuracy structure, computed by taking into account base pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in our study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, previously impossible with methods that involved exhaustive enumeration. 
Use of locally optimal structures leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected accuracy.
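The partition function and Boltzmann probabilities at the heart of this record can be illustrated with a tiny sketch. The energies below are an assumed toy list, standing in for the locally optimal structure energies that RNAlocopt would actually enumerate:

```python
# Illustration of a Boltzmann (sub)ensemble: given structure free energies in
# kcal/mol (toy values), compute the partition function Z and each structure's
# Boltzmann probability at 37 degrees C. Not RNAlocopt's algorithm, which
# computes Z by dynamic programming without enumerating structures.
import math

RT = 0.0019872 * 310.15  # gas constant (kcal/mol/K) times 310.15 K

def boltzmann(energies):
    z = sum(math.exp(-e / RT) for e in energies)
    probs = [math.exp(-e / RT) / z for e in energies]
    return z, probs

z, probs = boltzmann([-10.0, -9.2, -8.5, -7.1])
```

Sampling "a user-specified number of structures from the Boltzmann subensemble" then amounts to drawing structures with these probabilities; the lowest-energy structure is always the most probable.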

  10. Automatic quantitative analysis of liver functions by a computer system

    Shinpo, Takako

    1984-01-01

    In the previous paper, we confirmed the clinical usefulness of hepatic clearance (hepatic blood flow), that is, the hepatic uptake and blood disappearance rate coefficients. These were obtained from the initial slope index of each minute over a period of five frames of a hepatogram after injection of 37 MBq of sup(99m)Tc-Sn-colloid. To analyze the information simply, rapidly and accurately, we developed an automatic quantitative analysis for liver functions. Information was obtained every quarter minute during a period of 60 frames of the sequential image. The sequential counts were measured for the heart, the whole liver, and the left and right lobes using a computer connected to a scintillation camera. We measured the effective hepatic blood flow as the disappearance rate multiplied by the percentage of hepatic uptake, defined as (liver counts)/(total counts of the field). Our method of analysis automatically recorded the graphs of the disappearance curve and uptake curve on the basis of the heart and the whole liver, respectively; the computations were programmed in BASIC. This method makes it possible to obtain an image of the initial uptake of sup(99m)Tc-Sn-colloid into the liver with a small dose. (author)
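The two quantities described, a blood disappearance rate from the heart curve and an effective hepatic blood flow as that rate scaled by the uptake fraction (liver counts)/(total counts of the field), can be sketched as follows. This is an illustrative reconstruction, not the original BASIC program; the count data are synthetic.

```python
# Hypothetical sketch of the record's two computations: (1) the disappearance
# rate as the (negated) least-squares slope of ln(heart counts) vs time, and
# (2) effective hepatic blood flow = disappearance rate * hepatic uptake fraction.
import math

def disappearance_rate(heart_counts, dt):
    """Least-squares slope of ln(counts) vs time; returns the positive rate."""
    n = len(heart_counts)
    t = [i * dt for i in range(n)]
    y = [math.log(c) for c in heart_counts]
    mt, my = sum(t) / n, sum(y) / n
    slope = (sum((a - mt) * (b - my) for a, b in zip(t, y))
             / sum((a - mt) ** 2 for a in t))
    return -slope

def effective_hepatic_blood_flow(heart_counts, liver_counts, total_counts, dt):
    uptake_fraction = liver_counts / total_counts  # (liver)/(total counts of field)
    return disappearance_rate(heart_counts, dt) * uptake_fraction

# synthetic check: counts decay as exp(-0.3 t) at quarter-minute frames,
# and the liver holds 60% of the field's activity -> EHBF = 0.3 * 0.6 = 0.18
heart = [1000.0 * math.exp(-0.3 * 0.25 * i) for i in range(20)]
ehbf = effective_hepatic_blood_flow(heart, 600.0, 1000.0, 0.25)
```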

  11. Imaging local brain function with emission computed tomography

    Kuhl, D.E.

    1984-01-01

    Positron emission tomography (PET) using 18F-fluorodeoxyglucose (FDG) was used to map local cerebral glucose utilization in the study of local cerebral function. This information differs fundamentally from structural assessment by means of computed tomography (CT). In normal human volunteers, the FDG scan was used to determine the cerebral metabolic response to controlled sensory stimulation and the effects of aging. Cerebral metabolic patterns are distinctive among depressed and demented elderly patients. The FDG scan appears normal in the depressed patient and studded with multiple metabolic defects in patients with multiple-infarct dementia; in patients with Alzheimer disease, metabolism is particularly reduced in the parietal cortex, but only slightly reduced in the caudate and thalamus. The interictal FDG scan effectively detects hypometabolic brain zones that are sites of onset for seizures in patients with partial epilepsy, even though these zones usually appear normal on CT scans. The future prospects of PET are discussed.

  12. The Impact of Computer Use on Learning of Quadratic Functions

    Pihlap, Sirje

    2017-01-01

    Studies of the impact of various types of computer use on the results of learning and student motivation have indicated that the use of computers can increase learning motivation, and that computers can have a positive effect, a negative effect, or no effect at all on learning outcomes. Some results indicate that it is not computer use itself that…

  13. A computer vision based candidate for functional balance test.

    Nalci, Alican; Khodamoradi, Alireza; Balkan, Ozgur; Nahab, Fatta; Garudadri, Harinath

    2015-08-01

    Balance in humans is a motor skill based on complex multimodal sensing, processing and control. The ability to maintain balance in activities of daily living (ADL) is compromised by aging, diseases, injuries and environmental factors. The Centers for Disease Control and Prevention (CDC) estimated the cost of falls among older adults at $34 billion in 2013, a figure expected to reach $54.9 billion in 2020. In this paper, we present a brief review of balance impairments followed by subjective and objective tools currently used in clinical settings for human balance assessment. We propose a novel computer vision (CV) based approach as a candidate for a functional balance test. The test takes less than a minute to administer and is expected to be objective, repeatable and highly discriminative in quantifying the ability to maintain posture and balance. We present an informal study with preliminary data from 10 healthy volunteers, and compare performance with a balance assessment system called the BTrackS Balance Assessment Board. Our results show a high degree of correlation with BTrackS. The proposed system promises to be a good candidate for objective functional balance tests and warrants further investigation to assess validity in clinical settings, including acute care, long-term care and assisted living care facilities. Our long-term goals include non-intrusive approaches to assess balance competence during ADL in independent living environments.

  14. Specific features of vocal fold paralysis in functional computed tomography

    Laskowska, K.; Mackiewicz-Nartowicz, H.; Serafin, Z.; Nawrocka, E.

    2008-01-01

    Vocal fold paralysis is usually recognized on laryngological examination, and detailed vocal fold function may be established based on laryngovideostroboscopy. Additional imaging should exclude any morphological causes of the paresis, which should be treated pharmacologically or surgically. The aim of this paper was to analyze computed tomography (CT) images of the larynx in patients with unilateral vocal fold paralysis. CT examinations of the larynx were performed in 10 patients with clinically defined unilateral vocal fold paralysis. The examinations consisted of an unenhanced acquisition and an enhanced 3-phase acquisition: during free breathing, the Valsalva maneuver, and phonation. The analysis included the following morphologic features of the paresis: the deepened epiglottic vallecula, the deepened piriform recess, the thickened and medially positioned aryepiglottic fold, the widened laryngeal pouch, the anteriorly positioned arytenoid cartilage, the thickened vocal fold, and the filled infraglottic space in frontal CT reconstruction. CT images were compared to laryngovideostroboscopy. The most common symptoms of vocal fold paralysis on CT were the deepened epiglottic vallecula and piriform recess, the widened laryngeal pouch with the filled infraglottic space, and the thickened aryepiglottic fold. Regarding the efficiency of paralysis determination, the three functional techniques of laryngeal CT imaging used did not differ significantly, and laryngovideostroboscopy demonstrated its advantage over CT. CT of the larynx is a supplementary examination in the diagnosis of vocal fold paralysis, which may enable topographic analysis of the fold dysfunction. Knowledge of the morphological CT features of the paralysis may help to prevent a false-positive diagnosis of laryngeal cancer. (author)

  15. Functional magnetic resonance maps obtained by personal computer

    Gomez, F. j.; Manjon, J. V.; Robles, M.; Marti-Bonmati, L.; Dosda, R.; Molla, E.

    2001-01-01

    Functional magnetic resonance (fMR) is of special relevance in the analysis of certain types of brain activation. The present report describes the development of a simple software program for use with personal computers (PCs) that analyzes these images and provides functional activation maps. Activation maps are based on the temporal differences in oxyhemoglobin in tomographic images. To detect these differences, intensities registered repeatedly during brain control and activation are compared. The experiments were performed with a 1.5-Tesla MR unit. To verify the reliability of the program, fMR studies were carried out in 4 healthy individuals (12 contiguous slices, 80 images per slice every 3.1 seconds, for a total of 960 images). All the images were transferred to a PC and processed pixel by pixel within each sequence to obtain an intensity/time curve. The statistical study of the results (Student's t test and cross-correlation analysis) made it possible to establish the activation of each pixel. The images were prepared using spatial filtering, temporal filtering, baseline correction, normalization and segmentation of the parenchyma. The postprocessing of the results involved the elimination of single pixels, superposition of an anatomical image of greater spatial resolution, and anti-aliasing. The application (Xfun 1.0, Valencia, Spain) was developed in Microsoft Visual C++ 5.0 Developer Studio for Windows NT Workstation. As a representative example, the program took 8.2 seconds to calculate and present the results of the entire study (12 functional maps). In the motor and visual activation experiments, activation was observed in regions proximal to the central sulcus of the hemisphere contralateral to the moving hand and in the occipital cortex. 
While programs that calculate activation maps are available, the development of software for PCs running Microsoft Windows ensures several key features for its use on a daily basis: it is easy
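The pixel-wise analysis described above can be sketched in a few lines: correlate each pixel's intensity/time curve with a block (on/off) stimulus reference and keep pixels above a correlation threshold. The threshold and data below are illustrative, not taken from the paper.

```python
# Sketch of a correlation-based activation map: each pixel's time series is
# compared against a boxcar stimulus reference; pixels whose Pearson
# correlation exceeds a (hypothetical) threshold are marked "active".

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def activation_map(pixels, reference, threshold=0.5):
    """pixels: dict pixel_id -> time series. Returns the set of active pixels."""
    return {p for p, ts in pixels.items() if pearson(ts, reference) > threshold}

reference = [0, 0, 1, 1] * 5                # 20-frame boxcar paradigm (rest/task)
active = [10 + 2 * r for r in reference]    # a pixel that follows the stimulus
noise = [10, 11, 10, 9] * 5                 # a pixel that does not
amap = activation_map({"a": active, "b": noise}, reference)
```

A full pipeline would add the preprocessing steps the record lists (spatial and temporal filtering, baseline correction, normalization, segmentation) before the correlation, and remove isolated suprathreshold pixels afterwards.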

  16. Quantitative Phylogenomics of Within-Species Mitogenome Variation: Monte Carlo and Non-Parametric Analysis of Phylogeographic Structure among Discrete Transatlantic Breeding Areas of Harp Seals (Pagophilus groenlandicus).

    Steven M Carr

    -stepping-stone biogeographic models, but not a simple 1-step trans-Atlantic model. Plots of the cumulative pairwise sequence difference curves among seals in each of the four populations provide continuous proxies for phylogenetic diversification within each. Non-parametric Kolmogorov-Smirnov (K-S) tests of maximum pairwise differences between these curves indicate that the Greenland Sea population has a markedly younger phylogenetic structure than either the White Sea population or the two Northwest Atlantic populations, which are of intermediate age and homogeneous structure. The Monte Carlo and K-S assessments provide sensitive quantitative tests of within-species mitogenomic phylogeography. This is the first study to indicate that the White Sea and Greenland Sea populations have different population genetic histories. The analysis supports the hypothesis that Harp Seals comprise three genetically distinguishable breeding populations, in the White Sea, Greenland Sea, and Northwest Atlantic. Implications for an ice-dependent species during ongoing climate change are discussed.
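The K-S statistic used to compare the cumulative difference curves is just the maximum distance between two empirical CDFs, which is easy to sketch in pure Python. A real analysis would also need the associated p-value (e.g. from scipy.stats.ks_2samp):

```python
# Minimal two-sample Kolmogorov-Smirnov statistic: the maximum absolute
# distance between the empirical CDFs of two samples.

def ks_statistic(sample1, sample2):
    s1, s2 = sorted(sample1), sorted(sample2)
    values = sorted(set(s1) | set(s2))
    d = 0.0
    for v in values:
        cdf1 = sum(1 for x in s1 if x <= v) / len(s1)
        cdf2 = sum(1 for x in s2 if x <= v) / len(s2)
        d = max(d, abs(cdf1 - cdf2))
    return d

# identical samples are at distance 0; samples with disjoint ranges at distance 1
d_same = ks_statistic([1, 2, 3, 4], [1, 2, 3, 4])
d_far = ks_statistic([1, 2, 3, 4], [10, 11, 12, 13])
```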

  17. Functional Dual Adaptive Control with Recursive Gaussian Process Model

    Prüher, Jakub; Král, Ladislav

    2015-01-01

    The paper deals with the dual adaptive control problem, where the functional uncertainties in the system description are modelled by a non-parametric Gaussian process regression model. Current approaches to adaptive control based on Gaussian process models are severely limited in their practical applicability, because the model is re-adjusted using all the currently available data, which keep growing with every time step. We propose the use of a recursive Gaussian process regression algorithm for a significant reduction in computational requirements, thus bringing Gaussian process-based adaptive controllers closer to practical applicability. In this work, we design a bi-criterial dual controller based on a recursive Gaussian process model for discrete-time stochastic dynamic systems given in affine-in-control form. Using Monte Carlo simulations, we show that the proposed controller achieves performance comparable with the full Gaussian process-based controller in terms of control quality while keeping the computational demands bounded. (paper)
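The recursive idea, incorporating each new observation by updating the fitted model rather than refitting on all data, can be sketched with a rank-one block update of the inverse kernel matrix. This is a generic online-GP sketch (not the authors' algorithm, which also bounds memory); all names and hyperparameters are illustrative.

```python
# Sketch of recursive Gaussian process regression: each update() folds in one
# observation in O(n^2) via a block update of (K + noise*I)^-1, instead of
# re-inverting the full kernel matrix from scratch. Pure Python, RBF kernel.
import math

def rbf(x1, x2, ell=1.0):
    return math.exp(-0.5 * ((x1 - x2) / ell) ** 2)

class RecursiveGP:
    def __init__(self, noise=1e-6):
        self.noise = noise
        self.xs, self.ys = [], []
        self.k_inv = []  # inverse of (K + noise*I), stored as a list of rows

    def update(self, x, y):
        k = [rbf(x, xi) for xi in self.xs]
        if not self.xs:
            self.k_inv = [[1.0 / (rbf(x, x) + self.noise)]]
        else:
            n = len(self.xs)
            a_k = [sum(row[j] * k[j] for j in range(n)) for row in self.k_inv]
            s = rbf(x, x) + self.noise - sum(k[i] * a_k[i] for i in range(n))
            new = [[self.k_inv[i][j] + a_k[i] * a_k[j] / s for j in range(n)]
                   + [-a_k[i] / s] for i in range(n)]
            new.append([-a_k[j] / s for j in range(n)] + [1.0 / s])
            self.k_inv = new  # exact Schur-complement block inverse update
        self.xs.append(x)
        self.ys.append(y)

    def predict(self, x):
        n = len(self.xs)
        k = [rbf(x, xi) for xi in self.xs]
        alpha = [sum(self.k_inv[i][j] * self.ys[j] for j in range(n))
                 for i in range(n)]
        return sum(k[i] * alpha[i] for i in range(n))  # posterior mean

gp = RecursiveGP()
for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
    gp.update(x, math.sin(x))
```

A controller built on such a model would query predict() inside its cost criterion at every step; the point of the recursion is that each step costs O(n^2) rather than the O(n^3) of a full refit.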

  18. Computation of Value Functions in Nonlinear Differential Games with State Constraints

    Botkin, Nikolai; Hoffmann, Karl-Heinz; Mayer, Natalie; Turova, Varvara

    2013-01-01

    Finite-difference schemes for the computation of value functions of nonlinear differential games with non-terminal payoff functional and state constraints are proposed. The solution method is based on the fact that the value function is a

  19. Computational Benchmarking for Ultrafast Electron Dynamics: Wave Function Methods vs Density Functional Theory.

    Oliveira, Micael J T; Mignolet, Benoit; Kus, Tomasz; Papadopoulos, Theodoros A; Remacle, F; Verstraete, Matthieu J

    2015-05-12

    Attosecond electron dynamics in small- and medium-sized molecules, induced by an ultrashort strong optical pulse, is studied computationally for a frozen nuclear geometry. The importance of exchange and correlation effects on the nonequilibrium electron dynamics induced by the interaction of the molecule with the strong optical pulse is analyzed by comparing the solution of the time-dependent Schrödinger equation based on the correlated field-free stationary electronic states computed with the equation-of-motion coupled cluster singles and doubles and the complete active space multi-configurational self-consistent field methodologies on one hand, and various functionals in real-time time-dependent density functional theory (TDDFT) on the other. We aim to evaluate the performance of the latter approach, which is very widely used for nonlinear absorption processes and whose computational cost has a more favorable scaling with the system size. We focus on LiH as a toy model for a nontrivial molecule and show that our conclusions carry over to larger molecules, exemplified by ABCU (C10H19N). The molecules are probed with IR and UV pulses whose intensities are not strong enough to significantly ionize the system. By comparing the evolution of the time-dependent field-free electronic dipole moment, as well as its Fourier power spectrum, we show that TDDFT performs qualitatively well in most cases. Contrary to previous studies, we find almost no changes in the TDDFT excitation energies when excited states are populated. Transitions between states of different symmetries are induced using pulses polarized in different directions. We observe that the performance of TDDFT does not depend on the symmetry of the states involved in the transition.

  20. COMPUTING

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  1. A brain-computer interface to support functional recovery

    Kjaer, Troels W; Sørensen, Helge Bjarup Dissing

    2013-01-01

    Brain-computer interfaces (BCI) register changes in brain activity and utilize this to control computers. The most widely used method is based on registration of electrical signals from the cerebral cortex using extracranially placed electrodes, also called electroencephalography (EEG). The features extracted from the EEG may, besides controlling the computer, also be fed back to the patient, for instance as visual input. This facilitates a learning process. BCI allow us to utilize brain activity in the rehabilitation of patients after stroke. The activity of the cerebral cortex varies with the type of movement we imagine, and by letting the patient know the type of brain activity best associated with the intended movement, the rehabilitation process may be faster and more efficient. The focus of BCI utilization in medicine has changed in recent years. While we previously focused on devices facilitating...

  2. The analysis of gastric function using computational techniques

    Young, Paul

    2002-01-01

    The work presented in this thesis was carried out at the Magnetic Resonance Centre, Department of Physics and Astronomy, University of Nottingham, between October 1996 and June 2000. This thesis describes the application of computerised techniques to the analysis of gastric function, in relation to Magnetic Resonance Imaging data. The implementation of a computer program enabling the measurement of motility in the lower stomach is described in Chapter 6. This method allowed the dimensional reduction of multi-slice image data sets into a 'Motility Plot', from which the motility parameters - the frequency, velocity and depth of contractions - could be measured. The technique was found to be simple, accurate and involved substantial time savings, when compared to manual analysis. The program was subsequently used in the measurement of motility in three separate studies, described in Chapter 7. In Study 1, four different meal types of varying viscosity and nutrient value were consumed by 12 volunteers. The aim of the study was (i) to assess the feasibility of using the motility program in a volunteer study and (ii) to determine the effects of the meals on motility. The results showed that the parameters were remarkably consistent between the 4 meals. However, for each meal, velocity and percentage occlusion were found to increase as contractions propagated along the antrum. The first clinical application of the motility program was carried out in Study 2. Motility from three patients was measured, after they had been referred to the Magnetic Resonance Centre with gastric problems. The results showed that one of the patients displayed an irregular motility, compared to the results of the volunteer study. This result had not been observed using other investigative techniques. In Study 3, motility was measured in Low Viscosity and High Viscosity liquid/solid meals, with the solid particulate consisting of agar beads of varying breakdown strength. 
The results showed that

  3. Spaceborne computer executive routine functional design specification. Volume 2: Computer executive design for space station/base

    Kennedy, J. R.; Fitzpatrick, W. S.

    1971-01-01

    The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.

  4. Computer-controlled mechanical lung model for application in pulmonary function studies

    A.F.M. Verbraak (Anton); J.E.W. Beneken; J.M. Bogaard (Jan); A. Versprille (Adrian)

    1995-01-01

    A computer-controlled mechanical lung model has been developed for testing lung function equipment, validation of computer programs and simulation of impaired pulmonary mechanics. The construction, function and some applications are described. The physical model is constructed from two

  5. Functional requirements for design of the Space Ultrareliable Modular Computer (SUMC) system simulator

    Curran, R. T.; Hornfeck, W. A.

    1972-01-01

    The functional requirements for the design of an interpretive simulator for the space ultrareliable modular computer (SUMC) are presented. A review of applicable existing computer simulations is included along with constraints on the SUMC simulator functional design. Input requirements, output requirements, and language requirements for the simulator are discussed in terms of a SUMC configuration which may vary according to the application.

  6. Functional Automata - Formal Languages for Computer Science Students

    Marco T. Morazán

    2014-12-01

    Full Text Available An introductory formal languages course exposes advanced undergraduate and early graduate students to automata theory, grammars, constructive proofs, computability, and decidability. Programming students find these topics to be challenging or, in many cases, overwhelming and on the fringe of Computer Science. The existence of this perception is not completely absurd, since students are asked to design and prove correct machines and grammars without being able to experiment or get immediate feedback, which is essential in a learning context. This article puts forth the thesis that the theory of computation ought to be taught using tools for actually building computations. It describes the implementation and the classroom use of a library, FSM, designed to provide students with the opportunity to experiment with and test their designs using state machines, grammars, and regular expressions. Students are able to perform random testing before proceeding with a formal proof of correctness. That is, students can test their designs much like they do in a programming course. In addition, the library easily allows students to implement the algorithms they develop as part of the constructive proofs they write. Providing students with this ability ought to be a new trend in the formal languages classroom.
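The kind of experiment-before-proof workflow the record advocates can be sketched with a plain deterministic finite automaton. This is not FSM's actual API (FSM is a Racket library); the sketch below only mirrors the idea of building and testing a machine before proving it correct:

```python
# A DFA as data plus a runner: the machine below accepts strings over {a, b}
# containing an even number of a's. Representation is illustrative.

def accepts(dfa, word):
    """dfa: (start, accept_set, delta) with delta: (state, symbol) -> state."""
    start, accept, delta = dfa
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accept

even_as = ("even", {"even"},
           {("even", "a"): "odd", ("odd", "a"): "even",
            ("even", "b"): "even", ("odd", "b"): "odd"})
```

A student would first random-test such a machine against the language predicate (here, word.count("a") % 2 == 0) and only then attempt an invariant-based proof of correctness.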

  7. Design, functioning and possible applications of process computers

    Kussl, V.

    1975-01-01

    Process computers are useful as automation instruments a) when large numbers of data are processed in analog or digital form, b) for low data flow (data rate), and c) when data must be stored over short or long periods of time. (orig./AK) [de

  8. EDF: Computing electron number probability distribution functions in real space from molecular wave functions

    Francisco, E.; Pendás, A. Martín; Blanco, M. A.

    2008-04-01

    Given an N-electron molecule and an exhaustive partition of the real space (R³) into m arbitrary regions Ω1, Ω2, …, Ωm (Ω1 ∪ Ω2 ∪ … ∪ Ωm = R³), the edf program computes all the probabilities P(n1, n2, …, nm) of having exactly n1 electrons in Ω1, n2 electrons in Ω2, …, and nm electrons (n1+n2+⋯+nm = N) in Ωm. Each Ωi may correspond to a single basin (atomic domain) or several such basins (functional group). In the latter case, each atomic domain must belong to a single Ωi. The program can manage both single- and multi-determinant wave functions which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinant wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals over all the atomic domains between the molecular orbitals (MO). After the P(n1, n2, …, nm) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n1, n2, …, nm) probabilities into α and β spin components. Program summary: Program title: edf Catalogue identifier: AEAJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5387 No. of bytes in distributed program, including test data, etc.: 52 381 Distribution format: tar.gz Programming language: Fortran 77 Computer

  9. Computer-assisted modeling: Contributions of computational approaches to elucidating macromolecular structure and function: Final report

    Walton, S.

    1987-01-01

    The Committee, asked to provide an assessment of computer-assisted modeling of molecular structure, has highlighted the signal successes and the significant limitations for a broad panoply of technologies and has projected plausible paths of development over the next decade. As with any assessment of such scope, differing opinions about present or future prospects were expressed. The conclusions and recommendations, however, represent a consensus of our views of the present status of computational efforts in this field

  10. A FUNCTIONAL MODEL OF COMPUTER-ORIENTED LEARNING ENVIRONMENT OF A POST-DEGREE PEDAGOGICAL EDUCATION

    Kateryna R. Kolos

    2014-06-01

    Full Text Available The study substantiates the need for a systematic study of the functioning of a computer-oriented learning environment of post-degree pedagogical education; defines the term “functional model of a computer-oriented learning environment of post-degree pedagogical education”; and builds such a functional model in accordance with the functions of the business, information and communication technology, academic, and administrative staff and the peculiarities of teacher training courses.

  11. Comparação de duas metodologias de amostragem atmosférica com ferramenta estatística não paramétrica Comparison of two atmospheric sampling methodologies with non-parametric statistical tools

    Maria João Nunes

    2005-03-01

    Full Text Available In atmospheric aerosol sampling, it is inevitable that the air that carries particles is in motion, as a result of both externally driven wind and the sucking action of the sampler itself. High or low air flow sampling speeds may lead to significant particle size bias. The objective of this work is the validation of measurements enabling the comparison of species concentrations from both air flow sampling techniques. The presence of several outliers and an increase of residuals with concentration become obvious, requiring non-parametric methods, which are recommended for handling data that may not be normally distributed. In this way, conversion factors are obtained for each of the various species under study using Kendall regression.
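    The Kendall regression used to obtain the conversion factors is non-parametric: in its Theil–Sen form, the slope is the median of all pairwise slopes, which tolerates the kind of outliers the abstract mentions. A minimal sketch on invented paired data (not the study's measurements):

    ```python
    from itertools import combinations
    from statistics import median

    def theil_sen(x, y):
        """Kendall/Theil-Sen line: slope = median of pairwise slopes,
        intercept = median of per-point offsets (robust to outliers)."""
        slopes = [(y2 - y1) / (x2 - x1)
                  for (x1, y1), (x2, y2) in combinations(zip(x, y), 2)
                  if x2 != x1]
        slope = median(slopes)
        intercept = median(yi - slope * xi for xi, yi in zip(x, y))
        return slope, intercept

    # Invented paired concentrations; one gross outlier barely moves the fit.
    x = list(range(1, 11))
    y = [2.0 * xi for xi in x]
    y[-1] = 1000.0                      # outlier in one sampler's readings
    slope, intercept = theil_sen(x, y)  # -> (2.0, 0.0)
    ```

    An ordinary least-squares fit on the same data would be dragged far off by the single outlier; the median-based estimator recovers the underlying conversion factor of 2.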

  12. A brain-computer interface to support functional recovery.

    Kjaer, Troels W; Sørensen, Helge B

    2013-01-01

    Brain-computer interfaces (BCIs) register changes in brain activity and utilize these to control computers. The most widely used method is based on the registration of electrical signals from the cerebral cortex using extracranially placed electrodes, also called electroencephalography (EEG). The features extracted from the EEG may, besides controlling the computer, also be fed back to the patient, for instance as visual input. This facilitates a learning process. BCIs allow us to utilize brain activity in the rehabilitation of patients after stroke. The activity of the cerebral cortex varies with the type of movement we imagine, and by letting the patient know the type of brain activity best associated with the intended movement, the rehabilitation process may be faster and more efficient. The focus of BCI utilization in medicine has changed in recent years. While we previously focused on devices facilitating communication in the rather few patients with locked-in syndrome, much interest is now devoted to the therapeutic use of BCI in rehabilitation. For this latter group of patients, the device is not intended to be a lifelong assistive companion but rather a 'teacher' during the rehabilitation period. Copyright © 2013 S. Karger AG, Basel.

  13. Computational study on the functionalization of BNNC with pyrrole molecule

    Payvand, Akram; Tavangar, Zahra

    2018-05-01

    The functionalization of the boron nitride nanocone (BNNC) by a pyrrole molecule was studied using the B3LYP/6-311+G(d) level of theory. The reaction was studied via three routes in different layers of the nanocone: Diels-Alder cycloaddition, quartet cycloaddition, and the reaction of the nitrogen atom of the pyrrole molecule with a boron or nitrogen atom of the BNNC. Thermodynamic quantities, chemical hardness, chemical potential, and the electrophilicity index of the functionalized BNNC were studied. The results show that the tip of the nanocone has a higher tendency to participate in the reaction, and the most favorable product of the reaction between BNNC and pyrrole is produced from the reaction of the N atom of pyrrole with a B atom of BNNC. The reaction decreases the energy gap value, which leads to increased reactivity and conductivity of the functionalized nanocone. The calculated NICS values confirm the aromaticity of the pristine nanocone as well as of the functionalized nanocone.

  14. Computer Corner: Spreadsheets, Power Series, Generating Functions, and Integers.

    Snow, Donald R.

    1989-01-01

    Implements a table algorithm on a spreadsheet program and obtains functions for several number sequences such as the Fibonacci and Catalan numbers. Considers other applications of the table algorithm to integers represented in various number bases. (YP)

  15. On the Hierarchy of Functioning Rules in Distributed Computing

    Bui , Alain; Bui , Marc; Lavault , Christian

    1999-01-01

    In previous papers, we used a Markovian model to determine the optimal functioning rules of a distributed system in various settings. Searching for optimal functioning rules amounts to solving an optimization problem under constraints. The hierarchy of solutions arising from the above problem is called the “first order hierarchy”, and may possibly yield equivalent solutions. The present paper emphasizes a specific technique for deciding between two equivalent solutions, whic...

  16. Individual renal function study using dynamic computed tomography

    Fukuda, Yutaka; Kiya, Keiichi; Suzuki, Yoshiharu

    1990-01-01

    Dynamic CT scans of individual kidneys were obtained after an intravenous bolus injection of contrast agent. Time-density curves measured from the renal cortex, medulla and pelvis revealed the changes in density produced by the contrast agent, reflecting the differential phases of renal function. Renal cortical density increased rapidly after bolus administration, and then renal medullary and pelvic density increased continuously. In analyzing the time-density curves, the cortico-medullary junction time, which is the time when the cortical and medullary curves cross, was 57±8 seconds in patients with normal renal function. The cortico-medullary junction time was delayed in patients with a decreased glomerular filtration rate. The cortico-pelvic junction time, which is the time when the cortical and pelvic curves cross, was 104±33 seconds in patients with normal renal function. The cortico-pelvic junction time was delayed in patients with reduced urinary concentrating capacity. In patients with unilateral renal agenesis and patients who were treated surgically by ureteral splits, the relationship between individual renal function and these junction times was examined. The study found significant inverse correlations between the C-M junction time and unilateral GFR, and between the C-P junction time and urinary concentrating capacity. These studies indicate that dynamic CT scanning is an effective way to monitor and evaluate individual renal function. (author)

  17. Bread dough rheology: Computing with a damage function model

    Tanner, Roger I.; Qi, Fuzhong; Dai, Shaocong

    2015-01-01

    We describe an improved damage function model for bread dough rheology. The model has relatively few parameters, all of which can easily be found from simple experiments. Small deformations in the linear region are described by a gel-like power-law memory function. A set of large non-reversing deformations - stress relaxation after a step of shear, steady shearing and elongation beginning from rest, and biaxial stretching - is used to test the model. With the introduction of a revised strain measure which includes a Mooney-Rivlin term, all of these motions can be well described by the damage function described in previous papers. For reversing step strains, larger-amplitude oscillatory shearing, and recoil, reasonable predictions have been found. The numerical methods used are discussed and we give some examples.

  18. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.

    Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia

    2016-03-08

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.
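    For the single-determinant special case, the many-electron overlap reduces to the determinant of the occupied-MO overlap block; a minimal sketch of that building block (variable names invented here; the paper's algorithm additionally reuses recurring intermediates across CI expansions, which this sketch does not attempt):

    ```python
    import numpy as np

    def determinant_overlap(C_a, C_b, S_ao):
        """<Phi_A|Phi_B> for two Slater determinants.

        C_a, C_b: occupied MO coefficient matrices (n_AO x n_occ) for the
        two wave functions, expressed in a common AO basis with AO overlap
        matrix S_ao. The N-electron overlap is det of the occ x occ MO
        overlap block."""
        S_mo = C_a.T @ S_ao @ C_b        # n_occ x n_occ MO overlap block
        return np.linalg.det(S_mo)

    # Sanity check: identical orthonormal orbitals give unit overlap.
    C = np.eye(4)[:, :2]                 # two occupied MOs in a 4-AO basis
    S = np.eye(4)                        # orthonormal AO basis
    ov = determinant_overlap(C, C, S)    # -> 1.0
    ```

    For two geometries or two levels of theory, `C_a` and `C_b` differ and the determinant quantifies how much the two many-electron states resemble each other, which is the quantity needed for the nonadiabatic coupling terms mentioned in the abstract.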

  19. Computation of load functions for different types of aircraft

    Siefert, Alexander; Henkel, Fritz-Otto

    2013-01-01

    In the presentation, the influence of different parameters on the F(t) function was shown. An increase of the impact velocity leads, for all aircraft, to a higher maximal load value and a reduced impact time; due to the structural setup of the aircraft, the intensity of these effects differs. Comparing the F(t) functions of the A320, A340 and A380 for impact velocities of 100 and 175 m/s, no constant relation between them can be determined. The variation of the flight direction with respect to the vertical axis shows a great influence on the F(t) function; an approximation by the cosine is not correct, especially for larger rotations. The influence of rotation about the horizontal axis can be neglected. Finally, the SPH method was applied for the modelling of the fuel. The comparison with the discrete modelling approach was carried out for the Phantom F4, and no significant influence on the F(t) function was observed. For the evaluation of this modelling approach with respect to local damage, the loaded area must be determined in further investigations.
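    Aircraft load-time functions of this kind are conventionally computed with a Riera-type model: the force on the target is the crushing force plus the momentum flux of the impinging mass, F(t) = Pc(x) + mu(x)·v(t)². A minimal sketch under invented uniform properties (the mass distribution `mu`, crushing force `Pc` and geometry below are placeholders, not data for the aircraft studied here):

    ```python
    # Minimal Riera-type load-function sketch with a uniform fuselage.
    def riera_load(mu, Pc, L, v0, dt):
        """Return a list of (t, F) samples while the not-yet-crushed
        remainder of the fuselage decelerates under the crushing force."""
        x, v, t, history = 0.0, v0, 0.0, []
        while v > 0.0 and x < L:
            F = Pc + mu * v * v          # crushing force + momentum-flux term
            history.append((t, F))
            M = mu * (L - x)             # mass of the still-rigid remainder
            v -= Pc / M * dt             # Newton's law for the rigid part
            x += v * dt                  # crushed length grows with velocity
            t += dt
        return history

    dt = 1e-5
    hist = riera_load(mu=1000.0, Pc=1.0e7, L=40.0, v0=100.0, dt=dt)
    impulse = dt * sum(F for _, F in hist)
    # Momentum check: the impulse on the target ~ mu * L * v0 = 4e6 N*s,
    # since all of the aircraft's momentum is transferred to the wall.
    ```

    The momentum check is a useful self-test for any F(t) implementation: whatever the parameter variation (velocity, direction, fuel model), the time integral of F must equal the initial momentum of the stopped mass.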

  20. Computational approaches to identify functional genetic variants in cancer genomes

    Gonzalez-Perez, Abel; Mustonen, Ville; Reva, Boris

    2013-01-01

    The International Cancer Genome Consortium (ICGC) aims to catalog genomic abnormalities in tumors from 50 different cancer types. Genome sequencing reveals hundreds to thousands of somatic mutations in each tumor, but only a minority of these drive tumor progression. We present the results of discussions within the ICGC on how to address the challenge of identifying mutations that contribute to oncogenesis, tumor maintenance or response to therapy, and recommend computational techniques to annotate somatic variants and predict their impact on cancer phenotype.

  1. Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses

    Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo

    2018-01-01

    Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…

  2. Spaceborne computer executive routine functional design specification. Volume 1: Functional design of a flight computer executive program for the reusable shuttle

    Curran, R. T.

    1971-01-01

    A flight computer functional executive design for the reusable shuttle is presented. The design is given in the form of functional flowcharts and prose description. Techniques utilized in the regulation of process flow to accomplish activation, resource allocation, suspension, termination, and error masking based on process primitives are considered. Preliminary estimates of main storage utilization by the Executive are furnished. Conclusions and recommendations for timely, effective software-hardware integration in the reusable shuttle avionics system are proposed.

  3. Memory intensive functional architecture for distributed computer control systems

    Dimmler, D.G.

    1983-10-01

    A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) the implementation of some key hardware and software structures and one system implementation, a system for performing control and data acquisition for a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.

  4. A functional analytic approach to computer-interactive mathematics.

    Ninness, Chris; Rumph, Robin; McCuller, Glen; Harrison, Carol; Ford, Angela M; Ninness, Sharon K

    2005-01-01

    Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on particular formula-to-formula and formula-to-graph relations as these formulas pertain to reflections and vertical and horizontal shifts. In training A-B, standard formulas served as samples and factored formulas served as comparisons. In training B-C, factored formulas served as samples and graphs served as comparisons. Subsequently, the program assessed for mutually entailed B-A and C-B relations as well as combinatorially entailed C-A and A-C relations. After all participants demonstrated mutual entailment and combinatorial entailment, we employed a test of novel relations to assess 40 different and complex variations of the original training formulas and their respective graphs. Six of 10 participants who completed training demonstrated perfect or near-perfect performance in identifying novel formula-to-graph relations. Three of the 4 participants who made more than three incorrect responses during the assessment of novel relations showed some commonality among their error patterns. Derived transfer of stimulus control using mathematical relations is discussed.

  5. Performance of a computer-based assessment of cognitive function measures in two cohorts of seniors

    Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials; however, its performance in these settings has not been systematically evaluated. The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool f...

  6. A Functional Specification for a Programming Language for Computer Aided Learning Applications.

    National Research Council of Canada, Ottawa (Ontario).

    In 1972 there were at least six different course authoring languages in use in Canada with little exchange of course materials between Computer Assisted Learning (CAL) centers. In order to improve facilities for producing "transportable" computer based course materials, a working panel undertook the definition of functional requirements of a user…

  7. Logical and physical resource management in the common node of a distributed function laboratory computer network

    Stubblefield, F.W.

    1976-01-01

    A scheme for managing resources required for transaction processing in the common node of a distributed function computer system has been given. The scheme has been found to be satisfactory for all common node services provided so far

  8. Computer Processing and Display of Positron Scintigrams and Dynamic Function Curves

    Wilensky, S.; Ashare, A. B.; Pizer, S. M.; Hoop, B. Jr.; Brownell, G. L. [Massachusetts General Hospital, Boston, MA (United States)

    1969-01-15

    A computer processing and display system for handling radioisotope data is described. The system has been used to upgrade and display brain scans and to process dynamic function curves. The hardware and software are described, and results are presented. (author)

  9. Functions and Requirements and Specifications for Replacement of the Computer Automated Surveillance System (CASS)

    SCAIEF, C.C.

    1999-01-01

    This functions, requirements and specifications document defines the baseline requirements and criteria for the design, purchase, fabrication, construction, installation, and operation of the system to replace the Computer Automated Surveillance System (CASS) alarm monitoring system.

  10. Functional diagnostics of the cervical spine by using computer tomography

    Dvorak, J.; Hayek, J.; Grob, D.

    1988-01-01

    35 healthy adults and 137 patients after cervical spine injury were examined by functional CT. The range of axial rotation at the level occiput/atlas, atlas/axis and the segment below were measured in all subjects. A rotation occiput/atlas of more than 7°, and C1/C2 more than 54°, could refer to segmental hypermobility, a rotation at the segment C1/C2 less than 29° to hypomobility. According to the postulated normal values based upon a 98% confidence level, out of 137 patients examined after cervical spine injury and with therapy-resistant neck pain, 45 showed signs of segmental hypermobility of the upper cervical spine, 17 showed hyper- or hypomobility at different levels, 10 patients presented segmental hypomobility at C1/C2 level alone. In all patients, according to the clinical assessment, functional pathology was suspected in the upper cervical spine. Surgical correction of rotatory instability should be considered as a possible therapeutic procedure after successful diagnostic stabilisation of the cervical spine by minerva cast. (orig.)

  11. Functional diagnostics of the cervical spine by using computer tomography

    Dvorak, J; Hayek, J; Grob, D; Penning, L; Panjabi, M M; Zehnder, R

    1988-04-01

    35 healthy adults and 137 patients after cervical spine injury were examined by functional CT. The range of axial rotation at the level occiput/atlas, atlas/axis and the segment below were measured in all subjects. A rotation occiput/atlas of more than 7°, and C1/C2 more than 54°, could refer to segmental hypermobility, a rotation at the segment C1/C2 less than 29° to hypomobility. According to the postulated normal values based upon a 98% confidence level, out of 137 patients examined after cervical spine injury and with therapy-resistant neck pain, 45 showed signs of segmental hypermobility of the upper cervical spine, 17 showed hyper- or hypomobility at different levels, 10 patients presented segmental hypomobility at C1/C2 level alone. In all patients, according to the clinical assessment, functional pathology was suspected in the upper cervical spine. Surgical correction of rotatory instability should be considered as a possible therapeutic procedure after successful diagnostic stabilisation of the cervical spine by minerva cast.

  12. Algorithms: economical computation of functions of real matrices

    Weiss, Z.

    1991-01-01

    An algorithm is presented which economizes on the calculation of F(A), where A is a real matrix and F(x) a real-valued function of x, using spectral analysis. Assuming the availability of software for the calculation of the complete set of eigenvalues and eigenvectors of A, it is shown that the complex matrix arithmetic involved in the subsequent operations leading from A to F(A) can be reduced to a size comparable with the analogous problem in real matrix arithmetic. Savings in CPU time and storage have been achieved by explicitly utilizing the property that the complex eigenvalues of a real matrix appear in complex-conjugate pairs. (author)
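    The spectral idea described here, computing F(A) from the eigendecomposition and exploiting the fact that complex eigenvalues of a real matrix come in conjugate pairs so the result is real up to rounding, can be sketched as follows (a plain illustration of the principle, not the paper's storage-economizing layout):

    ```python
    import numpy as np

    def matrix_function(A, f):
        """F(A) = V f(D) V^{-1} via the (complex) eigendecomposition of a
        real matrix A. Because complex eigenvalues occur in conjugate
        pairs, the imaginary parts cancel and may be discarded."""
        w, V = np.linalg.eig(A)                       # complex in general
        F = V @ np.diag(f(w)) @ np.linalg.inv(V)
        return F.real                                 # conjugate pairs cancel

    # exp of a rotation generator reproduces the rotation matrix:
    A = np.array([[0.0, -1.0], [1.0, 0.0]])           # eigenvalues +/- i
    R = matrix_function(A, np.exp)
    # R ~ [[cos 1, -sin 1], [sin 1, cos 1]]
    ```

    The paper's economization goes further: by pairing each conjugate eigenvalue with its partner explicitly, half of the complex arithmetic and storage in the sketch above can be avoided.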

  13. Computational complexity of time-dependent density functional theory

    Whitfield, J D; Yung, M-H; Tempel, D G; Aspuru-Guzik, A; Boixo, S

    2014-01-01

    Time-dependent density functional theory (TDDFT) is rapidly emerging as a premier method for solving dynamical many-body problems in physics and chemistry. The mathematical foundations of TDDFT are established through the formal existence of a fictitious non-interacting system (known as the Kohn–Sham system), which can reproduce the one-electron reduced probability density of the actual system. We build upon these works and show that on the interior of the domain of existence, the Kohn–Sham system can be efficiently obtained given the time-dependent density. We introduce a V-representability parameter which diverges at the boundary of the existence domain and serves to quantify the numerical difficulty of constructing the Kohn–Sham potential. For bounded values of V-representability, we present a polynomial time quantum algorithm to generate the time-dependent Kohn–Sham potential with controllable error bounds. (paper)

  14. Target localization on standard axial images in computed tomography (CT) stereotaxis for functional neurosurgery - a technical note

    Patil, A.-A.

    1986-01-01

    A simple technique for marking a functional neurosurgery target on a computed tomography (CT) axial image is described. This permits the use of standard axial images for computed tomography (CT) stereotaxis in functional neurosurgery. (Author)

  15. A novel method for non-parametric identification of nonlinear restoring forces in nonlinear vibrations from noisy response data: A conservative system

    Jang, T. S.; Kwon, S. H.; Han, S. L.

    2009-01-01

    A novel procedure is proposed to identify the functional form of nonlinear restoring forces in the nonlinear oscillatory motion of a conservative system. Although the identification problem has a unique solution, the formulation results in a Volterra-type integral equation of the 'first' kind, whose solution lacks stability precisely because the integral equation is of the first kind: the problem at hand is ill-posed. Inevitable small errors during the identification procedure can make the prediction of nonlinear restoring forces useless. We overcome the difficulty by using the stabilization technique of Landweber's regularization in this study. The capability of the proposed procedure is investigated through numerical examples.
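    Landweber regularization stabilizes a first-kind equation A x = b by the fixed-point iteration x_{k+1} = x_k + ω Aᵀ(b − A x_k) with ω < 2/‖A‖², stopping before noise is amplified. A minimal sketch on an invented kernel (k ≡ 1, i.e., plain cumulative integration with exact data, not the paper's restoring-force problem):

    ```python
    import numpy as np

    # Discretized first-kind Volterra equation: integral_0^t x(s) ds = b(t),
    # rectangle rule on [0, 1]. Kernel and data are invented for illustration.
    n, h = 50, 1.0 / 50
    A = h * np.tril(np.ones((n, n)))         # cumulative-integration operator
    t = h * np.arange(1, n + 1)
    b = t                                    # exact data for true solution x = 1

    x = np.zeros(n)
    omega = 1.0                              # step size, well below 2 / ||A||^2
    for _ in range(1000):
        x = x + omega * A.T @ (b - A @ x)    # Landweber iteration
    residual = np.linalg.norm(b - A @ x)
    ```

    Unlike naive inversion of A, which amplifies any perturbation of b, the iteration reduces the residual gradually; with noisy data the iteration count itself acts as the regularization parameter, which is why the stopping rule matters in the paper's setting.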

  16. Business Process Quality Computation : Computing Non-Functional Requirements to Improve Business Processes

    Heidari, F.

    2015-01-01

    Business process modelling is an important part of system design. When designing or redesigning a business process, stakeholders specify, negotiate, and agree on business requirements to be satisfied, including non-functional requirements that concern the quality of the business process. This thesis

  17. Renormalization group improved computation of correlation functions in theories with nontrivial phase diagram

    Codello, Alessandro; Tonero, Alberto

    2016-01-01

    We present a simple and consistent way to compute correlation functions in interacting theories with a nontrivial phase diagram. As an example we show how to consistently compute the four-point function in three-dimensional Z2 scalar theories. The idea is to perform the path integral by weighting the momentum modes that contribute to it according to their renormalization group (RG) relevance, i.e., we weight each mode according to the value of the running couplings at that scale. In this way, we are able to encode in a loop computation the information regarding the RG trajectory along which we...

  18. Construction of renormalized coefficient functions of the Feynman diagrams by means of a computer

    Tarasov, O.V.

    1978-01-01

    An algorithm and a short description of a computer program, written in SCHOONSCHIP, are given. The program constructs the integrands of renormalized coefficient functions of Feynman diagrams in scalar theories for an arbitrary subtraction point. For a given Feynman graph, the program fully realizes the R-operation of Bogolubov-Parasjuk and gives the result as an integral over Feynman parameters. With the help of the program, the construction time of a whole renormalized coefficient function is approximately 30 s on the CDC-6500 computer.

  19. A non-parametric, microdosimetric-based approach to the evaluation of the biological effects of low doses of ionizing radiation

    Varma, M.N.; Zaider, M.

    1992-01-01

    A microdosimetric-based specific quality factor, q(y), is determined for eight sets of experimental data on cellular inactivation, mutation, chromosome aberration and neoplastic transformation. Bias-free Bayesian and maximum entropy approaches were used. A comparison of the q(y) curves thus determined reveals a surprising degree of uniformity. In our view this is prima facie evidence that the spatial pattern of microscopic energy deposition - rather than the specific end point or cellular system - is the quantity which determines the dependence of the cellular response on the quality of the ionizing radiation. For further applications of this approach to radiation protection, experimental microdosimetric spectra are urgently needed. Further improvement in the quality of the q(y) functions could provide important clues on fundamental biophysical mechanisms. (author)

  20. COMPUTING

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  1. PERFORMANCE OF A COMPUTER-BASED ASSESSMENT OF COGNITIVE FUNCTION MEASURES IN TWO COHORTS OF SENIORS

    Espeland, Mark A.; Katula, Jeffrey A.; Rushing, Julia; Kramer, Arthur F.; Jennings, Janine M.; Sink, Kaycee M.; Nadkarni, Neelesh K.; Reid, Kieran F.; Castro, Cynthia M.; Church, Timothy; Kerwin, Diana R.; Williamson, Jeff D.; Marottoli, Richard A.; Rushing, Scott; Marsiske, Michael; Rapp, Stephen R.

    2013-01-01

    Background Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials; however, its performance in these settings has not been systematically evaluated. Design The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool for assessing memory performance and executive functioning. The Lifestyle Interventions and Independence for Seniors (LIFE) investigators incorporated this battery in a full-scale multicenter clinical trial (N=1635). We describe the relationships that test scores have with those from interviewer-administered cognitive function tests and risk factors for cognitive deficits, and describe performance measures (completeness, intra-class correlations). Results Computer-based assessments of cognitive function had consistent relationships across the pilot and full-scale trial cohorts with interviewer-administered assessments of cognitive function, age, and a measure of physical function. In the LIFE cohort, their external validity was further demonstrated by associations with other risk factors for cognitive dysfunction: education, hypertension, diabetes, and physical function. Acceptable levels of data completeness (>83%) were achieved on all computer-based measures; however, rates of missing data were higher among older participants (odds ratio=1.06 for each additional year; p<0.001) and those who reported no current computer use (odds ratio=2.71; p<0.001). Intra-class correlations among clinics were at least as low (ICC≤0.013) as for interviewer measures (ICC≤0.023), reflecting good standardization. All cognitive measures loaded onto the first principal component (global cognitive function), which accounted for 40% of the overall variance. Conclusion Our results support the use of computer-based tools for assessing cognitive function in multicenter clinical trials of older individuals. PMID:23589390

  2. Computing the Kummer function $U(a,b,z)$ for small values of the arguments

    A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)

    2015-01-01

    We describe methods for computing the Kummer function $U(a,b,z)$ for small values of $z$, with special attention to small values of $b$. For these values of $b$ the connection formula that represents $U(a,b,z)$ as a linear combination of two ${}_1F_1$-functions needs a limiting

  3. Systemic functional grammar in natural language generation linguistic description and computational representation

    Teich, Elke

    1999-01-01

    This volume deals with the computational application of systemic functional grammar (SFG) for natural language generation. In particular, it describes the implementation of a fragment of the grammar of German in the computational framework of KOMET-PENMAN for multilingual generation. The text also presents a specification of explicit well-formedness constraints on syntagmatic structure which are defined in the form of typed feature structures. It thus achieves a model of systemic functional grammar that unites both the strengths of systemics, such as stratification, functional diversification

  4. Studies on the zeros of Bessel functions and methods for their computation

    Kerimov, M. K.

    2014-09-01

    The zeros of Bessel functions play an important role in computational mathematics, mathematical physics, and other areas of natural sciences. Studies addressing these zeros (their properties, computational methods) can be found in various sources. This paper offers a detailed overview of the results concerning the real zeros of the Bessel functions of the first and second kinds and general cylinder functions. The author intends to publish several overviews on this subject. In this first publication, works dealing with real zeros are analyzed. Primary emphasis is placed on classical results, which are still important. Some of the most recent publications are also discussed.

  5. Functioning strategy study on control systems of large physical installations used with a digital computer

    Bel'man, L.B.; Lavrikov, S.A.; Lenskij, O.D.

    1975-01-01

    Criteria are proposed for evaluating the efficiency of control system functioning for large physical installations operated by means of a control computer: the object utilization factor and the computer load factor. Different strategies of control system functioning are described and comparatively analyzed. The choice of such important parameters as the sampling time and the parameter correction time is discussed. A single factor for evaluating the system functioning efficiency is introduced and its dependence on the sampling interval value is given. Using the attached diagrams, it is easy to find the optimum value of the sampling interval and the corresponding maximum value of the single efficiency factor proposed

  6. Heuristic lipophilicity potential for computer-aided rational drug design: Optimizations of screening functions and parameters

    Du, Qishi; Mezey, Paul G.

    1998-09-01

    In this research we test and compare three possible atom-based screening functions used in the heuristic molecular lipophilicity potential (HMLP). Screening function 1 is a power distance-dependent function, b_i / |R_i - r|^γ; screening function 2 is an exponential distance-dependent function, b_i exp(-|R_i - r| / d_0); and screening function 3 is a weighted distance-dependent function, sign(b_i) exp(ξ |R_i - r| / |b_i|). For every screening function, the parameters (γ, d_0, and ξ) are optimized using 41 common organic molecules of 4 types of compounds: aliphatic alcohols, aliphatic carboxylic acids, aliphatic amines, and aliphatic alkanes. The results of the calculations show that screening function 3 cannot give chemically reasonable results, whereas both the power screening function and the exponential screening function give chemically satisfactory results. There are two notable differences between screening functions 1 and 2. First, the exponential screening function has larger values at short distance than the power screening function, so screening function 2 involves more influence from the nearest neighbors than screening function 1. Second, the power screening function has larger values at long distance than the exponential screening function, so screening function 1 is affected by atoms at long distance more than screening function 2. For screening function 2, the suitable range of the parameter d_0 is 1.5 < d_0 < 3.0, and d_0 = 2.0 is recommended. The HMLP developed in this research provides a potential tool for computer-aided three-dimensional drug design.
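
    The three screening functions can be evaluated directly; the following is a minimal sketch with the formulas as reconstructed from the abstract and illustrative parameter values (γ = 2, d_0 = 2, ξ = 1), not the authors' code.

```python
import numpy as np

# Minimal sketch of the three HMLP atom-based screening functions as
# reconstructed from the abstract. b is the atomic lipophilicity parameter
# b_i, R the atom position R_i, r the probe point; parameter defaults are
# illustrative assumptions.

def screen_power(b, R, r, gamma=2.0):
    """Screening function 1: b_i / |R_i - r|^gamma."""
    return b / np.linalg.norm(R - r) ** gamma

def screen_exp(b, R, r, d0=2.0):
    """Screening function 2: b_i * exp(-|R_i - r| / d0)."""
    return b * np.exp(-np.linalg.norm(R - r) / d0)

def screen_weighted(b, R, r, xi=1.0):
    """Screening function 3: sign(b_i) * exp(xi * |R_i - r| / |b_i|)."""
    d = np.linalg.norm(R - r)
    return np.sign(b) * np.exp(xi * d / abs(b))
```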

  7. COMPUTING

    I. Fisk

    2012-01-01

      Introduction: Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office: Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate (Figure 2: number of events per month for 2012). In parallel, large efforts were dedicated to WMAgent development and integrati...

  8. Computation of Galois field expressions for quaternary logic functions on GPUs

    Gajić Dušan B.

    2014-01-01

    Galois field (GF) expressions are polynomials used as representations of multiple-valued logic (MVL) functions. For this purpose, MVL functions are considered as functions defined over a finite (Galois) field of order p, GF(p). The problem of computing these functional expressions has an important role in areas such as digital signal processing and logic design. The time needed for computing GF-expressions increases exponentially with the number of variables in MVL functions and, as a result, often represents a limiting factor in applications. This paper proposes a method for accelerated computation of GF(4)-expressions for quaternary (four-valued) logic functions using graphics processing units (GPUs). The method is based on the spectral interpretation of GF-expressions, permitting the use of fast Fourier transform (FFT)-like algorithms for their computation. These algorithms are then adapted for highly parallel processing on GPUs. The performance of the proposed solutions is compared with reference C/C++ implementations of the same algorithms processed on central processing units (CPUs). Experimental results confirm that the presented approach leads to significant reductions in processing time (up to 10.86 times) when compared to CPU processing. Therefore, the proposed approach widens the set of problem instances which can be efficiently handled in practice. [Projects of the Ministry of Science of the Republic of Serbia, no. ON174026 and no. III44006]
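
    To make the algebra concrete, here is a minimal sketch of GF(4) arithmetic, the field over which the quaternary functions above are defined. The element encoding and multiplication table follow the standard primitive polynomial x^2 + x + 1; this is an illustration, not the paper's GPU code.

```python
# GF(4) elements {0, 1, 2, 3} encode {0, 1, a, a+1}, where a is a root of
# the primitive polynomial x^2 + x + 1, so a^2 = a + 1.

GF4_MUL = [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],  # a*a = a+1, a*(a+1) = 1
    [0, 3, 1, 2],  # (a+1)*(a+1) = a
]

def gf4_add(x, y):
    """Addition in GF(4) is bitwise XOR (the field has characteristic 2)."""
    return x ^ y

def gf4_mul(x, y):
    """Multiplication via lookup table."""
    return GF4_MUL[x][y]
```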

  9. Development and functional demonstration of a wireless intraoral inductive tongue computer interface for severely disabled persons.

    N S Andreasen Struijk, Lotte; Lontis, Eugen R; Gaihede, Michael; Caltenco, Hector A; Lund, Morten Enemark; Schioeler, Henrik; Bentsen, Bo

    2017-08-01

    Individuals with tetraplegia depend on alternative interfaces in order to control computers and other electronic equipment. Current interfaces are often limited in the number of available control commands, and may compromise the social identity of an individual due to their undesirable appearance. The purpose of this study was to implement an alternative computer interface, which was fully embedded into the oral cavity and which provided multiple control commands. The development of a wireless, intraoral, inductive tongue computer is described. The interface encompassed a 10-key keypad area and a mouse pad area. This system was embedded wirelessly into the oral cavity of the user. The functionality of the system was demonstrated in two tetraplegic individuals and two able-bodied individuals. Results: The system was invisible during use and allowed the user to type on a computer using either the keypad area or the mouse pad. The maximal typing rate was 1.8 s for repetitively typing a correct character with the keypad area and 1.4 s for repetitively typing a correct character with the mouse pad area. The results suggest that this inductive tongue computer interface provides an esthetically acceptable and functionally efficient environmental control for a severely disabled user. Implications for rehabilitation: new design, implementation, and detection methods for intraoral assistive devices; demonstration of wireless powering and encapsulation techniques suitable for intraoral embedding of assistive devices; demonstration of the functionality of a rechargeable and fully embedded intraoral tongue-controlled computer input device.

  10. Extended Krylov subspaces approximations of matrix functions. Application to computational electromagnetics

    Druskin, V.; Lee, Ping [Schlumberger-Doll Research, Ridgefield, CT (United States); Knizhnerman, L. [Central Geophysical Expedition, Moscow (Russian Federation)

    1996-12-31

    There is now growing interest in using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. When applying the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using extended Krylov subspaces, generated by the actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
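
    The standard (non-extended) version of this idea can be sketched in a few lines: build an Arnoldi basis of the Krylov subspace and evaluate the matrix function on the small projected matrix. This is a generic illustration assuming exp is the matrix function of interest; the extended variant discussed above would additionally use actions of the matrix inverse.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_action(A, b, m=30):
    """Approximate exp(A) @ b from the m-dimensional Krylov subspace
    span{b, Ab, ..., A^(m-1) b} built by the Arnoldi process."""
    n = b.size
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # invariant subspace found; stop early
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # exp(A) b  ~=  beta * V_m exp(H_m) e_1; expm acts only on the small m x m matrix
    return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]
```

For symmetric A the Arnoldi loop reduces to the Lanczos three-term recurrence, which is where the extended-subspace variant is usually formulated.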

  11. Non-Parametric Model Drift Detection

    2016-07-01

    We evaluate the framework on two tasks in the NLP domain: topic modeling and machine translation. Our main findings are summarized as follows: • We can measure important... [the remainder of this record is fragmentary topic-model output: word groups with total-correlation scores, e.g. "republic, palestinian, israel, arab, israeli, democratic, congo, ..."]

  12. Application of computer-generated functional (parametric) maps in radionuclide renography

    Agress, H. Jr.; Levenson, S.M.; Gelfand, M.J.; Green, M.V.; Bailey, J.J.; Johnston, G.S.

    1975-01-01

    A functional (parametric) map is a single visual display of regional dynamic phenomena which facilitates interpretation of the nature of focal abnormalities in renal function. Methods for producing several kinds of functional maps based on computer calculations of radionuclide scan data are briefly described. Three abnormal cases are presented to illustrate the use of functional maps to separate focal lesions and to specify the dynamic nature of the abnormalities in a way which is difficult to achieve with conventional sequential renal scans and renograms alone

  13. On the Computation and Applications of Bessel Functions with Pure Imaginary Indices

    Matyshev, A. A.; Fohtung, E.

    2009-01-01

    Bessel functions with pure imaginary index (order) play an important role in corpuscular optics where they govern the dynamics of charged particles in isotrajectory quadrupoles. Recently they were found to be of great importance in semiconductor material characterization as they are manifested in the strain state of crystalline material. A new algorithm which can be used for the computation of the normal and modified Bessel functions with pure imaginary index is proposed. The developed algorit...

  14. MRIVIEW: An interactive computational tool for investigation of brain structure and function

    Ranken, D.; George, J.

    1993-01-01

    MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities

  15. Applications of computed nuclear structure functions to inclusive scattering, R-ratios and their moments

    Rinat, A.S.

    2000-01-01

    We discuss applications of previously computed nuclear structure functions (SF) to inclusive cross sections, compare predictions with recent CEBAF data and perform two scaling tests. We mention that the large Q^2 plateau of scaling functions may only in part be due to the asymptotic limit of SF, which prevents the extraction of the nucleon momentum distribution in a model-independent way. We show that there may be sizable discrepancies between computed and semi-heuristic estimates of SF ratios. We compute ratios of moments of nuclear SF and show these to be in reasonable agreement with data. We speculate that an effective theory may underlie the model for the nuclear SF, which produces overall agreement with several observables. (author)

  16. Computer-aided Nonlinear Control System Design Using Describing Function Models

    Nassirharand, Amir

    2012-01-01

    A systematic computer-aided approach provides a versatile setting for the control engineer to overcome the complications of controller design for highly nonlinear systems. Computer-aided Nonlinear Control System Design provides such an approach based on the use of describing functions. The text deals with a large class of nonlinear systems without restrictions on the system order, the number of inputs and/or outputs or the number, type or arrangement of nonlinear terms. The strongly software-oriented methods detailed facilitate fulfillment of tight performance requirements and help the designer to think in purely nonlinear terms, avoiding the expedient of linearization which can impose substantial and unrealistic model limitations and drive up the cost of the final product. Design procedures are presented in a step-by-step algorithmic format each step being a functional unit with outputs that drive the other steps. This procedure may be easily implemented on a digital computer with example problems from mecha...
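
    The core computation behind such designs, the sinusoidal-input describing function of a static nonlinearity, can be sketched numerically. The relay example and its classic analytic result 4M/(πA) are textbook material, shown here as an illustration rather than the book's own code.

```python
import numpy as np

# Sketch: numerically compute the sinusoidal-input describing function N(A)
# of a static nonlinearity f, i.e. the gain of the fundamental harmonic for
# the input A*sin(theta). Illustrative only.

def describing_function(f, A, n=20000):
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    y = f(A * np.sin(theta))
    # in-phase fundamental Fourier coefficient b1, divided by the input amplitude
    b1 = (2.0 / n) * np.sum(y * np.sin(theta))
    return b1 / A

# ideal relay with output level M: the analytic describing function is 4M/(pi*A)
M = 1.0
relay = lambda x: M * np.sign(x)
```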

  17. Fast Computation of the Two-Point Correlation Function in the Age of Big Data

    Pellegrino, Andrew; Timlin, John

    2018-01-01

    We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate speed comparable to other clustering codes, and verify the code's accuracy against known analytic results.
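
    The basic ingredient of any such code, pair counting in separation bins feeding a correlation estimator, can be sketched as follows. The O(N^2) brute force and the simple "natural" estimator DD/RR - 1 are illustrative only, and the function names are hypothetical; production codes use tree or grid structures and estimators such as Landy-Szalay.

```python
import numpy as np

def pair_counts(pos, edges):
    """Histogram of pairwise separations for an (N, 3) array of positions."""
    diff = pos[:, None, :] - pos[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(pos), k=1)   # unique pairs only
    counts, _ = np.histogram(d[iu], bins=edges)
    return counts

def xi_natural(data, randoms, edges):
    """Natural estimator xi = DD/RR - 1 with per-catalog pair normalization."""
    dd = pair_counts(data, edges).astype(float)
    rr = pair_counts(randoms, edges).astype(float)
    nd, nr = len(data), len(randoms)
    dd /= nd * (nd - 1) / 2.0
    rr /= nr * (nr - 1) / 2.0
    return dd / rr - 1.0
```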

  18. A fast computation method for MUSIC spectrum function based on circular arrays

    Du, Zhengdong; Wei, Ping

    2015-02-01

    The heavy computational load of the multiple signal classification (MUSIC) spectrum function limits the responsiveness of direction-finding systems based on the MUSIC algorithm, especially in two-dimensional direction-of-arrival (DOA) estimation of azimuth and elevation with a large antenna array. This paper proposes a fast computation method for the MUSIC spectrum that is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array; then, exploiting the cyclic structure of the steering vector, the inner products in the spatial-spectrum calculation are realised by cyclic convolution. The computational cost of the MUSIC spectrum is markedly lower than that of the conventional method, making this a very practical approach to MUSIC spectrum computation for circular arrays.
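
    For reference, the baseline MUSIC pseudospectrum (the quantity the paper accelerates) can be sketched for a uniform linear array; the cyclic-convolution trick for circular arrays is not reproduced here, and the one-source noise-free setup is an illustrative assumption.

```python
import numpy as np

def steering(theta, M, d=0.5):
    """ULA steering vector for arrival angle theta, element spacing d in wavelengths."""
    m = np.arange(M)
    return np.exp(-2j * np.pi * d * m * np.sin(theta))

def music_spectrum(R, n_src, grid, M):
    """MUSIC pseudospectrum P(theta) = 1 / ||E_n^H a(theta)||^2."""
    w, v = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = v[:, : M - n_src]            # noise subspace: smallest M - n_src eigenvectors
    p = []
    for th in grid:
        a = steering(th, M)
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

# noise-free covariance for a single source at 20 degrees
M, theta0 = 8, np.deg2rad(20.0)
a0 = steering(theta0, M)
R = np.outer(a0, a0.conj()) + 0.01 * np.eye(M)
grid = np.deg2rad(np.linspace(-90, 90, 361))
spec = music_spectrum(R, n_src=1, grid=grid, M=M)
```

The grid scan above is exactly the inner-product loop whose cost the cyclic-convolution method reduces.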

  19. Non-parametric cell-based photometric proxies for galaxy morphology: methodology and application to the morphologically defined star formation-stellar mass relation of spiral galaxies in the local universe

    Grootes, M. W.; Tuffs, R. J.; Popescu, C. C.; Robotham, A. S. G.; Seibert, M.; Kelvin, L. S.

    2014-02-01

    We present a non-parametric cell-based method of selecting highly pure and largely complete samples of spiral galaxies using photometric and structural parameters as provided by standard photometric pipelines and simple shape fitting algorithms. The performance of the method is quantified for different parameter combinations, using purely human-based classifications as a benchmark. The discretization of the parameter space allows a markedly superior selection than commonly used proxies relying on a fixed curve or surface of separation. Moreover, we find structural parameters derived using passbands longwards of the g band and linked to older stellar populations, especially the stellar mass surface density μ* and the r-band effective radius re, to perform at least equally well as parameters more traditionally linked to the identification of spirals by means of their young stellar populations, e.g. UV/optical colours. In particular, the distinct bimodality in the parameter μ*, consistent with expectations of different evolutionary paths for spirals and ellipticals, represents an often overlooked yet powerful parameter in differentiating between spiral and non-spiral/elliptical galaxies. We use the cell-based method for the optical parameter set including re in combination with the Sérsic index n and the i-band magnitude to investigate the intrinsic specific star formation rate-stellar mass relation (ψ*-M*) for a morphologically defined volume-limited sample of local Universe spiral galaxies. The relation is found to be well described by ψ* ∝ M*^(-0.5) over the range of 10^9.5 ≤ M* ≤ 10^11 M⊙ with a mean interquartile range of 0.4 dex. This is somewhat steeper than previous determinations based on colour-selected samples of star-forming galaxies, primarily due to the inclusion in the sample of red quiescent discs.

  20. Computer-Based Techniques for Collection of Pulmonary Function Variables during Rest and Exercise.

    1991-03-01

    ...routinely included in experimental protocols involving hyper- and hypobaric excursions. Unfortunately, the full potential of those tests is often not...for a Pulmonary Function data acquisition system that has proven useful in the hyperbaric research laboratory. It illustrates how computers can

  1. A Computational Model Quantifies the Effect of Anatomical Variability on Velopharyngeal Function

    Inouye, Joshua M.; Perry, Jamie L.; Lin, Kant Y.; Blemker, Silvia S.

    2015-01-01

    Purpose: This study predicted the effects of velopharyngeal (VP) anatomical parameters on VP function to provide a greater understanding of speech mechanics and aid in the treatment of speech disorders. Method: We created a computational model of the VP mechanism using dimensions obtained from magnetic resonance imaging measurements of 10 healthy…

  2. Maple (Computer Algebra System) in Teaching Pre-Calculus: Example of Absolute Value Function

    Tuluk, Güler

    2014-01-01

    Modules in Computer Algebra Systems (CAS) make Mathematics interesting and easy to understand. The present study focused on the implementation of the algebraic, tabular (numerical), and graphical approaches used for the construction of the concept of absolute value function in teaching mathematical content knowledge along with Maple 9. The study…

  3. Effects of a Computer-Based Intervention Program on the Communicative Functions of Children with Autism

    Hetzroni, Orit E.; Tannous, Juman

    2004-01-01

    This study investigated the use of computer-based intervention for enhancing communication functions of children with autism. The software program was developed based on daily life activities in the areas of play, food, and hygiene. The following variables were investigated: delayed echolalia, immediate echolalia, irrelevant speech, relevant…

  4. On algorithmic equivalence of instruction sequences for computing bit string functions

    Bergstra, J.A.; Middelburg, C.A.

    2015-01-01

    Every partial function from bit strings of a given length to bit strings of a possibly different given length can be computed by a finite instruction sequence that contains only instructions to set and get the content of Boolean registers, forward jump instructions, and a termination instruction. We

  5. On algorithmic equivalence of instruction sequences for computing bit string functions

    Bergstra, J.A.; Middelburg, C.A.

    2014-01-01

    Every partial function from bit strings of a given length to bit strings of a possibly different given length can be computed by a finite instruction sequence that contains only instructions to set and get the content of Boolean registers, forward jump instructions, and a termination instruction. We

  6. Computer-mediated communication in adults with high-functioning autism spectrum disorders and controls

    van der Aa, Christine; Pollmann, Monique; Plaat, Aske; van der Gaag, Rutger Jan

    2016-01-01

    It has been suggested that people with Autism Spectrum Disorders (ASD) are attracted to computer-mediated communication (CMC). In this study, we compare CMC use in adults with high-functioning ASD (N = 113) and a control group (N = 72). We find that people with ASD spend more time on CMC than

  7. Conical : An extended module for computing a numerically satisfactory pair of solutions of the differential equation for conical functions

    T.M. Dunster (Mark); A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)

    2017-01-01

    Conical functions appear in a large number of applications in physics and engineering. In this paper we describe an extension of our module Conical (Gil et al., 2012) for the computation of conical functions. Specifically, the module now includes a routine for computing the function

  8. The role of dual-energy computed tomography in the assessment of pulmonary function

    Hwang, Hye Jeon [Department of Radiology, Hallym University College of Medicine, Hallym University Sacred Heart Hospital, 22, Gwanpyeong-ro 170beon-gil, Dongan-gu, Anyang-si, Gyeonggi-do 431-796 (Korea, Republic of); Hoffman, Eric A. [Departments of Radiology, Medicine, and Biomedical Engineering, University of Iowa, 200 Hawkins Dr, CC 701 GH, Iowa City, IA 52241 (United States); Lee, Chang Hyun; Goo, Jin Mo [Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 110-799 (Korea, Republic of); Levin, David L. [Department of Radiology, Mayo Clinic College of Medicine, 200 First Street, SW, Rochester, MN 55905 (United States); Kauczor, Hans-Ulrich [Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120 Heidelberg (Germany); Translational Lung Research Center Heidelberg (TLRC), Member of the German Center for Lung Research (DZL), Im Neuenheimer Feld 400, 69120 Heidelberg (Germany); Seo, Joon Beom, E-mail: seojb@amc.seoul.kr [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 388-1, Pungnap 2-dong, Songpa-ku, Seoul, 05505 (Korea, Republic of)

    2017-01-15

    Highlights: • The dual-energy CT technique enables the differentiation of contrast materials with a material decomposition algorithm. • Pulmonary functional information can be evaluated using dual-energy CT simultaneously with anatomic CT information. • Pulmonary functional information from dual-energy CT can improve the diagnosis and severity assessment of disease. - Abstract: The assessment of pulmonary function, including ventilation and perfusion status, is important in addition to the evaluation of structural changes of the lung parenchyma in various pulmonary diseases. The dual-energy computed tomography (DECT) technique can provide pulmonary functional information and high-resolution anatomic information simultaneously. The application of DECT to the evaluation of pulmonary function has been investigated in various pulmonary diseases, such as pulmonary embolism, asthma, and chronic obstructive pulmonary disease. In this review article, we present the principles and technical aspects of DECT, along with clinical applications for the assessment of pulmonary function in various lung diseases.

  9. On computation and use of Fourier coefficients for associated Legendre functions

    Gruber, Christian; Abrykosov, Oleh

    2016-06-01

    The computation of spherical harmonic series in very high resolution is known to be delicate in terms of performance and numerical stability. A major problem is to keep results inside the numerical range of the used data type during calculations, as under-/overflow arises. Extended data types are currently not desirable since the arithmetic complexity will grow exponentially with higher resolution levels. If the associated Legendre functions are computed in the spectral domain, then regular grid transformations can be applied to be highly efficient and convenient for derived quantities as well. In this article, we compare three recursive computations of the associated Legendre functions as trigonometric series, thereby ensuring a defined numerical range for each constituent wave number, separately. The results to a high degree and order show the numerical strength of the proposed method. First, the evaluation of Fourier coefficients of the associated Legendre functions has been done with respect to the floating-point precision requirements. Secondly, the numerical accuracy in the cases of standard double and long double precision arithmetic is demonstrated. Following Bessel's inequality, the obtained accuracy estimates of the Fourier coefficients are directly transferable to the associated Legendre functions themselves and to derived functionals as well. Therefore, they can provide essential insight to modern geodetic applications that depend on efficient spherical harmonic analysis and synthesis beyond 5 × 5 arcmin resolution.
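
    For context, the direct three-term recursion for the associated Legendre functions P_l^m(x), whose range limits at very high degree motivate the spectral-domain approach above, can be sketched as follows; this is the textbook recursion (with Condon-Shortley phase), not the article's Fourier-coefficient method.

```python
import numpy as np
from scipy.special import lpmv  # reference implementation for checking

def assoc_legendre(l, m, x):
    """Unnormalized P_l^m(x) by upward recursion in degree l."""
    # seed: P_m^m(x) = (-1)^m (2m-1)!! (1 - x^2)^(m/2)
    pmm = (-1.0) ** m * np.prod(np.arange(1, 2 * m, 2)) * (1.0 - x * x) ** (m / 2.0)
    if l == m:
        return pmm
    pml = x * (2 * m + 1) * pmm                       # P_{m+1}^m
    for ll in range(m + 2, l + 1):
        # (l - m) P_l^m = x (2l - 1) P_{l-1}^m - (l + m - 1) P_{l-2}^m
        pml, pmm = (x * (2 * ll - 1) * pml - (ll + m - 1) * pmm) / (ll - m), pml
    return pml
```

For high degrees and orders the seed term underflows while other terms overflow, which is exactly the numerical-range problem the article addresses.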

  10. COMPUTING

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned, it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhE-DEx topology. Since mid-February, a transfer volume of about 12 P...

  11. Renal parenchyma thickness: a rapid estimation of renal function on computed tomography

    Kaplon, Daniel M.; Lasser, Michael S.; Sigman, Mark; Haleblian, George E.; Pareek, Gyan

    2009-01-01

    Purpose: To define the relationship between renal parenchyma thickness (RPT) on computed tomography and renal function on nuclear renography in chronically obstructed renal units (ORUs), and to define a minimal thickness ratio associated with adequate function. Materials and Methods: Twenty-eight consecutive patients undergoing both nuclear renography and CT during a six-month period between 2004 and 2006 were included. All patients with a diagnosis of unilateral obstruction were included for analysis. RPT was measured in the following manner: the parenchyma thickness at three discrete levels of each kidney was measured using calipers on a CT workstation, and the mean of these three measurements was defined as RPT. The RPT ratio of the ORUs and non-obstructed renal units (NORUs) was calculated, and this was compared to the observed function on MAG3 Lasix renogram. Results: A total of 28 patients were evaluated. Mean parenchyma thickness was 1.82 cm and 2.25 cm in the ORUs and NORUs, respectively. The mean relative renal function of ORUs was 39%. Linear regression analysis comparing renogram function to the RPT ratio revealed a correlation coefficient of 0.48. A thickness ratio of 0.68 correlated with 20% renal function. Conclusion: RPT on computed tomography appears to be a powerful predictor of relative renal function in ORUs. Assessment of RPT is a useful and readily available clinical tool for surgical decision making (renal salvage therapy versus nephrectomy) in patients with ORUs. (author)

  12. Quantum computation and analysis of Wigner and Husimi functions: toward a quantum image treatment.

    Terraneo, M; Georgeot, B; Shepelyansky, D L

    2005-06-01

    We study the efficiency of quantum algorithms which aim at obtaining phase-space distribution functions of quantum systems. Wigner and Husimi functions are considered. Different quantum algorithms are envisioned to build these functions, and compared with the classical computation. Different procedures to extract information more efficiently from the final wave function of these algorithms are studied, including coarse-grained measurements, amplitude amplification, and measurement of the wavelet-transformed wave function. The algorithms are analyzed and numerically tested on a complex quantum system showing different behavior depending on parameters, namely the kicked rotator. The results for the Wigner function show in particular that the use of the quantum wavelet transform gives a polynomial gain over classical computation. For the Husimi distribution, the gain is much larger than for the Wigner function and is larger with the help of amplitude amplification and wavelet transforms. We discuss the generalization of these results to the simulation of other quantum systems. We also apply the same set of techniques to the analysis of real images. The results show that the use of the quantum wavelet transform dramatically lowers the number of measurements needed, but at the cost of a large loss of information.
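
    As a classical reference point for the quantum algorithms discussed above, the Husimi function Q(α) = |⟨α|ψ⟩|² / π of a state expanded in the Fock basis can be evaluated directly from the coherent-state overlap; this standard formula is an illustration, not the paper's quantum procedure.

```python
import numpy as np
from math import factorial

def husimi(psi, alpha):
    """Husimi Q function at phase-space point alpha for a state psi given
    as coefficients in the Fock basis |0>, |1>, ..., |N-1>."""
    n = np.arange(len(psi))
    # coherent-state overlap: <alpha|n> = exp(-|alpha|^2 / 2) conj(alpha)^n / sqrt(n!)
    coh = np.exp(-abs(alpha) ** 2 / 2) * np.conj(alpha) ** n / np.sqrt(
        [float(factorial(k)) for k in n]
    )
    return abs(np.dot(coh, psi)) ** 2 / np.pi
```

For the vacuum state this reproduces the Gaussian Q(α) = exp(−|α|²)/π, a convenient sanity check.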

  13. Three-dimensional computed tomographic volumetry precisely predicts the postoperative pulmonary function.

    Kobayashi, Keisuke; Saeki, Yusuke; Kitazawa, Shinsuke; Kobayashi, Naohiro; Kikuchi, Shinji; Goto, Yukinobu; Sakai, Mitsuaki; Sato, Yukio

    2017-11-01

    It is important to accurately predict the patient's postoperative pulmonary function. The aim of this study was to compare the accuracy of predictions of the postoperative residual pulmonary function obtained with three-dimensional computed tomographic (3D-CT) volumetry with that of predictions obtained with the conventional segment-counting method. Fifty-three patients scheduled to undergo lung cancer resection, pulmonary function tests, and computed tomography were enrolled in this study. The postoperative residual pulmonary function was predicted based on the segment-counting and 3D-CT volumetry methods. The predicted postoperative values were compared with the results of postoperative pulmonary function tests. Regarding the linear correlation coefficients between the predicted postoperative values and the measured values, those obtained using the 3D-CT volumetry method tended to be higher than those acquired using the segment-counting method. In addition, the variations between the predicted and measured values were smaller with the 3D-CT volumetry method than with the segment-counting method. These results were more obvious in COPD patients than in non-COPD patients. Our findings suggested that the 3D-CT volumetry was able to predict the residual pulmonary function more accurately than the segment-counting method, especially in patients with COPD. This method might lead to the selection of appropriate candidates for surgery among patients with a marginal pulmonary function.
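
    The arithmetic behind the two prediction methods compared above can be sketched in a few lines. The 19-segment convention for segment counting and the numbers in the test are common illustrative assumptions, not data from this study.

```python
# Predicted postoperative (ppo) lung function under the two methods:
# scale the preoperative value by the fraction of lung that remains.

def ppo_segment_counting(fev1, resected_segments, total_segments=19):
    """Segment counting: assume each of the (assumed 19) segments
    contributes equally to overall function."""
    return fev1 * (1.0 - resected_segments / total_segments)

def ppo_volumetry(fev1, resected_volume_ml, functional_volume_ml):
    """3D-CT volumetry: weight by measured functional lung volume instead
    of the equal-segment assumption."""
    return fev1 * (1.0 - resected_volume_ml / functional_volume_ml)
```

The volumetric version replaces the equal-contribution assumption with measured volumes, which is why it can track emphysematous (COPD) lungs more closely.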

  14. Computations of zeros of special functions and eigenvalues of differential equations by matrix method

    Miyazaki, Yoshinori

    2000-01-01

    This paper is strongly based on two powerful general theorems proved by Ikebe, et. al in 1993[15] and 1996[13], which will be referred to as Theorem A and Theorem B in this paper. They were recently published and justify the approximate computations of simple eigenvalues of infinite matrices of certain types by truncation, giving an extremely accurate error estimates. So far, they have applied to some important problems in engineering, such as computing the zeros of some special functions, an...

  15. Computer algebra in quantum field theory integration, summation and special functions

    Schneider, Carsten

    2013-01-01

    The book focuses on advanced computer algebra methods and special functions that have striking applications in the context of quantum field theory. It presents the state of the art and new methods for (infinite) multiple sums, multiple integrals, in particular Feynman integrals, difference and differential equations in the format of survey articles. The presented techniques emerge from interdisciplinary fields: mathematics, computer science and theoretical physics; the articles are written by mathematicians and physicists with the goal that both groups can learn from the other field, including

  16. FUNCTIONING FEATURES OF COMPUTER TECHNOLOGY WHILE FORMING PRIMARY SCHOOLCHILDREN’S COMMUNICATIVE COMPETENCE

    Olena Beskorsa

    2017-04-01

    Full Text Available The article reveals the problem of functioning features of computer technology while forming primary schoolchildren’s communicative competence whose relevance is proved by the increasing role of a foreign language as a means of communication and modernization of foreign language education. There is a great deal of publications devoted to the issue of foreign language learning at primary school by N. Biriukevych, O. Kolominova, O. Metolkina, O. Petrenko, V. Redko, S. Roman. Implementing of innovative technology as well as computer one is to intensify the language learning process and to improve young learners’ communicative skills. The aim of the article is to identify computer technology functioning features while forming primary schoolchildren communicative competence. In this study we follow the definition of the computer technology as an information technology whose implementation may be accompanied with a computer as one of the tools, excluding the use of audio and video equipment, projectors and other technical tools. Using computer technologies is realized due to a number of tools which are divided into two main groups: electronic learning materials; computer testing software. The analysis of current textbooks and learning and methodological complexes shows that teachers prefer authentic electronic materials to the national ones. The most available English learning materials are on the Internet and they are free. The author of the article discloses several on-line English learning tools and depict the opportunities to use them while forming primary schoolchildren’s communicative competence. Special attention is also paid to multimedia technology, its functioning features and multimedia lesson structure. Computer testing software provides tools for current and control assessing results of mastering language material, communicative skills, and self-assessing in an interactive way. For making tests for assessing English skill

  17. Computational Methods for Large Spatio-temporal Datasets and Functional Data Ranking

    Huang, Huang

    2017-07-16

    This thesis focuses on two topics, computational methods for large spatial datasets and functional data ranking. Both are tackling the challenges of big and high-dimensional data. The first topic is motivated by the prohibitive computational burden in fitting Gaussian process models to large and irregularly spaced spatial datasets. Various approximation methods have been introduced to reduce the computational cost, but many rely on unrealistic assumptions about the process and retaining statistical efficiency remains an issue. We propose a new scheme to approximate the maximum likelihood estimator and the kriging predictor when the exact computation is infeasible. The proposed method provides different types of hierarchical low-rank approximations that are both computationally and statistically efficient. We explore the improvement of the approximation theoretically and investigate the performance by simulations. For real applications, we analyze a soil moisture dataset with 2 million measurements using the hierarchical low-rank approximation and apply the proposed fast kriging to fill gaps for satellite images. The second topic is motivated by rank-based outlier detection methods for functional data. Compared to magnitude outliers, it is more challenging to detect shape outliers as they are often masked among samples. We develop a new notion of functional data depth by taking the integration of a univariate depth function. Having a form of the integrated depth, it shares many desirable features. Furthermore, the novel formulation leads to a useful decomposition for detecting both shape and magnitude outliers. Our simulation studies show the proposed outlier detection procedure outperforms competitors in various outlier models. We also illustrate our methodology using real datasets of curves, images, and video frames.
Finally, we introduce the functional data ranking technique to spatio-temporal statistics for visualizing and assessing covariance properties, such as
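
The integrated-depth construction described in this abstract can be sketched in a few lines: compute a univariate depth at each grid point and average over the grid. The simplicial-type univariate depth and the toy curves below are illustrative assumptions, not the thesis's exact formulation:

```python
def univariate_depth(v, sample):
    """Simplicial-type univariate depth: fraction of sample pairs whose
    interval [min, max] contains the value v."""
    n = len(sample)
    count = sum(1 for i in range(n) for j in range(i + 1, n)
                if min(sample[i], sample[j]) <= v <= max(sample[i], sample[j]))
    return count / (n * (n - 1) / 2)

def integrated_depth(curve_idx, curves):
    """Functional depth as the average (discretized integral) of univariate
    depths over the evaluation grid, in the spirit of integrated depth."""
    T = len(curves[0])
    return sum(univariate_depth(curves[curve_idx][t],
                                [c[t] for c in curves]) for t in range(T)) / T

# Three central curves plus one shifted outlier: the central curve gets the
# highest depth, the outlier the lowest
curves = [[0.0] * 5, [0.1] * 5, [-0.1] * 5, [3.0] * 5]
depths = [integrated_depth(i, curves) for i in range(len(curves))]
```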

  18. VAT: a computational framework to functionally annotate variants in personal genomes within a cloud-computing environment.

    Habegger, Lukas; Balasubramanian, Suganthi; Chen, David Z; Khurana, Ekta; Sboner, Andrea; Harmanci, Arif; Rozowsky, Joel; Clarke, Declan; Snyder, Michael; Gerstein, Mark

    2012-09-01

    The functional annotation of variants obtained through sequencing projects is generally assumed to be a simple intersection of genomic coordinates with genomic features. However, complexities arise for several reasons, including the differential effects of a variant on alternatively spliced transcripts, as well as the difficulty in assessing the impact of small insertions/deletions and large structural variants. Taking these factors into consideration, we developed the Variant Annotation Tool (VAT) to functionally annotate variants from multiple personal genomes at the transcript level as well as obtain summary statistics across genes and individuals. VAT also allows visualization of the effects of different variants, integrates allele frequencies and genotype data from the underlying individuals and facilitates comparative analysis between different groups of individuals. VAT can either be run through a command-line interface or as a web application. Finally, in order to enable on-demand access and to minimize unnecessary transfers of large data files, VAT can be run as a virtual machine in a cloud-computing environment. VAT is implemented in C and PHP. The VAT web service, Amazon Machine Image, source code and detailed documentation are available at vat.gersteinlab.org.

  19. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aimed at achieving a desired outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • Sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of most influential parts of the functional domain. • We investigate economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs.

  20. Storing files in a parallel computing system based on user-specified parser function

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron

    2014-10-21

    Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
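
A minimal sketch of the idea, assuming a hypothetical parser interface that returns a keep/discard decision plus extracted metadata (the function names and the "non-empty payload" semantic requirement are invented for illustration; the patented technique operates inside a parallel file system, not in application code):

```python
def parser(name, data):
    """User-specified parser: return (keep, metadata) for one file. Here the
    'semantic requirement' is simply a non-empty payload, and the extracted
    metadata records the file name and size."""
    return (len(data) > 0, {"name": name, "bytes": len(data)})

def store_files(files, parser):
    """Route each (name, data) pair through the parser prior to storage and
    keep only the accepted files together with their extracted metadata,
    which can later be used for searching."""
    stored = []
    for name, data in files:
        keep, meta = parser(name, data)
        if keep:
            stored.append({"data": data, "meta": meta})
    return stored

files = [("a.dat", b"payload"), ("empty.dat", b""), ("b.dat", b"xy")]
stored = store_files(files, parser)  # the empty file fails the requirement
```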

  1. Management of Liver Cancer Argon-helium Knife Therapy with Functional Computer Tomography Perfusion Imaging.

    Wang, Hongbo; Shu, Shengjie; Li, Jinping; Jiang, Huijie

    2016-02-01

    The objective of this study was to observe the change in blood perfusion of liver cancer following argon-helium knife treatment with functional computed tomography perfusion imaging. Twenty-seven patients with primary liver cancer treated with the argon-helium knife were included in this study. Plain computed tomography (CT) and computed tomography perfusion (CTP) imaging were conducted in all patients before and after treatment. Perfusion parameters including blood flow, blood volume, hepatic artery perfusion fraction, hepatic artery perfusion, and hepatic portal venous perfusion were used for evaluating the therapeutic effect. All parameters in liver cancer were significantly decreased after argon-helium knife treatment. Therefore, CTP imaging would play an important role in the management of liver cancer following argon-helium knife therapy. © The Author(s) 2014.

  2. Implementation of the Two-Point Angular Correlation Function on a High-Performance Reconfigurable Computer

    Volodymyr V. Kindratenko

    2009-01-01

    Full Text Available We present a parallel implementation of an algorithm for calculating the two-point angular correlation function as applied in the field of computational cosmology. The algorithm has been specifically developed for a reconfigurable computer. Our implementation utilizes a microprocessor and two reconfigurable processors on a dual-MAP SRC-6 system. The two reconfigurable processors are used as two application-specific co-processors. Two independent computational kernels are simultaneously executed on the reconfigurable processors while data pre-fetching from disk and initial data pre-processing are executed on the microprocessor. The overall end-to-end algorithm execution speedup achieved by this implementation is over 90× as compared to a sequential implementation of the algorithm executed on a single 2.8 GHz Intel Xeon microprocessor.
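
The computational kernel being accelerated here is an O(n²) pair count binned in angular separation; with a random comparison catalogue, a simple estimator is w(θ) = DD/RR − 1. A brute-force sketch (the natural estimator and tiny synthetic catalogues are illustrative assumptions; production codes typically use the Landy-Szalay estimator and far larger samples):

```python
import math
import random

def ang_sep(p, q):
    """Angular separation (radians) between two unit vectors on the sphere."""
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    return math.acos(d)

def pair_counts(pts, edges):
    """Histogram of pair separations into angular bins: the O(n^2) kernel
    that the reconfigurable co-processors execute in parallel."""
    counts = [0] * (len(edges) - 1)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            t = ang_sep(pts[i], pts[j])
            for k in range(len(counts)):
                if edges[k] <= t < edges[k + 1]:
                    counts[k] += 1
                    break
    return counts

def rand_dir(rng):
    """Uniformly random direction on the unit sphere."""
    z = rng.uniform(-1, 1)
    phi = rng.uniform(0, 2 * math.pi)
    r = math.sqrt(1 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

rng = random.Random(0)
data = [rand_dir(rng) for _ in range(200)]   # "observed" catalogue
rand = [rand_dir(rng) for _ in range(200)]   # random comparison catalogue
edges = [k * math.pi / 8 for k in range(9)]  # 8 angular bins on [0, pi)
dd = pair_counts(data, edges)
rr = pair_counts(rand, edges)
# Natural estimator w(theta) = DD/RR - 1 (approximately 0 for an unclustered sample)
w = [d / r - 1 if r else 0.0 for d, r in zip(dd, rr)]
```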

  3. Cognitive assessment of executive functions using brain computer interface and eye-tracking

    P. Cipresso

    2013-03-01

    Full Text Available New technologies to enable augmentative and alternative communication in Amyotrophic Lateral Sclerosis (ALS) have recently been used in several studies. However, a comprehensive battery for cognitive assessment has not been implemented yet. Brain-computer interfaces are innovative systems able to generate a control signal from brain responses, conveying messages directly to a computer. Another available technology for communication purposes is the eye-tracker system, which conveys messages from eye movements to a computer. In this study we explored the use of these two technologies for the cognitive assessment of executive functions in a healthy population and in an ALS patient, also verifying usability, pleasantness, fatigue, and emotional aspects related to the setting. Our preliminary results may have interesting implications for both clinical practice (the availability of an effective tool for neuropsychological evaluation of ALS patients) and ethical issues.

  4. Can Expanded Bacteriochlorins Act as Photosensitizers in Photodynamic Therapy? Good News from Density Functional Theory Computations

    Gloria Mazzone

    2016-02-01

    Full Text Available The main photophysical properties of a series of expanded bacteriochlorins, recently synthesized, have been investigated by means of DFT and TD-DFT methods. Absorption spectra computed with different exchange-correlation functionals, B3LYP, M06 and ωB97XD, have been compared with the experimental ones. In good agreement, all the considered systems show a maximum absorption wavelength that falls in the therapeutic window (600–800 nm). The obtained singlet-triplet energy gaps are large enough to ensure the production of cytotoxic singlet molecular oxygen. The computed spin-orbit matrix elements suggest a good probability of intersystem crossing between singlet and triplet excited states, since they turn out to be higher than those computed for 5,10,15,20-tetrakis(m-hydroxyphenyl)chlorin (Foscan®), already used in the photodynamic therapy (PDT) protocol. Because of the investigated properties, these expanded bacteriochlorins can be proposed as PDT agents.

  5. Stochastic methods for uncertainty treatment of functional variables in computer codes: application to safety studies

    Nanty, Simon

    2015-01-01

    This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications share several features. The first is that the computer code inputs are functional and scalar variables, the functional ones being dependent. The second is that the probability distribution of the functional variables is known only through a sample of their realizations. The third, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators in the two considered cases. First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology makes it possible both to model the dependency between variables and their link to another variable, called a covariate, which could be, for instance, the output of the considered code. Then, we have developed an adaptation of a visualization tool for functional data, which enables the uncertainties and features of dependent functional variables to be visualized simultaneously. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model or metamodel, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the metamodel. Finally, a new approximation approach for expensive codes with functional outputs has been

  6. Automated Quantitative Computed Tomography Versus Visual Computed Tomography Scoring in Idiopathic Pulmonary Fibrosis: Validation Against Pulmonary Function.

    Jacob, Joseph; Bartholmai, Brian J; Rajagopalan, Srinivasan; Kokosi, Maria; Nair, Arjun; Karwoski, Ronald; Raghunath, Sushravya M; Walsh, Simon L F; Wells, Athol U; Hansell, David M

    2016-09-01

    The aim of the study was to determine whether a novel computed tomography (CT) postprocessing software technique (CALIPER) is superior to visual CT scoring as judged by functional correlations in idiopathic pulmonary fibrosis (IPF). A total of 283 consecutive patients with IPF had CT parenchymal patterns evaluated quantitatively with CALIPER and by visual scoring. These 2 techniques were evaluated against: forced expiratory volume in 1 second (FEV1), forced vital capacity (FVC), diffusing capacity for carbon monoxide (DLco), carbon monoxide transfer coefficient (Kco), and a composite physiological index (CPI), with regard to extent of interstitial lung disease (ILD), extent of emphysema, and pulmonary vascular abnormalities. CALIPER-derived estimates of ILD extent demonstrated stronger univariate correlations than visual scores for most pulmonary function tests (PFTs): (FEV1: CALIPER R=0.29, visual R=0.18; FVC: CALIPER R=0.41, visual R=0.27; DLco: CALIPER R=0.31, visual R=0.35; CPI: CALIPER R=0.48, visual R=0.44). Correlations between CT measures of emphysema extent and PFTs were weak and did not differ significantly between CALIPER and visual scoring. Intriguingly, the pulmonary vessel volume provided similar correlations to total ILD extent scored by CALIPER for FVC, DLco, and CPI (FVC: R=0.45; DLco: R=0.34; CPI: R=0.53). CALIPER was superior to visual scoring as validated by functional correlations with PFTs. The pulmonary vessel volume, a novel CALIPER CT parameter with no visual scoring equivalent, has the potential to be a CT feature in the assessment of patients with IPF and requires further exploration.

  7. Encoding neural and synaptic functionalities in electron spin: A pathway to efficient neuromorphic computing

    Sengupta, Abhronil; Roy, Kaushik

    2017-12-01

    Present day computers expend orders of magnitude more computational resources to perform various cognitive and perception related tasks that humans routinely perform every day. This has recently resulted in a seismic shift in the field of computation where research efforts are being directed to develop a neurocomputer that attempts to mimic the human brain by nanoelectronic components and thereby harness its efficiency in recognition problems. Bridging the gap between neuroscience and nanoelectronics, this paper attempts to provide a review of the recent developments in the field of spintronic device based neuromorphic computing. Description of various spin-transfer torque mechanisms that can be potentially utilized for realizing device structures mimicking neural and synaptic functionalities is provided. A cross-layer perspective extending from the device to the circuit and system level is presented to envision the design of an All-Spin neuromorphic processor enabled with on-chip learning functionalities. Device-circuit-algorithm co-simulation framework calibrated to experimental results suggest that such All-Spin neuromorphic systems can potentially achieve almost two orders of magnitude energy improvement in comparison to state-of-the-art CMOS implementations.

  8. Recent progress in orbital-free density functional theory (recent advances in computational chemistry)

    Wesolowski, Tomasz A

    2013-01-01

    This is a comprehensive overview of state-of-the-art computational methods based on orbital-free formulation of density functional theory completed by the most recent developments concerning the exact properties, approximations, and interpretations of the relevant quantities in density functional theory. The book is a compilation of contributions stemming from a series of workshops which had been taking place since 2002. It not only chronicles many of the latest developments but also summarises some of the more significant ones. The chapters are mainly reviews of sub-domains but also include original research. Readership: Graduate students, academics and researchers in computational chemistry. Atomic & molecular physicists, theoretical physicists, theoretical chemists, physical chemists and chemical physicists.

  9. Structure, dynamics, and function of the monooxygenase P450 BM-3: insights from computer simulations studies

    Roccatano, Danilo

    2015-01-01

    The monooxygenase P450 BM-3 is a NADPH-dependent fatty acid hydroxylase enzyme isolated from soil bacterium Bacillus megaterium. As a pivotal member of cytochrome P450 superfamily, it has been intensely studied for the comprehension of structure–dynamics–function relationships in this class of enzymes. In addition, due to its peculiar properties, it is also a promising enzyme for biochemical and biomedical applications. However, despite the efforts, the full understanding of the enzyme structure and dynamics is not yet achieved. Computational studies, particularly molecular dynamics (MD) simulations, have importantly contributed to this endeavor by providing new insights at an atomic level regarding the correlations between structure, dynamics, and function of the protein. This topical review summarizes computational studies based on MD simulations of the cytochrome P450 BM-3 and gives an outlook on future directions. (topical review)

  10. Source apportionment of speciated PM2.5 and non-parametric regressions of PM2.5 and PMcoarse mass concentrations from Denver and Greeley, Colorado, and construction and evaluation of dichotomous filter samplers

    Piedrahita, Ricardo A.

    The Denver Aerosol Sources and Health study (DASH) was a long-term study of the relationship between the variability in fine particulate mass and chemical constituents (PM2.5, particulate matter less than 2.5 μm) and adverse health effects such as cardio-respiratory illnesses and mortality. Daily filter samples were chemically analyzed for multiple species. We present findings based on 2.8 years of DASH data, from 2003 to 2005. Multilinear Engine 2 (ME-2), a receptor-based source apportionment model, was applied to the data to estimate source contributions to PM2.5 mass concentrations. This study relied on two different ME-2 models: (1) a 2-way model that closely reflects PMF-2; and (2) an enhanced model with meteorological data that used additional temporal and meteorological factors. The Coarse Rural Urban Sources and Health study (CRUSH) is a long-term study of the relationship between the variability in coarse particulate mass (PMcoarse, particulate matter between 2.5 and 10 μm) and adverse health effects such as cardio-respiratory illnesses, pre-term births, and mortality. Hourly mass concentrations of PMcoarse and fine particulate matter (PM2.5) are measured using tapered element oscillating microbalances (TEOMs) with Filter Dynamics Measurement Systems (FDMS) at two rural and two urban sites. We present findings based on nine months of mass concentration data, including temporal trends and non-parametric regression (NPR) results, which were used to characterize the wind speed and wind direction relationships that might point to sources. As part of CRUSH, a 1-year coarse and fine mode particulate matter filter sampling network will allow us to characterize the chemical composition of the collected particulate matter and perform spatial comparisons. This work describes the construction and validation testing of four dichotomous filter samplers for this purpose. The use of dichotomous splitters with an approximate 2.5 μm cut point, coupled with a 10 μm cut
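
The non-parametric regression (NPR) used in such studies is typically a kernel-weighted (Nadaraya-Watson) mean of concentration as a smooth function of wind direction. A sketch with a Gaussian kernel on the circular angle difference (the bandwidth and the synthetic east-side source are illustrative assumptions, not CRUSH data):

```python
import math

def npr_wind(direction_deg, obs_dirs, obs_conc, bw=30.0):
    """Nadaraya-Watson estimate of mean concentration as a smooth function of
    wind direction, using a Gaussian kernel on the circular angle difference."""
    num = den = 0.0
    for d, c in zip(obs_dirs, obs_conc):
        delta = min(abs(direction_deg - d), 360 - abs(direction_deg - d))
        w = math.exp(-0.5 * (delta / bw) ** 2)
        num += w * c
        den += w
    return num / den

# Synthetic example: a source to the east (90 deg) elevates concentrations
dirs = list(range(0, 360, 10))
conc = [10 + 20 * math.exp(-0.5 * (min(abs(d - 90), 360 - abs(d - 90)) / 40) ** 2)
        for d in dirs]
est_east = npr_wind(90, dirs, conc)   # smoothed peak toward the source
est_west = npr_wind(270, dirs, conc)  # background away from the source
```

A peak in the smoothed curve at a particular direction is the kind of signature that points toward an upwind source.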

  11. Evaluation of the optimum region for mammographic system using computer simulation to study modulation transfer functions

    Oliveira, Isaura N. Sombra; Schiable, Homero; Porcel, Naider T.; Frere, Annie F.; Marques, Paulo M.A.

    1996-01-01

    The 'optimum region' of the radiation field for mammographic systems is investigated. Such a region was defined in previous works as the field range where the system has its best performance and sharpest images. This study is based on a correlation of two methods for evaluating radiologic imaging systems, both using computer simulation to determine modulation transfer functions (MTFs) due to the X-ray tube focal spot at several field orientations and locations

  12. First results with twisted mass fermions towards the computation of parton distribution functions on the lattice

    Alexandrou, Constantia; Cyprus Institute, Nicosia; Deutsches Elektronen-Synchrotron; Cichy, Krzysztof; Poznan Univ.; Drach, Vincent; Garcia-Ramos, Elena; Humboldt-Universitaet, Berlin; Hadjiyiannakou, Kyriakos; Jansen, Karl; Steffens, Fernanda; Wiese, Christian

    2014-11-01

    We report on our exploratory study for the evaluation of the parton distribution functions from lattice QCD, based on a new method proposed in Ref. arXiv:1305.1539. Using the example of the nucleon, we compare two different methods to compute the matrix elements needed, and investigate the application of gauge link smearing. We also present first results from a large production ensemble and discuss the future challenges related to this method.

  13. Using computer graphics to preserve function in resection of malignant melanoma of the foot.

    Kaufman, M; Vantuyl, A; Japour, C; Ghosh, B C

    2001-08-01

    The increasing incidence of malignant melanoma challenges physicians to find innovative ways to preserve function and appearance in affected areas that require partial resection. We carefully planned the resection of a malignant lesion between the third and fourth toes of a 77-year-old man with the aid of computer technology. The subsequent excision of the third, fourth, and fifth digits was executed such that the new metatarsal arc formed would approximate the dimensions of the optimal hyperbola, thereby minimizing gait disturbance.

  14. Gaussian Radial Basis Function for Efficient Computation of Forest Indirect Illumination

    Abbas, Fayçal; Babahenini, Mohamed Chaouki

    2018-06-01

    Global illumination of natural scenes such as forests in real time is one of the most complex problems to solve, because of the multiple inter-reflections between light and the materials of the objects composing the scene. The major problem that arises is visibility computation: visibility must be evaluated over the whole set of leaves visible from the center of a given leaf, and given the enormous number of leaves present in a tree, this computation, performed for each leaf of the tree, severely reduces performance. We describe a new approach that approximates visibility queries and proceeds in two steps. The first step is to generate a point cloud representing the foliage. We assume that the point cloud is composed of two non-linearly separable classes (visible, not visible). The second step is to classify the point cloud by applying a Gaussian radial basis function, which measures similarity in terms of distance between each leaf and a landmark leaf. This approximates the visibility queries needed to extract the leaves used to compute the amount of indirect illumination exchanged between neighboring leaves. Our approach efficiently treats the light exchanges in a forest scene, is fast, and produces images of good visual quality, all while taking advantage of the immense computational power of the GPU.
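
The classification step can be sketched as follows: each leaf (point) is compared with a landmark leaf through a Gaussian RBF, and leaves whose similarity falls below a threshold are dropped from the visibility set. The 3D coordinates, the kernel width `gamma`, and the threshold are illustrative assumptions:

```python
import math

def rbf(p, q, gamma=0.5):
    """Gaussian radial basis function: similarity that decays with the
    squared Euclidean distance between two points."""
    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
    return math.exp(-gamma * d2)

def classify_visible(points, landmark, threshold=0.1):
    """Approximate the visibility query: a leaf is kept as 'visible' when its
    RBF similarity to the landmark leaf exceeds the threshold."""
    return [p for p in points if rbf(p, landmark) >= threshold]

landmark = (0.0, 0.0, 0.0)
cloud = [(0.2, 0.1, 0.0), (0.5, 0.5, 0.5), (3.0, 3.0, 3.0)]
visible = classify_visible(cloud, landmark)  # the distant leaf is discarded
```

On the GPU this per-point kernel evaluation is embarrassingly parallel, which is what makes the approximation cheap in practice.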

  15. A new Fortran 90 program to compute regular and irregular associated Legendre functions (new version announcement)

    Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus

    2018-04-01

    This is a revised and updated version of a modern Fortran 90 code to compute the regular P_l^m(x) and irregular Q_l^m(x) associated Legendre functions for all x ∈ (-1, +1) (on the cut) and |x| > 1 and integer degree (l) and order (m). The necessity to revise the code comes as a consequence of some comments of Prof. James Bremer of the UC Davis Mathematics Department, who discovered that there were errors in the code for large integer degree and order for the normalized regular Legendre functions on the cut.
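
For reference, the regular function on the cut can be computed with the standard recurrences (seed P_m^m, step to P_{m+1}^m, then recurse upward in degree). This is only a small illustrative sketch with the Condon-Shortley phase, not the Fortran 90 code being announced, and it makes no claim about the large-degree regime the revision addresses:

```python
import math

def assoc_legendre(l, m, x):
    """Regular associated Legendre function P_l^m(x) on the cut (-1, 1),
    computed by upward recurrence in the degree l (Condon-Shortley phase)."""
    # Seed: P_m^m(x) = (-1)^m (2m-1)!! (1 - x^2)^(m/2)
    pmm = 1.0
    somx2 = math.sqrt((1 - x) * (1 + x))
    fact = 1.0
    for _ in range(m):
        pmm *= -fact * somx2
        fact += 2.0
    if l == m:
        return pmm
    # P_{m+1}^m(x) = x (2m + 1) P_m^m(x)
    pmmp1 = x * (2 * m + 1) * pmm
    if l == m + 1:
        return pmmp1
    # (l - m) P_l^m = x (2l - 1) P_{l-1}^m - (l + m - 1) P_{l-2}^m
    for ll in range(m + 2, l + 1):
        pll = (x * (2 * ll - 1) * pmmp1 - (ll + m - 1) * pmm) / (ll - m)
        pmm, pmmp1 = pmmp1, pll
    return pmmp1

# Checks against closed forms: P_2^0(x) = (3x^2 - 1)/2, P_1^1(x) = -sqrt(1 - x^2)
p20 = assoc_legendre(2, 0, 0.3)
p11 = assoc_legendre(1, 1, 0.3)
```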

  16. Computation of Value Functions in Nonlinear Differential Games with State Constraints

    Botkin, Nikolai

    2013-01-01

    Finite-difference schemes for the computation of value functions of nonlinear differential games with non-terminal payoff functional and state constraints are proposed. The solution method is based on the fact that the value function is a generalized viscosity solution of the corresponding Hamilton-Jacobi-Bellman-Isaacs equation. Such a viscosity solution is defined as a function satisfying differential inequalities introduced by M. G. Crandall and P. L. Lions. The difference with the classical case is that these inequalities hold on a subset of the state space that is not known in advance. The convergence rate of the numerical schemes is given. Numerical solution to a non-trivial three-dimensional example is presented. © 2013 IFIP International Federation for Information Processing.

  17. A review of Green's function methods in computational fluid mechanics: Background, recent developments and future directions

    Dorning, J.

    1981-01-01

    The research and development over the past eight years on local Green's function methods for the high-accuracy, high-efficiency numerical solution of nuclear engineering problems is reviewed. The basic concepts and key ideas are presented by starting with an expository review of the original fully two-dimensional local Green's function methods developed for neutron diffusion and heat conduction, and continuing through the progressively more complicated and more efficient nodal Green's function methods for neutron diffusion, heat conduction and neutron transport to establish the background for the recent development of Green's function methods in computational fluid mechanics. Some of the impressive numerical results obtained via these classes of methods for nuclear engineering problems are briefly summarized. Finally, speculations are proffered on future directions in which the development of these types of methods in fluid mechanics and other areas might lead. (orig.) [de]

  18. Modeling of edge effect in subaperture tool influence functions of computer controlled optical surfacing.

    Wan, Songlin; Zhang, Xiangchao; He, Xiaoying; Xu, Min

    2016-12-20

    Computer controlled optical surfacing requires an accurate tool influence function (TIF) for reliable path planning and deterministic fabrication. Near the edge of the workpieces, the TIF has a nonlinear removal behavior, which will cause a severe edge-roll phenomenon. In the present paper, a new edge pressure model is developed based on the finite element analysis results. The model is represented as the product of a basic pressure function and a correcting function. The basic pressure distribution is calculated according to the surface shape of the polishing pad, and the correcting function is used to compensate the errors caused by the edge effect. Practical experimental results demonstrate that the new model can accurately predict the edge TIFs with different overhang ratios. The relative error of the new edge model can be reduced to 15%.
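
The paper's model structure, a basic pressure function multiplied by a correcting function near the edge, can be sketched schematically. The specific profile shapes and the amplification constant below are invented for illustration and are not the paper's fitted model:

```python
def basic_pressure(r, pad_radius=1.0, p0=1.0):
    """Basic pressure distribution from the pad's surface shape; modeled here
    as a simple profile vanishing at the pad edge (an assumed form)."""
    return p0 * max(0.0, 1 - (r / pad_radius) ** 2)

def edge_correction(overhang_ratio, k=2.0):
    """Correcting function compensating the nonlinear edge effect; modeled
    here as pressure amplification growing with the overhang ratio."""
    return 1 + k * overhang_ratio

def edge_tif(r, overhang_ratio):
    """Edge tool influence function as the product of the two factors,
    mirroring the basic-pressure x correcting-function structure."""
    return basic_pressure(r) * edge_correction(overhang_ratio)

interior = edge_tif(0.5, 0.0)   # no overhang: plain basic pressure
near_edge = edge_tif(0.5, 0.3)  # 30% overhang: amplified removal (edge-roll)
```

The separation lets the correcting function be calibrated once per overhang ratio while the basic pressure follows the pad geometry.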

  19. Wigner functions and density matrices in curved spaces as computational tools

    Habib, S.; Kandrup, H.E.

    1989-01-01

    This paper contrasts two alternative approaches to statistical quantum field theory in curved spacetimes, namely (1) a canonical Hamiltonian approach, in which the basic object is a density matrix ρ characterizing the noncovariant, but globally defined, modes of the field; and (2) a Wigner function approach, in which the basic object is a Wigner function f defined quasilocally from the Hadamard, or correlation, function G_1(x_1, x_2). The key objective is to isolate the conceptual biases underlying each of these approaches and then to assess their utility and limitations in effecting concrete calculations. The following questions are therefore addressed and largely answered. What sorts of spacetimes (e.g., de Sitter or Friedmann-Robertson-Walker) are comparatively easy to consider? What sorts of objects (e.g., average fields or renormalized stress energies) are easy to compute approximately? What, if anything, can be computed exactly? What approximations are intrinsic to each approach or convenient as computational tools? What sorts of "field entropies" are natural to define? copyright 1989 Academic Press, Inc

  20. Combining computer modelling and cardiac imaging to understand right ventricular pump function.

    Walmsley, John; van Everdingen, Wouter; Cramer, Maarten J; Prinzen, Frits W; Delhaas, Tammo; Lumens, Joost

    2017-10-01

    Right ventricular (RV) dysfunction is a strong predictor of outcome in heart failure and is a key determinant of exercise capacity. Despite these crucial findings, the RV remains understudied in the clinical, experimental, and computer modelling literature. This review outlines how recent advances in using computer modelling and cardiac imaging synergistically help to understand RV function in health and disease. We begin by highlighting the complexity of interactions that make modelling the RV both challenging and necessary, and then summarize the multiscale modelling approaches used to date to simulate RV pump function in the context of these interactions. We go on to demonstrate how these modelling approaches in combination with cardiac imaging have improved understanding of RV pump function in pulmonary arterial hypertension, arrhythmogenic right ventricular cardiomyopathy, dyssynchronous heart failure and cardiac resynchronization therapy, hypoplastic left heart syndrome, and repaired tetralogy of Fallot. We conclude with a perspective on key issues to be addressed by computational models of the RV in the near future. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017.

  1. COMPUTING

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  2. Computational Fluid Dynamics Simulation of Combustion Instability in Solid Rocket Motor : Implementation of Pressure Coupled Response Function

    S. Saha; D. Chakraborty

    2016-01-01

    Combustion instability in solid propellant rocket motor is numerically simulated by implementing propellant response function with quasi steady homogeneous one dimensional formulation. The convolution integral of propellant response with pressure history is implemented through a user defined function in commercial computational fluid dynamics software. The methodology is validated against literature reported motor test and other simulation results. Computed amplitude of pressure fluctuations ...

  3. Complexity on dwarf galaxy scales: A bimodal distribution function in Sculptor

    Breddels, Maarten A.; Helmi, Amina

    2014-01-01

    In our previous work, we presented Schwarzschild models of the Sculptor dwarf spheroidal galaxy demonstrating that this system could be embedded in dark matter halos that are either cusped or cored. Here, we show that the non-parametric distribution function recovered through Schwarzschild's method

  4. COMPUTING

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  5. COMPUTING

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  6. Cranial computed tomography associated with development of functional dependence in a community-based elderly population

    Tsukishima, Eri; Shido, Koichi

    2002-01-01

    The purpose of this study was to investigate whether changes at computed tomography (CT) imaging in the ageing brain are associated with future risk of functional dependence. One hundred sixty residents aged 69 years and older at the time of cranial CT were living independently in a rural community in Hokkaido, Japan. Cranial CT was performed between 1991 and 1993 and graded for ventricular enlargement, sulcal enlargement, white matter change, and small infarction. Functional status was reassessed in 1998 in each participant. Multiple logistic regression analysis was performed to estimate the association of CT changes in the ageing brain with development of functional dependence over six years. Functional dependence was found in 19 residents at the second survey. After adjusting for age, sex, medical conditions, and cognitive functioning, small infarction and ventricular enlargement were significantly associated with development of functional dependence (adjusted odds ratios = 9.27 and 4.62, respectively). After controlling for age, these age-related changes on cranial CT had a significant association with the development of functional dependence. (author)

  7. Neuromorphological and wiring pattern alterations effects on brain function: a mixed experimental and computational approach.

    Linus Manubens-Gil

    2015-04-01

    In addition, the study of fixed intact brains (by means of the state-of-the-art CLARITY technique) brings us closer to biologically and medically relevant situations, allowing not only to confirm whether the functional links in neuronal cultures are also present in vivo, but also enabling the introduction of functional information (like behavioral studies and functional imaging) and another layer of structural alterations such as brain region morphology, neuronal density, and long-range connectivity. Taking together the experimental information from these systems, we want to feed self-developed computational models that allow us to understand what the fundamental characteristics of the observed connectivity patterns are and the impact of each of the alterations on neuronal network function. These models will also provide a framework able to account for the emergent properties that bridge the gap between spontaneous electrical activity arousal/transmission and higher-order information processing and memory storage capacities in the brain. As an additional part of the project, we are now working on the application of the clearing, labeling and imaging protocols to human biopsy samples. Our aim is to obtain neuronal architecture and connectivity information from focal cortical dysplasia microcircuits using samples from intractable temporal lobe epilepsy patients who undergo deep-brain electrode recording diagnosis and posterior surgical extraction of the tissue. Our computational models can allow us to discern the contributions of the observed abnormalities to neuronal hyperactivity and epileptic seizure generation.

  8. Computer-Based Cognitive Training for Executive Functions after Stroke: A Systematic Review

    van de Ven, Renate M.; Murre, Jaap M. J.; Veltman, Dick J.; Schmand, Ben A.

    2016-01-01

    Background: Stroke commonly results in cognitive impairments in working memory, attention, and executive function, which may be restored with appropriate training programs. Our aim was to systematically review the evidence for computer-based cognitive training of executive dysfunctions. Methods: Studies were included if they concerned adults who had suffered stroke or other types of acquired brain injury, if the intervention was computer training of executive functions, and if the outcome was related to executive functioning. We searched in MEDLINE, PsycINFO, Web of Science, and The Cochrane Library. Study quality was evaluated based on the CONSORT Statement. Treatment effect was evaluated based on differences compared to pre-treatment and/or to a control group. Results: Twenty studies were included. Two were randomized controlled trials that used an active control group. The other studies included multiple baselines, a passive control group, or were uncontrolled. Improvements were observed in tasks similar to the training (near transfer) and in tasks dissimilar to the training (far transfer). However, these effects were not larger in trained than in active control groups. Two studies evaluated neural effects and found changes in both functional and structural connectivity. Most studies suffered from methodological limitations (e.g., lack of an active control group and no adjustment for multiple testing) hampering differentiation of training effects from spontaneous recovery, retest effects, and placebo effects. Conclusions: The positive findings of most studies, including neural changes, warrant continuation of research in this field, but only if its methodological limitations are addressed. PMID:27148007

  9. A new algorithm to compute conjectured supply function equilibrium in electricity markets

    Diaz, Cristian A.; Villar, Jose; Campos, Fco Alberto; Rodriguez, M. Angel

    2011-01-01

    Several types of market equilibria approaches, such as Cournot, Conjectural Variation (CVE), Supply Function (SFE) or Conjectured Supply Function (CSFE) have been used to model electricity markets for the medium and long term. Among them, CSFE has been proposed as a generalization of the classic Cournot. It computes the equilibrium considering the reaction of the competitors against changes in their strategy, combining several characteristics of both CVE and SFE. Unlike linear SFE approaches, strategies are linearized only at the equilibrium point, using their first-order Taylor approximation. But to solve CSFE, the slope or the intercept of the linear approximations must be given, which has been proved to be very restrictive. This paper proposes a new algorithm to compute CSFE. Unlike previous approaches, the main contribution is that the competitors' strategies for each generator are initially unknown (both slope and intercept) and endogenously computed by this new iterative algorithm. To show the applicability of the proposed approach, it has been applied to several case examples where its qualitative behavior has been analyzed in detail. (author)
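
    The paper's algorithm endogenously iterates both the slopes and the intercepts of the conjectured supply functions; as a much simpler illustration of the underlying fixed-point idea, the sketch below solves only the classical Cournot special case (conjectured slopes of zero) with linear inverse demand p = a − bQ, iterating simultaneous best responses until they stop changing. All parameter values are illustrative, not taken from the paper's case examples.

```python
def cournot_fixed_point(a=100.0, b=1.0, costs=(10.0, 10.0), tol=1e-9):
    """Iterate simultaneous best responses
    q_i = (a - c_i - b * sum_{j != i} q_j) / (2b)
    until a fixed point (the Cournot equilibrium) is reached."""
    q = [0.0 for _ in costs]
    for _ in range(10_000):
        new_q = [
            max(0.0, (a - c - b * (sum(q) - qi)) / (2 * b))
            for qi, c in zip(q, costs)
        ]
        if max(abs(x - y) for x, y in zip(new_q, q)) < tol:
            return new_q
        q = new_q
    return q

q_star = cournot_fixed_point()
price = 100.0 - sum(q_star)
print([round(x, 3) for x in q_star], round(price, 3))  # [30.0, 30.0] 40.0
```

    Replacing the zero conjectured slopes with linearized supply-function conjectures, updated each iteration, is the step that turns this toy loop into a CSFE computation.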

  10. COMPUTING

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  11. Multiple exciton generation in chiral carbon nanotubes: Density functional theory based computation

    Kryjevski, Andrei; Mihaylov, Deyan; Kilina, Svetlana; Kilin, Dmitri

    2017-10-01

    We use a Boltzmann transport equation (BE) to study time evolution of a photo-excited state in a nanoparticle including phonon-mediated exciton relaxation and the multiple exciton generation (MEG) processes, such as exciton-to-biexciton multiplication and biexciton-to-exciton recombination. BE collision integrals are computed using Kadanoff-Baym-Keldysh many-body perturbation theory based on density functional theory simulations, including exciton effects. We compute internal quantum efficiency (QE), which is the number of excitons generated from an absorbed photon in the course of the relaxation. We apply this approach to chiral single-wall carbon nanotubes (SWCNTs), such as (6,2) and (6,5). We predict efficient MEG in the (6,2) and (6,5) SWCNTs within the solar spectrum range starting at the 2Eg energy threshold and with QE reaching ˜1.6 at about 3Eg, where Eg is the electronic gap.

  12. REDUCED DATA FOR CURVE MODELING – APPLICATIONS IN GRAPHICS, COMPUTER VISION AND PHYSICS

    Małgorzata Janik

    2013-06-01

    In this paper we consider the problem of modeling curves in Rn via interpolation without a priori specified interpolation knots. We discuss two approaches to estimate the missing knots for non-parametric data (i.e. a collection of points). The first approach (uniform evaluation) is based on a blind guess in which knots are chosen uniformly. The second approach (cumulative chord parameterization) incorporates the geometry of the distribution of data points. More precisely, the difference between consecutive knots is equal to the Euclidean distance between data points qi+1 and qi. The second method partially compensates for the loss of the information carried by the reduced data. We also present the application of the above schemes for fitting non-parametric data in computer graphics (light-source motion rendering), in computer vision (image segmentation) and in physics (high velocity particles trajectory modeling). Though experiments are conducted for points in R2 and R3, the entire method is equally applicable in Rn.
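
    The cumulative chord parameterization described above can be sketched in a few lines (a generic illustration of the scheme, not the authors' code): the first knot is zero and each increment equals the Euclidean distance between consecutive data points.

```python
import math

def cumulative_chord_knots(points):
    """Estimate interpolation knots for non-parametric data by
    cumulative chord length: t_0 = 0 and
    t_{i+1} = t_i + ||q_{i+1} - q_i|| (Euclidean distance)."""
    knots = [0.0]
    for q, q_next in zip(points, points[1:]):
        knots.append(knots[-1] + math.dist(q, q_next))
    return knots

# Example: three points in R^2
pts = [(0.0, 0.0), (3.0, 4.0), (3.0, 6.0)]
print(cumulative_chord_knots(pts))  # [0.0, 5.0, 7.0]
```

    The resulting knots can then be fed to any standard interpolation scheme in place of uniformly guessed ones.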

  13. Computational medical imaging and hemodynamics framework for functional analysis and assessment of cardiovascular structures.

    Wong, Kelvin K L; Wang, Defeng; Ko, Jacky K L; Mazumdar, Jagannath; Le, Thu-Thao; Ghista, Dhanjoo

    2017-03-21

    Cardiac dysfunction constitutes common cardiovascular health issues in the society, and has been an investigation topic of strong focus by researchers in the medical imaging community. Diagnostic modalities based on echocardiography, magnetic resonance imaging, chest radiography and computed tomography are common techniques that provide cardiovascular structural information to diagnose heart defects. However, functional information of cardiovascular flow, which can in fact be used to support the diagnosis of many cardiovascular diseases with a myriad of hemodynamics performance indicators, remains unexplored to its full potential. Some of these indicators constitute important cardiac functional parameters affecting the cardiovascular abnormalities. With the advancement of computer technology that facilitates high speed computational fluid dynamics, the realization of a support diagnostic platform of hemodynamics quantification and analysis can be achieved. This article reviews the state-of-the-art medical imaging and high fidelity multi-physics computational analyses that together enable reconstruction of cardiovascular structures and hemodynamic flow patterns within them, such as of the left ventricle (LV) and carotid bifurcations. The combined medical imaging and hemodynamic analysis enables us to study the mechanisms of cardiovascular disease-causing dysfunctions, such as how (1) cardiomyopathy causes left ventricular remodeling and loss of contractility leading to heart failure, and (2) modeling of LV construction and simulation of intra-LV hemodynamics can enable us to determine the optimum procedure of surgical ventriculation to restore its contractility and health. This combined medical imaging and hemodynamics framework can potentially extend medical knowledge of cardiovascular defects and associated hemodynamic behavior and their surgical restoration, by means of an integrated medical image diagnostics and hemodynamic performance analysis framework.

  14. Computational prediction of drug-drug interactions based on drugs functional similarities.

    Ferdousi, Reza; Safdari, Reza; Omidi, Yadollah

    2017-06-01

    Therapeutic activities of drugs are often influenced by co-administration of drugs that may cause inevitable drug-drug interactions (DDIs) and inadvertent side effects. Prediction and identification of DDIs are extremely vital for the patient safety and success of treatment modalities. A number of computational methods have been employed for the prediction of DDIs based on drugs structures and/or functions. Here, we report on a computational method for DDIs prediction based on functional similarity of drugs. The model was set based on key biological elements including carriers, transporters, enzymes and targets (CTET). The model was applied for 2189 approved drugs. For each drug, all the associated CTETs were collected, and the corresponding binary vectors were constructed to determine the DDIs. Various similarity measures were conducted to detect DDIs. Of the examined similarity methods, the inner product-based similarity measures (IPSMs) were found to provide improved prediction values. Altogether, 2,394,766 potential drug pairs interactions were studied. The model was able to predict over 250,000 unknown potential DDIs. Upon our findings, we propose the current method as a robust, yet simple and fast, universal in silico approach for identification of DDIs. We envision that this proposed method can be used as a practical technique for the detection of possible DDIs based on the functional similarities of drugs. Copyright © 2017. Published by Elsevier Inc.
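
    The binary CTET vectors and inner-product similarity described above can be sketched as follows. This is a schematic illustration only; the element universe and the per-drug annotations below are hypothetical, not taken from the paper's data.

```python
def ctet_vector(drug_elements, universe):
    """Binary vector over the CTET universe (carriers, transporters,
    enzymes, targets): 1 if the drug is associated with the element."""
    return [1 if e in drug_elements else 0 for e in universe]

def inner_product_similarity(u, v):
    """Inner product of two binary vectors = number of shared elements."""
    return sum(a * b for a, b in zip(u, v))

# Hypothetical CTET annotations for two drugs
universe = ["CYP3A4", "CYP2D6", "P-gp", "OATP1B1", "ALB"]
drug_a = {"CYP3A4", "P-gp", "ALB"}
drug_b = {"CYP3A4", "P-gp", "OATP1B1"}

u, v = ctet_vector(drug_a, universe), ctet_vector(drug_b, universe)
score = inner_product_similarity(u, v)
print(score)  # 2 shared CTET elements
```

    In a full pipeline, drug pairs whose similarity score exceeds a calibrated threshold would be flagged as potential DDIs.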

  15. Computing single step operators of logic programming in radial basis function neural networks

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
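
    For a propositional normal logic program, the single step (immediate consequence) operator T_p and its fixed point can be sketched directly; this is a generic illustration of the operator itself, not of the paper's radial-basis-function encoding. Clauses are assumed to be triples (head, positive body atoms, negated body atoms).

```python
def single_step(program, interpretation):
    """One application of T_p: an atom becomes true iff some clause
    head <- positives, not negatives fires under the current valuation."""
    return {
        head
        for head, pos, neg in program
        if set(pos) <= interpretation and not set(neg) & interpretation
    }

def fixed_point(program, start=frozenset(), max_iter=100):
    """Iterate T_p from a starting valuation until it stops changing."""
    current = set(start)
    for _ in range(max_iter):
        nxt = single_step(program, current)
        if nxt == current:
            return current
        current = nxt
    return current

# p.  q <- p.  r <- q, not s.
prog = [("p", [], []), ("q", ["p"], []), ("r", ["q"], ["s"])]
print(sorted(fixed_point(prog)))  # ['p', 'q', 'r']
```

    The recurrent network in the paper plays the role of this iteration, converging to the same steady state.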

  17. FUNCTIONALITY OF STUDENTS WITH PHYSICAL DEFICIENCY IN WRITING AND COMPUTER USE ACTIVITIES

    Fernanda Matrigani Mercado Gutierres de Queiroz

    2017-08-01

    Educational inclusion focuses on the learning of all students who confront barriers to effective participation in school life. From the perspective of inclusive education, students with disabilities should preferably be served in regular education, while special education offers specialized educational attendance to complement their educational needs. In this context, the objective of the research is defined as: describe the functionality of students with physical disabilities, in the Multifunctional Resource Rooms, for activities of writing and computer use, according to the perception of the teachers. The participants of this analysis were teachers of the Specialized Educational Service who serve students with disabilities. For data collection, the School Function Assessment instrument was used. The data were organized into a single document and presented in two categories: (1) written work; (2) use of the computer and the equipment. The conclusion was that students with physical disabilities, especially those with impaired upper-limb functionality, can find it difficult to write using conventional materials, so they need assistive technology to develop their writing skills. It is therefore important to improve the analysis of the student's profile, so as to choose the most appropriate resource, and to improve the materials of the Multifunctional Resource Rooms to meet the diversity of all students with physical disabilities, since the furniture, didactic-pedagogical materials and equipment currently do not favor use by students with serious motor disabilities.

  19. COMPUTING

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  20. COMPUTING

    M. Kasemann; P. McBride. Edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  1. Efficient Server-Aided Secure Two-Party Function Evaluation with Applications to Genomic Computation

    Blanton Marina

    2016-10-01

    Computation based on genomic data is becoming increasingly popular today, be it for medical or other purposes. Non-medical uses of genomic data in a computation often take place in a server-mediated setting where the server offers the ability for joint genomic testing between the users. Undeniably, genomic data is highly sensitive, which, in contrast to other biometry types, discloses a plethora of information not only about the data owner, but also about his or her relatives. Thus, there is an urgent need to protect genomic data. This is particularly true when the data is used in computation for what we call recreational non-health-related purposes. Towards this goal, in this work we put forward a framework for server-aided secure two-party computation with the security model motivated by genomic applications. One particular security setting that we treat in this work provides stronger security guarantees with respect to malicious users than the traditional malicious model. In particular, we incorporate certified inputs into secure computation based on garbled circuit evaluation to guarantee that a malicious user is unable to modify her inputs in order to learn unauthorized information about the other user’s data. Our solutions are general in the sense that they can be used to securely evaluate arbitrary functions and offer attractive performance compared to the state of the art. We apply the general constructions to three specific types of genomic tests: paternity, genetic compatibility, and ancestry testing and implement the constructions. The results show that all such private tests can be executed within a matter of seconds or less despite the large size of one’s genomic data.

  2. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis. PMID:23840507

  3. Probing the mutational interplay between primary and promiscuous protein functions: a computational-experimental approach.

    Garcia-Seisdedos, Hector; Ibarra-Molero, Beatriz; Sanchez-Ruiz, Jose M

    2012-01-01

    Protein promiscuity is of considerable interest due to its role in adaptive metabolic plasticity, its fundamental connection with molecular evolution and also because of its biotechnological applications. Current views on the relation between primary and promiscuous protein activities stem largely from laboratory evolution experiments aimed at increasing promiscuous activity levels. Here, on the other hand, we attempt to assess the main features of the simultaneous modulation of the primary and promiscuous functions during the course of natural evolution. The computational/experimental approach we propose for this task involves the following steps: a function-targeted, statistical coupling analysis of evolutionary data is used to determine a set of positions likely linked to the recruitment of a promiscuous activity for a new function; a combinatorial library of mutations on this set of positions is prepared and screened for both the primary and the promiscuous activities; a partial-least-squares reconstruction of the full combinatorial space is carried out; finally, an approximation to the Pareto set of variants with optimal primary/promiscuous activities is derived. Application of the approach to the emergence of folding catalysis in thioredoxin scaffolds reveals an unanticipated scenario: diverse patterns of primary/promiscuous activity modulation are possible, including a moderate (but likely significant in a biological context) simultaneous enhancement of both activities. We show that this scenario can be most simply explained on the basis of the conformational diversity hypothesis, although alternative interpretations cannot be ruled out. Overall, the results reported may help clarify the mechanisms of the evolution of new functions. From a different viewpoint, the partial-least-squares-reconstruction/Pareto-set-prediction approach we have introduced provides the computational basis for an efficient directed-evolution protocol aimed at the simultaneous

  4. Studies on the Zeroes of Bessel Functions and Methods for Their Computation: IV. Inequalities, Estimates, Expansions, etc., for Zeros of Bessel Functions

    Kerimov, M. K.

    2018-01-01

    This paper is the fourth in a series of survey articles concerning zeros of Bessel functions and methods for their computation. Various inequalities, estimates, expansions, etc. for positive zeros are analyzed, and some results are described in detail with proofs.
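As a baseline for the kind of zero computations such surveys analyze, a positive zero of a Bessel function can be bracketed and refined numerically. A sketch using the power series of J0 and bisection (an illustration, not one of the survey's own methods):

```python
import math

def bessel_j0(x: float, terms: int = 40) -> float:
    """Power series J0(x) = sum_{m>=0} (-1)^m (x/2)^(2m) / (m!)^2,
    adequate for moderate |x|."""
    total, term = 0.0, 1.0
    for m in range(terms):
        total += term
        # Ratio of consecutive series terms: -(x/2)^2 / (m+1)^2
        term *= -(x * x / 4.0) / ((m + 1) ** 2)
    return total

def first_zero(f, lo: float, hi: float, tol: float = 1e-12) -> float:
    """Bisection; f must change sign on [lo, hi]."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Smallest positive zero of J0, j_{0,1} ≈ 2.404826 (J0 changes sign on [2, 3]).
print(round(first_zero(bessel_j0, 2.0, 3.0), 6))  # 2.404826
```

The inequalities and estimates surveyed in the paper serve exactly to supply such brackets and starting values for higher zeros and orders.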

  5. Computational principles of syntax in the regions specialized for language: integrating theoretical linguistics and functional neuroimaging.

    Ohta, Shinri; Fukui, Naoki; Sakai, Kuniyoshi L

    2013-01-01

    The nature of computational principles of syntax remains to be elucidated. One promising approach to this problem would be to construct formal and abstract linguistic models that parametrically predict the activation modulations in the regions specialized for linguistic processes. In this article, we review recent advances in theoretical linguistics and functional neuroimaging in the following respects. First, we introduce the two fundamental linguistic operations: Merge (which combines two words or phrases to form a larger structure) and Search (which searches for and establishes a syntactic relation between two words or phrases). We also illustrate certain universal properties of human language, and present hypotheses regarding how sentence structures are processed in the brain. Hypothesis I is that the Degree of Merger (DoM), i.e., the maximum depth of merged subtrees within a given domain, is a key computational concept to properly measure the complexity of tree structures. Hypothesis II is that the basic frame of the syntactic structure of a given linguistic expression is determined essentially by functional elements, which trigger Merge and Search. We then present our recent functional magnetic resonance imaging experiment, demonstrating that the DoM is indeed a key syntactic factor that accounts for syntax-selective activations in the left inferior frontal gyrus and supramarginal gyrus. Hypothesis III is that the DoM domain changes dynamically in accordance with iterative Merge applications, the Search distances, and/or task requirements. We confirm that the DoM accounts for activations in various sentence types. Hypothesis III successfully explains activation differences between object- and subject-relative clauses, as well as activations during explicit syntactic judgment tasks. Future research on the computational principles of syntax will further deepen our understanding of uniquely human mental faculties.
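The Degree of Merger of Hypothesis I can be illustrated on toy binary Merge trees, with nested tuples standing in for merged phrases (an illustration only, not the authors' formal definition):

```python
def degree_of_merger(node) -> int:
    """Maximum depth of Merge applications in a binary tree whose
    leaves are words (strings) and whose internal nodes are 2-tuples."""
    if isinstance(node, str):  # a bare word: no Merge applied yet
        return 0
    left, right = node
    return 1 + max(degree_of_merger(left), degree_of_merger(right))

# "the cat" -> one Merge; "saw (the cat)" -> two nested Merges
np = ("the", "cat")
vp = ("saw", np)
print(degree_of_merger(np), degree_of_merger(vp))  # 1 2
```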

  6. Air trapping in sarcoidosis on computed tomography: Correlation with lung function

    Davies, C.W.H.; Tasker, A.D.; Padley, S.P.G.; Davies, R.J.O.; Gleeson, F.V.

    2000-01-01

    AIMS: To document the presence and extent of air trapping on high resolution computed tomography (HRCT) in patients with pulmonary sarcoidosis and correlate HRCT features with pulmonary function tests. METHODS: Twenty-one patients with pulmonary sarcoidosis underwent HRCT and pulmonary function assessment at presentation. Inspiratory and expiratory HRCT were assessed for the presence and extent of air trapping, ground-glass opacification, nodularity, septal thickening, bronchiectasis and parenchymal distortion. HRCT features were correlated with pulmonary function tests. RESULTS: Air trapping on expiratory HRCT was present in 20/21 (95%) patients. The extent of air trapping correlated with percentage predicted residual volume (RV)/total lung capacity (TLC) (r = 0.499; P < 0.05) and percentage predicted maximal mid-expiratory flow rate between 25 and 75% of the vital capacity (r = -0.54; P < 0.05). Ground-glass opacification was present in 4/21 (19%), nodularity in 18/21 (86%), septal thickening in 18/21 (86%), traction bronchiectasis in 14/21 (67%) and distortion in 12/21 (57%) of patients; there were no significant relationships between these CT features and pulmonary function results. CONCLUSION: Air trapping is a common feature in sarcoidosis and correlates with evidence of small airways disease on pulmonary function testing. Davies, C.W.H. (2000). Clinical Radiology 55, 217-221
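The r values quoted above are Pearson correlation coefficients; for reference, the computation is as follows (illustrative numbers, not the study data):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear data gives r = 1; anticorrelated data gives r = -1.
print(round(pearson_r([1, 2, 3, 4], [10, 20, 30, 40]), 6))  # 1.0
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 6))      # -1.0
```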

  7. Distribution of computer functionality for accelerator control at the Brookhaven AGS

    Stevens, A.; Clifford, T.; Frankel, R.

    1985-01-01

    A set of physical and functional system components and their interconnection protocols have been established for all controls work at the AGS. Portions of these designs were tested as part of enhanced operation of the AGS as a source of polarized protons, and additional segments will be implemented during the continuing construction efforts which are adding heavy-ion capability to our facility. Our efforts include the following computer and control system elements: a broadband local area network, which comprises modems, transmission systems and branch interface units; a hierarchical layer, which performs certain database and watchdog/alarm functions; a group of workstation processors (Apollos), which perform the function of traditional minicomputer hosts; and a layer which provides both real-time control and standardization functions for accelerator devices and instrumentation. Database and other accelerator functionality is assigned to the most appropriate level within our network for real-time performance, long-term utility, and orderly growth

  8. Fast and accurate three-dimensional point spread function computation for fluorescence microscopy.

    Li, Jizhou; Xue, Feng; Blu, Thierry

    2017-06-01

    The point spread function (PSF) plays a fundamental role in fluorescence microscopy. A realistic and accurately calculated PSF model can significantly improve the performance of 3D deconvolution microscopy as well as the localization accuracy in single-molecule microscopy. In this work, we propose a fast and accurate approximation of the Gibson-Lanni model, which has been shown to represent the PSF suitably under a variety of imaging conditions. We express Kirchhoff's integral in this model as a linear combination of rescaled Bessel functions, thus providing an integral-free way of carrying out the calculation. The explicit approximation error in terms of the parameters is given numerically. Experiments demonstrate that the proposed approach requires significantly less computational time than current state-of-the-art techniques to achieve the same accuracy. This approach can also be extended to other microscopy PSF models.
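The integral-free idea, replacing a quadrature by a Bessel-function evaluation, can be checked on the textbook identity J0(x) = (1/π) ∫₀^π cos(x sin t) dt. This is a generic illustration of the principle, not the Gibson-Lanni kernel itself:

```python
import math

def j0_series(x: float, terms: int = 40) -> float:
    """Integral-free evaluation: power series for J0(x)."""
    total, term = 0.0, 1.0
    for m in range(terms):
        total += term
        term *= -(x * x / 4.0) / ((m + 1) ** 2)
    return total

def j0_integral(x: float, n: int = 200) -> float:
    """Direct quadrature of J0(x) = (1/pi) * int_0^pi cos(x sin t) dt
    by the trapezoidal rule."""
    h = math.pi / n
    s = 0.5 * (math.cos(x * math.sin(0.0)) + math.cos(x * math.sin(math.pi)))
    for k in range(1, n):
        s += math.cos(x * math.sin(k * h))
    return s * h / math.pi

x = 2.0
print(abs(j0_series(x) - j0_integral(x)) < 1e-10)  # True
```

The series evaluation costs a few dozen multiplications per point, whereas the quadrature needs hundreds of trigonometric calls; the speed-ups reported in the abstract come from this kind of trade, applied to the full PSF model.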

  9. COMPUTING

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  10. COMPUTING

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  11. COMPUTING

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  12. COMPUTING

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  13. COMPUTING

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  14. COMPUTING

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  15. COMPUTING

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  16. A Practical Computational Method for the Anisotropic Redshift-Space 3-Point Correlation Function

    Slepian, Zachary; Eisenstein, Daniel J.

    2018-04-01

    We present an algorithm enabling computation of the anisotropic redshift-space galaxy 3-point correlation function (3PCF) scaling as N², with N the number of galaxies. Our previous work showed how to compute the isotropic 3PCF with this scaling by expanding the radially-binned density field around each galaxy in the survey into spherical harmonics and combining these coefficients to form multipole moments. The N² scaling occurred because this approach never explicitly required the relative angle between a galaxy pair about the primary galaxy. Here we generalize this work, demonstrating that in the presence of azimuthally-symmetric anisotropy produced by redshift-space distortions (RSD) the 3PCF can be described by two triangle side lengths, two independent total angular momenta, and a spin. This basis for the anisotropic 3PCF allows its computation with negligible additional work over the isotropic 3PCF. We also present the covariance matrix of the anisotropic 3PCF measured in this basis. Our algorithm tracks the full 5-D redshift-space 3PCF, uses an accurate line of sight to each triplet, is exact in angle, and easily handles edge correction. It will enable use of the anisotropic large-scale 3PCF as a probe of RSD in current and upcoming large-scale redshift surveys.
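The factorisation behind the N² scaling can be seen in its simplest, 2-D (Fourier) analogue: for secondaries at azimuths φ_i around a primary, Σ_{i<j} cos(ℓ(φ_i − φ_j)) = ((Σ_i cos ℓφ_i)² + (Σ_i sin ℓφ_i)² − N)/2, so pair opening angles never need to be formed explicitly. This sketch illustrates the idea only, not the paper's spherical-harmonic algorithm:

```python
import math

def pair_sum_direct(phis, ell):
    """O(N^2): explicit loop over the relative angles of all pairs."""
    return sum(math.cos(ell * (p - q))
               for i, p in enumerate(phis)
               for q in phis[i + 1:])

def pair_sum_moments(phis, ell):
    """O(N): accumulate per-point moments, never forming pair angles.
    Uses |sum_i e^{i l phi_i}|^2 = N + 2 * sum_{i<j} cos(l (phi_i - phi_j))."""
    c = sum(math.cos(ell * p) for p in phis)
    s = sum(math.sin(ell * p) for p in phis)
    return 0.5 * (c * c + s * s - len(phis))

phis = [0.1, 0.7, 1.9, 2.4, 3.3, 5.0]
for ell in (1, 2, 3):
    assert abs(pair_sum_direct(phis, ell) - pair_sum_moments(phis, ell)) < 1e-10
print("moment factorisation matches the explicit pair sum")
```

On the sphere the same role is played by the addition theorem for spherical harmonics, with the per-point sums replaced by harmonic coefficients of the radially-binned density field.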

  17. Morphological and Functional Evaluation of Quadricuspid Aortic Valves Using Cardiac Computed Tomography

    Song, Inyoung; Park, Jung Ah; Choi, Bo Hwa; Ko, Sung Min [Department of Radiology, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul 05030 (Korea, Republic of)]; Shin, Je Kyoun; Chee, Hyun Keun; Kim, Jun Seok [Department of Thoracic Surgery, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul 05030 (Korea, Republic of)]

    2016-11-01

    The aim of this study was to identify the morphological and functional characteristics of quadricuspid aortic valves (QAV) on cardiac computed tomography (CCT). We retrospectively enrolled 11 patients with QAV. All patients underwent CCT and transthoracic echocardiography (TTE), and 7 patients underwent cardiovascular magnetic resonance (CMR). The presence and classification of QAV assessed by CCT was compared with that of TTE and intraoperative findings. The regurgitant orifice area (ROA) measured by CCT was compared with severity of aortic regurgitation (AR) by TTE and the regurgitant fraction (RF) by CMR. All of the patients had AR; 9 had pure AR, 1 had combined aortic stenosis and regurgitation, and 1 had combined subaortic stenosis and regurgitation. Two patients had a subaortic fibrotic membrane and 1 of them showed a subaortic stenosis. One QAV was misdiagnosed as tricuspid aortic valve on TTE. In accordance with the Hurwitz and Roberts' classification, consensus was reached on the QAV classification between the CCT and TTE findings in 7 of 10 patients. The patients were classified as type A (n = 1), type B (n = 3), type C (n = 1), type D (n = 4), and type F (n = 2) on CCT. A very high correlation existed between ROA by CCT and RF by CMR (r = 0.99), whereas a good correlation existed between ROA by CCT and regurgitant severity by TTE (r = 0.62). Cardiac computed tomography provides comprehensive anatomical and functional information about the QAV.

  18. Morphological and functional evaluation of quadricuspid aortic valves using cardiac computed tomography

    Song, In Young; Park, Jung Ah; Choi, Bo Hwa; Ko, Sung Min; Shin, Je Kyoun; Chee, Hyun Keun; Kim, Jun Seok [Konkuk University Medical Center, Konkuk University School of Medicine, Seoul (Korea, Republic of)]

    2016-07-15

    The aim of this study was to identify the morphological and functional characteristics of quadricuspid aortic valves (QAV) on cardiac computed tomography (CCT). We retrospectively enrolled 11 patients with QAV. All patients underwent CCT and transthoracic echocardiography (TTE), and 7 patients underwent cardiovascular magnetic resonance (CMR). The presence and classification of QAV assessed by CCT was compared with that of TTE and intraoperative findings. The regurgitant orifice area (ROA) measured by CCT was compared with severity of aortic regurgitation (AR) by TTE and the regurgitant fraction (RF) by CMR. All of the patients had AR; 9 had pure AR, 1 had combined aortic stenosis and regurgitation, and 1 had combined subaortic stenosis and regurgitation. Two patients had a subaortic fibrotic membrane and 1 of them showed a subaortic stenosis. One QAV was misdiagnosed as tricuspid aortic valve on TTE. In accordance with the Hurwitz and Roberts' classification, consensus was reached on the QAV classification between the CCT and TTE findings in 7 of 10 patients. The patients were classified as type A (n = 1), type B (n = 3), type C (n = 1), type D (n = 4), and type F (n = 2) on CCT. A very high correlation existed between ROA by CCT and RF by CMR (r = 0.99), whereas a good correlation existed between ROA by CCT and regurgitant severity by TTE (r = 0.62). Cardiac computed tomography provides comprehensive anatomical and functional information about the QAV.

  19. Gravity-supported exercise with computer gaming improves arm function in chronic stroke.

    Jordan, Kimberlee; Sampson, Michael; King, Marcus

    2014-08-01

    To investigate the effect of 4 to 6 weeks of exergaming with a computer mouse embedded within an arm skate on upper limb function in survivors of chronic stroke. Intervention study with a 4-week postintervention follow-up. In home. Survivors (N=13) of chronic (≥6 mo) stroke with hemiparesis of the upper limb with stable baseline Fugl-Meyer assessment scores received the intervention. One participant withdrew, and 2 participants were not reassessed at the 4-week follow-up. No participants withdrew as a result of adverse effects. Four to 6 weeks of exergaming using the arm skate where participants received either 9 (n=5) or 16 (n=7) hours of game play. Upper limb component of the Fugl-Meyer assessment. There was an average increase in the Fugl-Meyer upper limb assessment score from the beginning to end of the intervention of 4.9 points. At the end of the 4-week period after the intervention, the increase was 4.4 points. A 4- to 6-week intervention using the arm skate significantly improved arm function in survivors of chronic stroke by an average of 4.9 Fugl-Meyer upper limb assessment points. This research shows that a larger-scale randomized trial of this device is warranted and highlights the potential value of using virtual reality technology (eg, computer games) in a rehabilitation setting. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  20. Rayleigh radiance computations for satellite remote sensing: accounting for the effect of sensor spectral response function.

    Wang, Menghua

    2016-05-30

    To understand and assess the effect of the sensor spectral response function (SRF) on the accuracy of the top of the atmosphere (TOA) Rayleigh-scattering radiance computation, new TOA Rayleigh radiance lookup tables (LUTs) over global oceans and inland waters have been generated. The new Rayleigh LUTs include spectral coverage of 335-2555 nm, all possible solar-sensor geometries, and surface wind speeds of 0-30 m/s. Using the new Rayleigh LUTs, the sensor SRF effect on the accuracy of the TOA Rayleigh radiance computation has been evaluated for spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite and the Joint Polar Satellite System (JPSS)-1, showing some important uncertainties for VIIRS-SNPP, particularly for large solar- and/or sensor-zenith angles as well as for large Rayleigh optical thicknesses (i.e., short wavelengths) and bands with broad spectral bandwidths. To accurately account for the sensor SRF effect, a new correction algorithm has been developed for VIIRS spectral bands, which improves the TOA Rayleigh radiance accuracy to ~0.01% even for the large solar-zenith angles of 70°-80°, compared with the error of ~0.7% without applying the correction for the VIIRS-SNPP 410 nm band. The same methodology that accounts for the sensor SRF effect on the Rayleigh radiance computation can be used for other satellite sensors. In addition, with the new Rayleigh LUTs, the effect of surface atmospheric pressure variation on the TOA Rayleigh radiance computation can be calculated precisely, and no specific atmospheric pressure correction algorithm is needed. There are some other important applications and advantages to using the new Rayleigh LUTs for satellite remote sensing, including an efficient and accurate TOA Rayleigh radiance computation for hyperspectral satellite remote sensing, detector-based TOA Rayleigh radiance computation, Rayleigh radiance calculations for high altitude
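The SRF effect arises because the band-averaged radiance ∫SRF(λ)L(λ)dλ / ∫SRF(λ)dλ differs from the radiance at the band-centre wavelength, and the difference grows where Rayleigh scattering (∝ λ⁻⁴) varies steeply across a broad band. A toy sketch with a boxcar SRF and a λ⁻⁴ spectrum (illustrative assumptions, not the actual VIIRS bands):

```python
def band_average(wavelengths, srf, spectrum):
    """Trapezoidal SRF-weighted band average of a spectrum."""
    num = den = 0.0
    for k in range(len(wavelengths) - 1):
        dw = wavelengths[k + 1] - wavelengths[k]
        num += 0.5 * (srf[k] * spectrum[k] + srf[k + 1] * spectrum[k + 1]) * dw
        den += 0.5 * (srf[k] + srf[k + 1]) * dw
    return num / den

# Rayleigh-like spectrum ~ lambda^-4 across a 20 nm-wide band at 410 nm.
wl = [400.0 + 0.5 * i for i in range(41)]      # 400..420 nm grid
srf = [1.0] * len(wl)                          # idealised boxcar response
ray = [(w / 410.0) ** -4 for w in wl]          # normalised to 1 at 410 nm

avg = band_average(wl, srf, ray)
print(f"band average {avg:.5f} vs band-centre value 1.00000")
```

Because λ⁻⁴ is convex, the band average exceeds the band-centre value by a fraction of a percent here, which is the order of the uncorrected error the abstract reports for the 410 nm band.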

  1. Intersections between the Autism Spectrum and the Internet: Perceived Benefits and Preferred Functions of Computer-Mediated Communication

    Gillespie-Lynch, Kristen; Kapp, Steven K.; Shane-Simpson, Christina; Smith, David Shane; Hutman, Ted

    2014-01-01

    An online survey compared the perceived benefits and preferred functions of computer-mediated communication of participants with (N = 291) and without ASD (N = 311). Participants with autism spectrum disorder (ASD) perceived benefits of computer-mediated communication in terms of increased comprehension and control over communication, access to…

  2. An Algorithm Computing the Local $b$ Function by an Approximate Division Algorithm in $\hat{\mathcal{D}}$

    Nakayama, Hiromasa

    2006-01-01

    We give an algorithm to compute the local $b$ function. In this algorithm, we use the Mora division algorithm in the ring of differential operators and an approximate division algorithm in the ring of differential operators with power series coefficients.

  3. Exact fast computation of band depth for large functional datasets: How quickly can one million curves be ranked?

    Sun, Ying; Genton, Marc G.; Nychka, Douglas W.

    2012-01-01

    © 2012 John Wiley & Sons, Ltd. Band depth is an important nonparametric measure that generalizes order statistics and makes univariate methods based on order statistics possible for functional data. However, the computational burden of band depth
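The band depth in question can be sketched in its simplest form, BD⁽²⁾, where a curve's depth is the fraction of curve pairs whose pointwise envelope contains it entirely. The naive version below has cubic cost in the number of curves, which is exactly the computational burden the authors address (a sketch, not their fast algorithm):

```python
from itertools import combinations

def band_depth(curves):
    """BD^(2): for each curve, the fraction of curve pairs whose
    pointwise envelope (band) contains it at every time point."""
    n = len(curves)
    pairs = list(combinations(range(n), 2))
    depths = []
    for cj in curves:
        inside = 0
        for i, k in pairs:
            lo = [min(a, b) for a, b in zip(curves[i], curves[k])]
            hi = [max(a, b) for a, b in zip(curves[i], curves[k])]
            if all(l <= x <= h for l, x, h in zip(lo, cj, hi)):
                inside += 1
        depths.append(inside / len(pairs))
    return depths

# The middle curve lies inside every band, so it is the deepest (most central).
curves = [[0, 0, 0], [1, 2, 1], [3, 3, 3]]
print(band_depth(curves))  # [0.6666666666666666, 1.0, 0.6666666666666666]
```

Ranking one million curves this way is infeasible, which motivates the exact fast computation the paper develops.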

  4. Development of utility generic functional requirements for electronic work packages and computer-based procedures

    Oxstrand, Johanna [Idaho National Lab. (INL), Idaho Falls, ID (United States)]

    2017-06-01

    The Nuclear Electronic Work Packages - Enterprise Requirements (NEWPER) initiative is a step toward a vision of implementing an eWP framework that includes many types of eWPs. This will enable immediate paper-related cost savings in work management and provide a path to future labor efficiency gains through enhanced integration and process improvement in support of the Nuclear Promise (Nuclear Energy Institute 2016). The NEWPER initiative was organized by the Nuclear Information Technology Strategic Leadership (NITSL) group, an organization that brings together leaders from the nuclear utility industry and regulatory agencies to address issues involved with information technology used in nuclear-power utilities. NITSL strives to maintain awareness of industry information technology-related initiatives and events and communicates those events to its membership. NITSL and LWRS Program researchers have been coordinating activities, including joint organization of NEWPER-related meetings and report development. The main goal of the NEWPER initiative was to develop a set of utility generic functional requirements for eWP systems. This set of requirements will support each utility in its process of identifying plant-specific functional and non-functional requirements. The NEWPER initiative has 140 members; the largest group consists of 19 commercial U.S. nuclear utilities, together with 11 of the most prominent vendors of eWP solutions. Through the NEWPER initiative, two sets of functional requirements were developed: functional requirements for electronic work packages and functional requirements for computer-based procedures. This paper describes the development process and summarizes the requirements.

  5. COMPUTING

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  6. COMPUTING

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  7. COMPUTING

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns lead by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  8. COMPUTING

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  9. COMPUTING

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  10. Computer Simulations Reveal Multiple Functions for Aromatic Residues in Cellulase Enzymes (Fact Sheet)

    2012-07-01

    NREL researchers use high-performance computing to demonstrate fundamental roles of aromatic residues in cellulase enzyme tunnels. National Renewable Energy Laboratory (NREL) computer simulations of a key industrial enzyme, the Trichoderma reesei Family 6 cellulase (Cel6A), predict that aromatic residues near the enzyme's active site and at the entrance and exit tunnel perform different functions in substrate binding and catalysis, depending on their location in the enzyme. These results suggest that nature employs aromatic-carbohydrate interactions with a wide variety of binding affinities for diverse functions. Outcomes also suggest that protein engineering strategies in which mutations are made around the binding sites may require tailoring specific to the enzyme family. Cellulase enzymes ubiquitously exhibit tunnels or clefts lined with aromatic residues for processing carbohydrate polymers to monomers, but the molecular-level role of these aromatic residues remains unknown. In silico mutation of the aromatic residues near the catalytic site of Cel6A has little impact on the binding affinity, but simulation suggests that these residues play a major role in the glucopyranose ring distortion necessary for cleaving glycosidic bonds to produce fermentable sugars. Removal of aromatic residues at the entrance and exit of the cellulase tunnel, however, dramatically impacts the binding affinity. This suggests that these residues play a role in acquiring cellulose chains from the cellulose crystal and stabilizing the reaction product, respectively. These results illustrate that the role of aromatic-carbohydrate interactions varies dramatically depending on the position in the enzyme tunnel. As aromatic-carbohydrate interactions are present in all carbohydrate-active enzymes, the results have implications for understanding protein structure-function relationships in carbohydrate metabolism and recognition, carbon turnover in nature, and protein engineering

  11. A BASIC program for an IBM PC compatible computer for drawing the weak phase object contrast transfer function

    Olsen, A.; Skjerpe, P.

    1989-01-01

    This report describes a computer program which is useful in high-resolution microscopy. The program is written in BASIC and calculates the weak phase object contrast transfer function as a function of instrumental and imaging parameters. The function is plotted on the PC graphics screen, and by a Print Screen command it can be copied to the printer. The program runs on both the Hercules graphics card and the IBM CGA card. 2 figs
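The quantity the report plots is standard enough to sketch: for a weak phase object the CTF is sin χ(k), with χ(k) = πλΔf·k² + (π/2)C_sλ³k⁴ in one common sign convention (the report's exact convention is not stated). A minimal NumPy sketch, with illustrative 200 kV values chosen here rather than taken from the report:

```python
import numpy as np

def wpo_ctf(k, wavelength, defocus, cs):
    """Weak-phase-object CTF = sin(chi), with the common convention
    chi(k) = pi*lambda*defocus*k^2 + (pi/2)*Cs*lambda^3*k^4.
    k in 1/m; wavelength, defocus, cs in metres."""
    chi = np.pi * wavelength * defocus * k**2 \
        + 0.5 * np.pi * cs * wavelength**3 * k**4
    return np.sin(chi)

# Illustrative assumptions: 200 kV electrons (lambda ~ 2.51 pm), Cs = 1.2 mm,
# extended Scherzer-style underfocus
lam = 2.51e-12
cs = 1.2e-3
df = -1.2 * np.sqrt(cs * lam)
k = np.linspace(0.0, 8e9, 400)   # spatial frequency up to 8 nm^-1
ctf = wpo_ctf(k, lam, df, cs)
```

Plotting `ctf` against `k` (e.g. with matplotlib) reproduces the familiar oscillating transfer curve the program draws on screen.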

  12. COMPUTING

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. A variety of activities are still ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS installation and its components are now deployed at CERN, in addition to the GlideInWMS factory located in the US. There is new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  13. COPD phenotypes on computed tomography and its correlation with selected lung function variables in severe patients

    da Silva SMD

    2016-03-01

    Silvia Maria Doria da Silva, Ilma Aparecida Paschoal, Eduardo Mello De Capitani, Marcos Mello Moreira, Luciana Campanatti Palhares, Mônica Corso Pereira — Pneumology Service, Department of Internal Medicine, School of Medical Sciences, State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil. Background: Computed tomography (CT) phenotypic characterization helps in understanding the clinical diversity of chronic obstructive pulmonary disease (COPD) patients, but its clinical relevance and its relationship with functional features are not clarified. Volumetric capnography (VC) uses the principle of gas washout and analyzes the pattern of CO2 elimination as a function of expired volume. The main variables analyzed were end-tidal concentration of carbon dioxide (ETCO2), slope of phase 2 (Slp2), and slope of phase 3 (Slp3) of the capnogram, the curve which represents the total amount of CO2 eliminated by the lungs during each breath. Objective: To investigate, in a group of patients with severe COPD, whether phenotypic analysis by CT could identify different subsets of patients, and whether there was an association between CT findings and functional variables. Subjects and methods: Sixty-five patients with COPD GOLD III–IV were admitted for clinical evaluation, high-resolution CT, and functional evaluation (spirometry, 6-minute walk test [6MWT], and VC). The presence and profusion of tomographic findings were evaluated, and the patients were then classified as having an emphysema (EMP) or airway disease (AWD) phenotype. The EMP and AWD groups were compared; tomography finding scores were evaluated versus spirometric, 6MWT, and VC variables. Results: Bronchiectasis was found in 33.8% and peribronchial thickening in 69.2% of the 65 patients. Structural findings of airways had no significant correlation with spirometric variables. Air trapping and EMP were strongly correlated with VC variables, but in opposite directions. There was some overlap between the EMP and AWD

  14. ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers

    Torrent, Marc

    2014-03-01

    For several years, a continuous effort has been made to adapt electronic structure codes based on Density-Functional Theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions, which allows it to treat systems of any kind. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT - especially for standard LDA/GGA ground-state and response-function calculations - several strategies have been followed: A full multi-level MPI parallelisation scheme has been implemented, exploiting all possible levels and distributing both computation and memory. It increases the number of distributed processes and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem (``Locally Optimal Blocked Conjugate Gradient''), a Blocked-Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed-memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (OpenMP/OpenACC) or porting some consuming code sections to Graphics Processing Units (GPUs). As no simple performance model exists, the complexity of use has increased; the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically choose the most favourable one. On the other hand, a big effort has been carried out to analyse the performance of the code on petascale architectures, showing which sections of the code have to be improved; they are all related to matrix algebra (diagonalisation, orthogonalisation). The different strategies employed to improve the code scalability will be described. They are based on an exploration of new diagonalization
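The blocked eigensolver family named above (LOBPCG) is available in SciPy as `scipy.sparse.linalg.lobpcg`. A minimal sketch of a blocked solve for the lowest few eigenpairs, using a diagonal stand-in matrix in place of a real plane-wave Hamiltonian (this illustrates the algorithm, not ABINIT's distributed implementation):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

# Stand-in "Hamiltonian": a sparse SPD matrix whose eigenvalues are 1..n
# by construction, so the answer is known.
n, nev = 200, 4
A = diags(np.arange(1, n + 1, dtype=float))

rng = np.random.default_rng(0)
X = rng.standard_normal((n, nev))   # block of trial vectors ("bands")

# Smallest nev eigenpairs, as in a ground-state solve
w, v = lobpcg(A, X, largest=False, tol=1e-8, maxiter=200)
```

The block size `nev` plays the role of the band count; distributing both the plane-wave and band dimensions of `X` across processes is the combined distribution described in the abstract.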

  15. Passive Stretch Induces Structural and Functional Maturation of Engineered Heart Muscle as Predicted by Computational Modeling.

    Abilez, Oscar J; Tzatzalos, Evangeline; Yang, Huaxiao; Zhao, Ming-Tao; Jung, Gwanghyun; Zöllner, Alexander M; Tiburcy, Malte; Riegler, Johannes; Matsa, Elena; Shukla, Praveen; Zhuge, Yan; Chour, Tony; Chen, Vincent C; Burridge, Paul W; Karakikes, Ioannis; Kuhl, Ellen; Bernstein, Daniel; Couture, Larry A; Gold, Joseph D; Zimmermann, Wolfram H; Wu, Joseph C

    2018-02-01

    The ability to differentiate human pluripotent stem cells (hPSCs) into cardiomyocytes (CMs) makes them an attractive source for repairing injured myocardium, disease modeling, and drug testing. Although current differentiation protocols yield hPSC-CMs at >90% efficiency, hPSC-CMs exhibit immature characteristics. With the goal of overcoming this limitation, we tested the effects of varying passive stretch on engineered heart muscle (EHM) structural and functional maturation, guided by computational modeling. Human embryonic stem cells (hESCs, H7 line) or human induced pluripotent stem cells (IMR-90 line) were differentiated to hPSC-derived cardiomyocytes (hPSC-CMs) in vitro using a small-molecule-based protocol. hPSC-CMs were characterized by troponin+ flow cytometry as well as electrophysiological measurements. Afterwards, 1.2 × 10⁶ hPSC-CMs were mixed with 0.4 × 10⁶ human fibroblasts (IMR-90 line) (3:1 ratio) and type-I collagen. The blend was cast into custom-made 12-mm-long polydimethylsiloxane reservoirs to vary nominal passive stretch of EHMs to 5, 7, or 9 mm. EHM characteristics were monitored for up to 50 days, with EHMs having a passive stretch of 7 mm giving the most consistent formation. Based on our initial macroscopic observations of EHM formation, we created a computational model that predicts the stress distribution throughout EHMs, which is a function of cellular composition, cellular ratio, and geometry. Based on this predictive modeling, we show cell alignment by immunohistochemistry and coordinated calcium waves by calcium imaging. Furthermore, coordinated calcium waves and mechanical contractions were apparent throughout entire EHMs. The stiffness and active forces of hPSC-derived EHMs are comparable with rat neonatal cardiomyocyte-derived EHMs. Three-dimensional EHMs display increased expression of mature cardiomyocyte genes including sarcomeric protein troponin-T, calcium and potassium ion channels, β-adrenergic receptors, and t

  16. Chronic hypersensitivity pneumonitis: high resolution computed tomography patterns and pulmonary function indices as prognostic determinants

    Walsh, Simon L.F.; Devaraj, Anand; Hansell, David M. [Royal Brompton Hospital, Department of Radiology, London (United Kingdom); Sverzellati, Nicola [University of Parma, Department of Clinical Sciences, Section of Radiology, Parma (Italy); Wells, Athol U. [Royal Brompton Hospital, Interstitial Lung Diseases Unit, London (United Kingdom)

    2012-08-15

    To investigate high resolution computed tomography (HRCT) and pulmonary function indices (PFTs) for determining prognosis in patients with chronic fibrotic hypersensitivity pneumonitis (CHP). Case records, PFTs (FEV₁, FVC and DLco) and HRCTs of ninety-two patients with chronic hypersensitivity pneumonitis were evaluated. HRCT studies were scored by two observers for total disease extent, ground-glass opacification, fine and coarse reticulation, microcystic and macrocystic honeycombing, centrilobular emphysema and consolidation. Traction bronchiectasis within each pattern was graded. Using Cox proportional hazards regression models the prognostic strength of individual HRCT patterns and pulmonary function test variables were determined. There were forty-two deaths during the study period. Increasing severity of traction bronchiectasis was the strongest predictor of mortality (HR 1.10, P < 0.001, 95%CI 1.04-1.16). Increasing global interstitial disease extent (HR 1.02, P = 0.02, 95%CI 1.00-1.03), microcystic honeycombing (HR 1.09, P = 0.019, 95%CI 1.01-1.17) and macrocystic honeycombing (HR 1.06, P < 0.01, 95%CI 1.01-1.10) were also independent predictors of mortality. In contrast, no individual PFT variable was predictive of mortality once HRCT patterns were accounted for. HRCT patterns, in particular, severity of traction bronchiectasis and extent of honeycombing are superior to pulmonary function tests for predicting mortality in patients with CHP. (orig.)

  17. The relationship between lung function impairment and quantitative computed tomography in chronic obstructive pulmonary disease

    Mets, O.M. [Radiology, University Medical Center Utrecht (Netherlands); University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Murphy, K. [Image Sciences Institute, University Medical Center Utrecht (Netherlands); Zanen, P.; Lammers, J.W. [Pulmonology, University Medical Center Utrecht (Netherlands); Gietema, H.A.; Jong, P.A. de [Radiology, University Medical Center Utrecht (Netherlands); Ginneken, B. van [Image Sciences Institute, University Medical Center Utrecht (Netherlands); Radboud University Nijmegen Medical Centre, Diagnostic Image Analysis Group, Radiology, Nijmegen (Netherlands); Prokop, M. [Radiology, University Medical Center Utrecht (Netherlands); Radiology, Radboud University Nijmegen Medical Centre (Netherlands)

    2012-01-15

    To determine the relationship between lung function impairment and quantitative computed tomography (CT) measurements of air trapping and emphysema in a population of current and former heavy smokers with and without airflow limitation. In 248 subjects (50 normal smokers; 50 mild obstruction; 50 moderate obstruction; 50 severe obstruction; 48 very severe obstruction) CT emphysema and CT air trapping were quantified on paired inspiratory and end-expiratory CT examinations using several available quantification methods. CT measurements were related to lung function (FEV₁, FEV₁/FVC, RV/TLC, Kco) by univariate and multivariate linear regression analysis. Quantitative CT measurements of emphysema and air trapping were strongly correlated to airflow limitation (univariate r-squared up to 0.72, p < 0.001). In multivariate analysis, the combination of CT emphysema and CT air trapping explained 68-83% of the variability in airflow limitation in subjects covering the total range of airflow limitation (p < 0.001). The combination of quantitative CT air trapping and emphysema measurements is strongly associated with lung function impairment in current and former heavy smokers with a wide range of airflow limitation. (orig.)

  18. Ensemble-based computational approach discriminates functional activity of p53 cancer and rescue mutants.

    Özlem Demir

    2011-10-01

    The tumor suppressor protein p53 can lose its function upon single-point missense mutations in the core DNA-binding domain ("cancer mutants"). Activity can be restored by second-site suppressor mutations ("rescue mutants"). This paper relates the functional activity of p53 cancer and rescue mutants to their overall molecular dynamics (MD), without focusing on local structural details. A novel global measure of protein flexibility for the p53 core DNA-binding domain, the number of clusters at a certain RMSD cutoff, was computed by clustering over 0.7 µs of explicitly solvated all-atom MD simulations. For wild-type p53 and a sample of p53 cancer or rescue mutants, the number of clusters was a good predictor of in vivo p53 functional activity in cell-based assays. This number-of-clusters (NOC) metric was strongly correlated (r² = 0.77) with reported values of experimentally measured ΔΔG protein thermodynamic stability. Interpreting the number of clusters as a measure of protein flexibility: (i) p53 cancer mutants were more flexible than wild-type protein, (ii) second-site rescue mutations decreased the flexibility of cancer mutants, and (iii) negative controls of non-rescue second-site mutants did not. This new method reflects the overall stability of the p53 core domain and can discriminate which second-site mutations restore activity to p53 cancer mutants.
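The number-of-clusters idea can be illustrated with a simple greedy (leader-style) clustering at a fixed cutoff. The study clustered MD frames by RMSD; this sketch substitutes Euclidean distance on synthetic vectors, and the ensemble widths and cutoff are illustrative assumptions:

```python
import numpy as np

def leader_clusters(frames, cutoff):
    """Greedy leader-style clustering: each frame joins the first existing
    cluster whose representative lies within `cutoff`, otherwise it founds
    a new cluster. Returns the number of clusters (the flexibility metric)."""
    reps = []
    for f in frames:
        if not any(np.linalg.norm(f - r) <= cutoff for r in reps):
            reps.append(f)
    return len(reps)

rng = np.random.default_rng(1)
rigid = rng.standard_normal((300, 10)) * 0.3     # tight ensemble (stable protein)
flexible = rng.standard_normal((300, 10)) * 2.0  # broad ensemble (flexible mutant)
n_rigid = leader_clusters(rigid, cutoff=2.0)
n_flex = leader_clusters(flexible, cutoff=2.0)
```

A broader, more "flexible" ensemble yields more clusters at the same cutoff, which is the direction of the correlation reported above for cancer mutants versus wild type.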

  19. The relationship between lung function impairment and quantitative computed tomography in chronic obstructive pulmonary disease

    Mets, O.M.; Murphy, K.; Zanen, P.; Lammers, J.W.; Gietema, H.A.; Jong, P.A. de; Ginneken, B. van; Prokop, M.

    2012-01-01

    To determine the relationship between lung function impairment and quantitative computed tomography (CT) measurements of air trapping and emphysema in a population of current and former heavy smokers with and without airflow limitation. In 248 subjects (50 normal smokers; 50 mild obstruction; 50 moderate obstruction; 50 severe obstruction; 48 very severe obstruction) CT emphysema and CT air trapping were quantified on paired inspiratory and end-expiratory CT examinations using several available quantification methods. CT measurements were related to lung function (FEV₁, FEV₁/FVC, RV/TLC, Kco) by univariate and multivariate linear regression analysis. Quantitative CT measurements of emphysema and air trapping were strongly correlated to airflow limitation (univariate r-squared up to 0.72, p < 0.001). In multivariate analysis, the combination of CT emphysema and CT air trapping explained 68-83% of the variability in airflow limitation in subjects covering the total range of airflow limitation (p < 0.001). The combination of quantitative CT air trapping and emphysema measurements is strongly associated with lung function impairment in current and former heavy smokers with a wide range of airflow limitation. (orig.)

  20. Chronic hypersensitivity pneumonitis: high resolution computed tomography patterns and pulmonary function indices as prognostic determinants

    Walsh, Simon L.F.; Devaraj, Anand; Hansell, David M.; Sverzellati, Nicola; Wells, Athol U.

    2012-01-01

    To investigate high resolution computed tomography (HRCT) and pulmonary function indices (PFTs) for determining prognosis in patients with chronic fibrotic hypersensitivity pneumonitis (CHP). Case records, PFTs (FEV₁, FVC and DLco) and HRCTs of ninety-two patients with chronic hypersensitivity pneumonitis were evaluated. HRCT studies were scored by two observers for total disease extent, ground-glass opacification, fine and coarse reticulation, microcystic and macrocystic honeycombing, centrilobular emphysema and consolidation. Traction bronchiectasis within each pattern was graded. Using Cox proportional hazards regression models the prognostic strength of individual HRCT patterns and pulmonary function test variables were determined. There were forty-two deaths during the study period. Increasing severity of traction bronchiectasis was the strongest predictor of mortality (HR 1.10, P < 0.001, 95%CI 1.04-1.16). Increasing global interstitial disease extent (HR 1.02, P = 0.02, 95%CI 1.00-1.03), microcystic honeycombing (HR 1.09, P = 0.019, 95%CI 1.01-1.17) and macrocystic honeycombing (HR 1.06, P < 0.01, 95%CI 1.01-1.10) were also independent predictors of mortality. In contrast, no individual PFT variable was predictive of mortality once HRCT patterns were accounted for. HRCT patterns, in particular, severity of traction bronchiectasis and extent of honeycombing are superior to pulmonary function tests for predicting mortality in patients with CHP. (orig.)

  1. Comparison of measured and computed phase functions of individual tropospheric ice crystals

    Stegmann, Patrick G.; Tropea, Cameron; Järvinen, Emma; Schnaiter, Martin

    2016-01-01

    Airplanes passing through the incus (Latin: anvil) regions of tropical cumulonimbus clouds are at risk of suffering an engine power-loss event and engine damage due to ice ingestion (Mason et al., 2006 [1]). Research in this field relies on optical measurement methods to characterize ice crystals; however, the design and implementation of such methods presently suffer from the lack of reliable and efficient means of predicting the light scattering from ice crystals. The nascent discipline of direct measurement of ice-crystal phase functions, in conjunction with particle imaging and forward modelling through geometrical-optics and T-matrix codes, for the first time allows us to obtain a deeper understanding of the optical properties of real tropospheric ice crystals. In this manuscript, a sample phase function obtained via the Particle Habit Imaging and Polar Scattering (PHIPS) probe during a measurement campaign in flight over Brazil is compared to three different light scattering codes: a newly developed first-order geometrical optics code taking into account the influence of the Gaussian beam illumination used in the PHIPS device, the reference ray tracing code of Macke, and the T-matrix code of Kahnert. - Highlights: • A GO code for shaped beams and non-spherical particles has been developed. • The code has been validated against exact Mie results. • Measured and computed phase functions for a single ice crystal have been compared. • The comparison highlights differences in the backscattering region.
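For intuition about what a scattering phase function is, the single-parameter Henyey-Greenstein function is a common analytic stand-in (it is not one of the codes compared in the paper, and the asymmetry parameter below is an illustrative assumption):

```python
import numpy as np

def henyey_greenstein(mu, g):
    """Henyey-Greenstein phase function p(mu), mu = cos(theta), normalised
    over the sphere: 2*pi * integral_{-1}^{1} p(mu) dmu = 1."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * mu) ** 1.5)

g = 0.85                           # strongly forward-peaked scattering (assumption)
mu = np.linspace(-1.0, 1.0, 20001)
p = henyey_greenstein(mu, g)

# Check the normalisation with a trapezoidal rule
norm = 2.0 * np.pi * np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(mu))
```

The forward peak (mu near +1) dominates the backscattering direction (mu near -1) by orders of magnitude, which is why the backscattering region highlighted in the comparison above is the hardest part to get right.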

  2. Training Older Adults to Use Tablet Computers: Does It Enhance Cognitive Function?

    Chan, Micaela Y; Haber, Sara; Drew, Linda M; Park, Denise C

    2016-06-01

    Recent evidence shows that engaging in learning new skills improves episodic memory in older adults. In this study, older adults who were computer novices were trained to use a tablet computer and associated software applications. We hypothesize that sustained engagement in this mentally challenging training would yield a dual benefit of improved cognition and enhancement of everyday function by introducing useful skills. A total of 54 older adults (age 60-90) committed 15 hr/week for 3 months. Eighteen participants received extensive iPad training, learning a broad range of practical applications. The iPad group was compared with 2 separate controls: a Placebo group that engaged in passive tasks requiring little new learning; and a Social group that had regular social interaction, but no active skill acquisition. All participants completed the same cognitive battery pre- and post-engagement. Compared with both controls, the iPad group showed greater improvements in episodic memory and processing speed but did not differ in mental control or visuospatial processing. iPad training improved cognition relative to engaging in social or nonchallenging activities. Mastering relevant technological devices has the added advantage of providing older adults with technological skills useful in facilitating everyday activities (e.g., banking). This work informs the selection of targeted activities for future interventions and community programs. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America.

  3. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    Snyder, Abigail C.; Jiao, Yu

    2010-01-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10⁶-10¹² data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to integrate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
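The nesting of one-dimensional rules into a four-dimensional integral can be sketched with a tensor-product Gauss-Legendre rule. The Gaussian integrand below is a hypothetical stand-in with a closed-form answer, not the SNS intensity model:

```python
import numpy as np

def gauss_legendre(n, a, b):
    """1-D Gauss-Legendre nodes/weights mapped from [-1, 1] to [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

# Separable Gaussian stand-in: its 4-D integral over R^4 is pi^2,
# and the tails beyond +/-4 are negligible.
def f(x, y, z, t):
    return np.exp(-(x**2 + y**2 + z**2 + t**2))

n = 32
x, wx = gauss_legendre(n, -4.0, 4.0)

# Tensor product: apply the same 1-D rule once per dimension
X, Y, Z, T = np.meshgrid(x, x, x, x, indexing="ij")
W = wx[:, None, None, None] * wx[None, :, None, None] \
  * wx[None, None, :, None] * wx[None, None, None, :]
approx = np.sum(W * f(X, Y, Z, T))

exact = np.pi ** 2
```

The cost grows as n⁴, which is exactly why the report considers quasi-Monte Carlo methods and parallelization for less well-behaved integrands.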

  4. Computational modeling of heterogeneity and function of CD4+ T cells

    Adria eCarbo

    2014-07-01

    The immune system is composed of many different cell types and hundreds of intersecting molecular pathways and signals. This large biological complexity requires coordination between distinct pro-inflammatory and regulatory cell subsets to respond to infection while maintaining tissue homeostasis. CD4+ T cells play a central role in orchestrating immune responses and in maintaining a balance between pro- and anti- inflammatory responses. This tight balance between regulatory and effector reactions depends on the ability of CD4+ T cells to modulate distinct pathways within large molecular networks, since dysregulated CD4+ T cell responses may result in chronic inflammatory and autoimmune diseases. The CD4+ T cell differentiation process comprises an intricate interplay between cytokines, their receptors, adaptor molecules, signaling cascades and transcription factors that help delineate cell fate and function. Computational modeling can help to describe, simulate, analyze, and predict some of the behaviors in this complicated differentiation network. This review provides a comprehensive overview of existing computational immunology methods as well as novel strategies used to model immune responses with a particular focus on CD4+ T cell differentiation.

  5. Computer based training for NPP personnel (interactive communication systems and functional trainers)

    Martin, H.D.

    1987-01-01

    KWU as a manufacturer of thermal and nuclear power plants has extensive customer training obligations within its power plant contracts. In this respect KWU has gained large experience in training of personnel, in the production of training material including video tapes and in the design of simulators. KWU developed interactive communication systems (ICS) for training and retraining purposes with a personal computer operating a video disc player on which video instruction is stored. The training program is edited with the help of a self-developed editing system which enables the author to easily enter his instructions into the computer. ICS enables the plant management to better monitor the performance of its personnel through computerized training results and helps to save training manpower. German NPPs differ very much from other designs with respect to a more complex and integrated reactor control system and an additional reactor limitation system. Simulators for such plants therefore have also to simulate these systems. KWU developed a Functional Trainer (FT) which is a replica of the primary system, the auxiliary systems linked to it and the associated control, limitation and protection systems including the influences of the turbine operation and control

  6. Computational screening of functionalized zinc porphyrins for dye sensitized solar cells

    Ørnsø, Kristian Baruël; García Lastra, Juan Maria; Thygesen, Kristian Sommer

    2013-01-01

    An efficient dye sensitized solar cell (DSSC) is one possible solution to meet the world's rapidly increasing energy demands and associated climate challenges. This requires inexpensive and stable dyes with well-positioned frontier energy levels for maximal solar absorption, efficient charge separation, and high output voltage. Here we demonstrate an extensive computational screening of zinc porphyrins functionalized with electron donating side groups and electron accepting anchoring groups. The trends in frontier energy levels versus side groups are analyzed and a no-loss DSSC level alignment quality is estimated. Out of the initial 1029 molecules, we find around 50 candidates with level alignment qualities within 5% of the optimal limit. We show that the level alignment of five zinc porphyrin dyes which were recently used in DSSCs with high efficiencies can be further improved by simple side…

  7. Functional high-resolution computed tomography of pulmonary vascular and airway reactions

    Herold, C.J.; Johns Hopkins Medical Institutions, Baltimore, MD; Brown, R.H.; Johns Hopkins Medical Institutions, Baltimore, MD; Johns Hopkins Medical Institutions, Baltimore, MD; Wetzel, R.C.; Herold, S.M.; Zeerhouni, E.A.

    1993-01-01

    We describe the use of high-resolution computed tomography (HRCT) for assessment of the function of pulmonary vessels and airways. With its excellent spatial resolution, HRCT is able to demonstrate pulmonary structures as small as 300 μm and can be used to monitor changes following various stimuli. HRCT also provides information about structures smaller than 300 μm through measurement of parenchymal background density. To date, sequential, spiral and ultrafast HRCT techniques have been used in a variety of challenges to gather information about the anatomical correlates of traditional physiological measurements, thus making anatomical-physiological correlation possible. HRCT of bronchial reactivity can demonstrate the location and time course of aerosol-induced bronchoconstriction and may show changes not apparent on spirometry. HRCT of the pulmonary vascular system visualizes adaptations of vessels during hypoxia and intravascular volume loading and elucidates cardiorespiratory interactions. Experimental studies provide a basis for potential clinical applications of this method. (orig.)

  8. Evaluation of the modulation transfer function for computer tomography by using American Association Physics Medicine Phantom

    Kim, Ki Won [Dept. of Radiology, Kyung Hee University Hospital at Gang-dong, Seoul (Korea, Republic of); Choi, Kwan Woo [Dept. of Radiology, Asan Medical Center, Seoul (Korea, Republic of); Jeong, Hoi Woun [Dept. of Radiological Technology, Baekseok Culture University, Cheonan (Korea, Republic of); Jang, Seo Goo [Dept. of Medical Science, Soonchunhyang University, Asan (Korea, Republic of); Kwon, Kyung Tae [Dept. of Radiological Technology, Dongnam Health University, Suwon (Korea, Republic of); Son, Soon Yong [Dept. of Radiological Technology, Wonkwang Health Science University, Iksan (Korea, Republic of); Son, Jin Hyun; Min, Jung Whan [Dept. of Radiological Technology, Shingu University, Sungnam (Korea, Republic of)

    2016-06-15

    In clinical computed tomography (CT), regular quality assurance (QA) is required. This study evaluates the MTF, which characterizes spatial resolution, using the AAPM phantom in CT examinations. The dual-source Somatom Definition Flash (Siemens Healthcare, Forchheim, Germany), the Brilliance 64 (Philips Medical Systems, Netherlands) and the Aquilion 64 (Toshiba Medical Systems, Japan) were used in this study. The quantitative evaluation was performed using ImageJ (Wayne Rasband, National Institutes of Health, USA) and the chart method for measuring the modulation transfer function (MTF). In the MTF evaluation, the spatial frequencies corresponding to 50% MTF for the three CT systems were 0.58, 0.28, and 0.59 mm⁻¹, respectively, and those corresponding to 10% MTF were 1.63, 0.89, and 1.21 mm⁻¹, respectively. This study evaluated the spatial resolution characteristics of the chart-based MTF, suggesting a quantitative evaluation method based on these data.
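The MTF measurement itself is easy to sketch: Fourier-transform a line-spread function (LSF), normalise at zero frequency, and read off the frequencies where the curve drops below 50% and 10%. The Gaussian LSF below is synthetic, not phantom data:

```python
import numpy as np

# Hypothetical line-spread function: a Gaussian of width sigma (mm)
dx = 0.05                        # sampling pitch, mm
x = np.arange(-10.0, 10.0, dx)
sigma = 0.4
lsf = np.exp(-x**2 / (2 * sigma**2))

# MTF = magnitude of the Fourier transform of the LSF, normalised at f = 0
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(len(x), d=dx)   # cycles/mm

f50 = freqs[np.argmax(mtf < 0.5)]       # first frequency where MTF < 50%
f10 = freqs[np.argmax(mtf < 0.1)]
```

`f50` and `f10` come out in cycles/mm, the same mm⁻¹ units as the 50% and 10% MTF values quoted above; a sharper LSF (smaller sigma) pushes both frequencies higher.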

  9. Computer Modelling of Functional Aspects of Noise in Endogenously Oscillating Neurons

    Huber, M. T.; Dewald, M.; Voigt, K.; Braun, H. A.; Moss, F.

    1998-03-01

    Membrane potential oscillations are a widespread feature of neuronal activity. When such oscillations operate close to the spike-triggering threshold, noise can become an essential property of spike generation. Accordingly, we developed a minimal Hodgkin-Huxley-type computer model which includes a noise term. This model accounts for experimental data from quite different cells ranging from mammalian cortical neurons to fish electroreceptors. With slight modifications of the parameters, the model's behavior can be tuned to bursting activity, which additionally allows it to mimic temperature encoding in peripheral cold receptors, including transitions to apparently chaotic dynamics as indicated by methods for the detection of unstable periodic orbits. Under all conditions, cooperative effects between noise and nonlinear dynamics can be shown which, beyond stochastic resonance, might be of functional significance for stimulus encoding and neuromodulation.
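A minimal oscillating neuron model with a noise term, in the spirit described above, can be caricatured with a FitzHugh-Nagumo oscillator integrated by Euler-Maruyama; all parameter values here are illustrative assumptions, not the paper's:

```python
import numpy as np

def fhn_em(T=2000.0, dt=0.05, I=0.42, eps=0.08, a=0.7, b=0.8, D=0.04, seed=2):
    """Euler-Maruyama integration of a FitzHugh-Nagumo oscillator with an
    additive noise term on the voltage-like variable v (a stand-in for the
    minimal Hodgkin-Huxley-type model of the abstract)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v = np.empty(n)
    w = np.empty(n)
    v[0], w[0] = -1.0, -0.5
    sdt = np.sqrt(2.0 * D * dt)          # noise increment scale
    for i in range(n - 1):
        dv = v[i] - v[i]**3 / 3.0 - w[i] + I
        dw = eps * (v[i] + a - b * w[i])
        v[i + 1] = v[i] + dt * dv + sdt * rng.standard_normal()
        w[i + 1] = w[i] + dt * dw
    return v, w

v, w = fhn_em()
spikes = np.sum((v[1:] > 1.0) & (v[:-1] <= 1.0))   # upward threshold crossings
```

Lowering the drive `I` toward the excitable regime makes firing noise-dependent, the near-threshold situation where noise becomes essential to spike generation.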

  10. Development of a computer-adaptive physical function instrument for Social Security Administration disability determination.

    Ni, Pengsheng; McDonough, Christine M; Jette, Alan M; Bogusz, Kara; Marfeo, Elizabeth E; Rasch, Elizabeth K; Brandt, Diane E; Meterko, Mark; Haley, Stephen M; Chan, Leighton

    2013-09-01

    To develop and test an instrument to assess physical function for Social Security Administration (SSA) disability programs, the SSA-Physical Function (SSA-PF) instrument. Item response theory (IRT) analyses were used to (1) create a calibrated item bank for each of the factors identified in prior factor analyses, (2) assess the fit of the items within each scale, (3) develop separate computer-adaptive testing (CAT) instruments for each scale, and (4) conduct initial psychometric testing. Design: cross-sectional data collection; IRT analyses; CAT simulation. Setting: telephone and Internet survey. Participants: two samples, SSA claimants (n=1017) and adults from the U.S. general population (n=999). Interventions: none. Main outcome measures: model fit statistics, correlation, and reliability coefficients. IRT analyses resulted in 5 unidimensional SSA-PF scales: Changing & Maintaining Body Position, Whole Body Mobility, Upper Body Function, Upper Extremity Fine Motor, and Wheelchair Mobility, for a total of 102 items. High CAT accuracy was demonstrated by strong correlations between simulated CAT scores and those from the full item banks. On comparing the simulated CATs with the full item banks, very little loss of reliability or precision was noted, except at the lower and upper ranges of each scale. No difference in response patterns by age or sex was noted. The distributions of claimant scores were shifted to the lower end of each scale compared with those of a sample of U.S. adults. The SSA-PF instrument contributes important new methodology for measuring the physical function of adults applying to the SSA disability programs. Initial evaluation revealed that the SSA-PF instrument achieved considerable breadth of coverage in each content domain and demonstrated noteworthy psychometric properties. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  11. Quantitative Analysis of Lateral Pinch Force in Quadriplegic Patients Using Functional Neuromuscular Stimulation with Computer Simulation

    Ali Esteki

    2004-10-01

    Objective: In some applications of functional neuromuscular stimulation (FNS), the distal joint of the thumb (IP) in quadriplegic patients is surgically fused at zero degrees and the FPL is stimulated. This prevents hyperextension and extreme flexion of the IP joint during lateral pinch. However, IP joint fusion removes one degree of freedom from the thumb and may reduce the grip force. An alternative approach, preferably without surgical alterations, using sufficient electrical stimulation of selected muscles was investigated. A 3D model of prehensile lateral pinch was developed. Computer simulation of the model was used to find an approach providing the appropriate posture and adequate lateral grip force for quadriplegic patients using FNS. Materials & Methods: The model consists of a multi-rigid-body system connected by one- or two-degree-of-freedom joints acted upon by passive resistive moments, active muscle moments and moments of external contact forces. Passive resistive moments were measured at each joint, active muscle moments were computed using a simple muscle model, and moments of external force were computed based on a force-displacement relationship for the finger pads. In addition to the current strategy, two possible alternatives were studied: increasing the fused joint angle and activating multiple muscles without joint fusion. The normal component of the grip force and its angle with respect to the horizontal plane were computed and compared for the studied cases. Results: Results showed that, using the current FNS strategy, a convenient posture and a grip force of 10.1 N are achieved, which is comparable to what has been measured experimentally and reported in the literature. Increasing the joint fusion angle from 0 to 15 and 30 degrees in parallel with the activation of FPL increased the grip force from 10.1 to 10.7 and 11.2 N, respectively, but resulted in an inconvenient posture. 
Among all different combinations of the muscles

  12. Computing many-body wave functions with guaranteed precision: the first-order Møller-Plesset wave function for the ground state of helium atom.

    Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F

    2012-09-14

    We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward.
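
    The low-rank compression step described above can be illustrated on a toy grid function. This sketch (plain NumPy on a uniform grid, not the paper's adaptive multiresolution representation) tabulates a smooth two-variable function, truncates its singular value decomposition at a tolerance, and compares storage against the full-rank form:

    ```python
    import numpy as np

    # Toy illustration of SVD-based low-rank compression: an analytic kernel
    # has rapidly decaying singular values, so few terms reproduce it.
    x = np.linspace(0.0, 2.0, 200)
    f = np.exp(-np.outer(x, x))                  # smooth "two-electron-like" function

    u, s, vt = np.linalg.svd(f, full_matrices=False)
    tol = 1e-10 * s[0]
    rank = int(np.sum(s > tol))                  # numerical rank at this tolerance
    f_lowrank = (u[:, :rank] * s[:rank]) @ vt[:rank]

    storage_full = f.size                        # 200 x 200 values
    storage_lowrank = rank * (f.shape[0] + f.shape[1])
    max_err = np.max(np.abs(f - f_lowrank))
    print(rank, storage_lowrank, storage_full, max_err)
    ```

    The same idea, applied recursively to the spectral-element coefficients, is what keeps the six-dimensional wave function tractable in the approach above.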

  13. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office, which manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as those performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. 
The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously

  14. High-resolution computed tomography in silicosis: correlation with chest radiography and pulmonary function tests

    Lopes, Agnaldo Jose [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil). Pedro Ernesto Univ. Hospital. Dept. of Respiratory Function]. E-mail: phel.lop@uol.com.br; Mogami, Roberto; Capone, Domenico; Jansen, Jose Manoel [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil). School of Medical Sciences; Tessarollo, Bernardo [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil). Dept. of Radiology and Diagnostic Image; Melo, Pedro Lopes de [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil). Inst. of Biology

    2008-05-15

    Objective: To correlate tomographic findings with pulmonary function findings, as well as to compare chest X-ray findings with high-resolution computed tomography (HRCT) findings, in patients with silicosis. Methods: A cross-sectional study was conducted in 44 non-smoking patients without a history of tuberculosis. Chest X-ray findings were classified according to the International Labour Organization recommendations. Using a semiquantitative system, the following HRCT findings were measured: the full extent of pulmonary involvement; parenchymal opacities; and emphysema. Spirometry and forced oscillation were performed. Pulmonary volumes were evaluated using the helium dilution method, and diffusing capacity of the lung for carbon monoxide was assessed. Results: Of the 44 patients studied, 41 were male. The mean age was 48.4 years. There were 4 patients who were classified as category 0 based on X-ray findings and as category 1 based on HRCT findings. Using HRCT scans, we identified progressive massive fibrosis in 33 patients, compared with only 23 patients when X-rays were used. Opacity score was found to correlate most closely with airflow, DLCO and compliance. Emphysema score correlated inversely with volume, DLCO and airflow. In this sample of patients presenting a predominance of large opacities (75% of the individuals), the deterioration of pulmonary function was associated with the extent of structural changes. Conclusions: In the early detection of silicosis and the identification of progressive massive fibrosis, HRCT scans are superior to X-rays. (author)

  16. Computational-based structural, functional and phylogenetic analysis of Enterobacter phytases.

    Pramanik, Krishnendu; Kundu, Shreyasi; Banerjee, Sandipan; Ghosh, Pallab Kumar; Maiti, Tushar Kanti

    2018-06-01

    Myo-inositol hexakisphosphate phosphohydrolases (i.e., phytases) are known to be very important enzymes responsible for the solubilization of insoluble phosphates. In the present study, Enterobacter phytases were characterized by phylogenetic, structural and functional parameters using standard bio-computational tools. Results showed that the majority of Enterobacter phytases are acidic in nature, as most of the isoelectric points were under 7.0. The aliphatic indices predicted for the selected proteins were below 40, indicating their thermostable nature. The average molecular weight of the proteins was 48 kDa. The low GRAVY values of these proteins implied that they interact well with water. Secondary structure prediction revealed that alpha-helical content was the highest among the structural forms, such as sheets and coils. Moreover, the predicted 3D structure of Enterobacter phytases revealed that the proteins consist of four monomeric polypeptide chains, i.e., a tetrameric protein. The predicted tertiary model of E. aerogenes (A0A0M3HCJ2) was deposited in the Protein Model Database (Acc. No.: PM0080561) for further use after a thorough quality check with the QMEAN and SAVES servers. Functional analysis supported their classification as histidine acid phosphatases. In addition, multiple sequence alignment revealed that "DG-DP-LG" were the most highly conserved residues within the Enterobacter phytases. Thus, the present study will be useful in selecting a suitable phytase-producing microbe for use in the animal food industry as a food additive.
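
    The GRAVY index reported above is simply the mean Kyte-Doolittle hydropathy over all residues; negative values indicate better interaction with water. A minimal sketch (the function name is illustrative, not one of the study's tools; the scale values are the standard Kyte-Doolittle ones):

    ```python
    # Standard Kyte-Doolittle hydropathy values per amino acid (one-letter codes).
    KYTE_DOOLITTLE = {
        "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
        "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
        "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
        "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
    }

    def gravy(sequence: str) -> float:
        """Grand average of hydropathy: mean hydropathy value per residue."""
        return sum(KYTE_DOOLITTLE[aa] for aa in sequence.upper()) / len(sequence)

    print(gravy("DDKKEE"))   # charged, hydrophilic fragment: strongly negative
    print(gravy("ILVF"))     # hydrophobic fragment: strongly positive
    ```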

  17. Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study

    Javier Eduardo Diaz Zamboni

    2017-01-01

    The precise knowledge of the point spread function is central to any imaging system characterization. In fluorescence microscopy, point spread function (PSF) determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation in the microscope to experimental data. In order to contribute to this subject, a comparative study of three parameter estimation methods is reported, namely: I-divergence minimization (MIDIV), maximum likelihood (ML) and non-linear least squares (LSQR). They were applied to the estimation of the point source position on the optical axis, using a physical model. The methods' performance was evaluated under different conditions and noise levels using synthetic images, considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that the axial position estimation requires a high SNR to achieve an acceptable success level, and a higher one still to approach the estimation error lower bound. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods achieved the error lower bound, but only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but no difference was found between noise sources for any of the methods studied.
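
    The least-squares (LSQR) criterion compared above can be sketched in one dimension: assume a Gaussian axial intensity profile for the point source and pick the center that minimizes the sum of squared residuals. The Gaussian model, its parameters, and the grid search are illustrative assumptions; the study's physical model of image formation is far richer.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed toy model: axial intensity profile is Gaussian in z with center z0.
    def model(z, z0, sigma=0.8):
        return np.exp(-0.5 * ((z - z0) / sigma) ** 2)

    z = np.linspace(-4.0, 4.0, 161)
    true_z0 = 0.35
    data = model(z, true_z0) + 0.02 * rng.standard_normal(z.size)  # high-SNR profile

    # Least-squares estimate of the axial position via a fine grid search.
    candidates = np.linspace(-2.0, 2.0, 4001)
    sse = [np.sum((data - model(z, c)) ** 2) for c in candidates]
    z0_hat = candidates[int(np.argmin(sse))]
    print(z0_hat)
    ```

    At low SNR the same criterion develops spurious minima, which is one way the success percentage discussed above degrades.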

  18. What are the ideal properties for functional food peptides with antihypertensive effect? A computational peptidology approach.

    Zhou, Peng; Yang, Chao; Ren, Yanrong; Wang, Congcong; Tian, Feifei

    2013-12-01

    Peptides with antihypertensive potency have long been attractive to the medical and food communities. However, serving as food additives rather than therapeutic agents, peptides should also have a good taste. In the present study, we explore the intrinsic relationship between the angiotensin I-converting enzyme (ACE) inhibition and bitterness of short peptides in the framework of computational peptidology, attempting to find the appropriate properties for functional food peptides with satisfactory bioactivities. As might be expected, quantitative structure-activity relationship modeling reveals a significant positive correlation between the ACE inhibition and bitterness of dipeptides, but this correlation is quite modest for tripeptides and, particularly, tetrapeptides. Moreover, quantum mechanics/molecular mechanics analysis of the structural basis and energetic profile of ACE-peptide complexes reveals that peptides up to 4 amino acids long are sufficient for efficient binding to ACE; additional residues do not bring a substantial enhancement in ACE-binding affinity and, thus, antihypertensive capability. Taken together, these findings suggest that tripeptides and tetrapeptides could be considered ideal candidates in the search for potential functional food additives with both high antihypertensive activity and low bitterness. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Correlation of pulmonary function and usual interstitial pneumonia computed tomography patterns in idiopathic pulmonary fibrosis.

    Arcadu, Antonella; Byrne, Suzanne C; Pirina, Pietro; Hartman, Thomas E; Bartholmai, Brian J; Moua, Teng

    2017-08-01

    Little is known about presenting 'inconsistent' or 'possible' usual interstitial pneumonia (UIP) computed tomography (CT) patterns advancing to 'consistent' UIP as disease progresses in idiopathic pulmonary fibrosis (IPF). We hypothesized that if 'consistent' UIP represented more advanced disease, such a pattern on presentation should also correlate with more severe pulmonary function test (PFT) abnormalities. Consecutive IPF patients (2005-2013) diagnosed by international criteria with baseline PFT and CT were included. Presenting CTs were assessed by three expert radiologists for consensus UIP pattern ('consistent', 'possible', and 'inconsistent'). Approximation of individual and combined interstitial abnormalities was also performed with correlation of interstitial abnormalities and UIP CT pattern made with PFT findings and survival. Three-hundred and fifty patients (70% male) were included with a mean age of 68.3 years. Mean percent predicted forced vital capacity (FVC%) and diffusion capacity (DLCO%) was 64% and 45.5% respectively. Older age and male gender correlated more with 'consistent' UIP CT pattern. FVC% was not associated with any UIP pattern but did correlate with total volume of radiologist assessed interstitial abnormalities. DLCO% was lower in those with 'consistent' UIP pattern. A 'consistent' UIP CT pattern was also not independently predictive of survival after correction for age, gender, FVC%, and DLCO%. PFT findings appear to correlate with extent of radiologic disease but not specific morphologic patterns. Whether such UIP patterns represent different stages of disease severity or radiologic progression is not supported by coinciding pulmonary function decline. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Development of a mobile gammacamera computer system for non invasive ventricular function determination

    Knopp, R.; Reske, S.N.; Winkler, C.

    1983-03-01

    Dynamic ventricular volume determination by gamma camera computer scintigraphy is now generally accepted as a reliable, non-invasive method of great value in clinical cardiology. However, because the required instrumentation is generally unwieldy and not mobile, sophisticated cardiac function studies could not previously be performed in many intensive care units. To overcome this problem, we developed a compact scintigraphic system consisting of a mobile gamma camera (Siemens Mobicon) with a built-in minicomputer (Siemens R 20: 16 bit, 128 kB). It permits a combined investigation of ventricular volume and pressure: the volume curve is acquired by sequential scintigraphy, while the pressure is simultaneously measured manometrically by means of a heart catheter. From this combined investigation a pressure-volume loop is plotted, the enclosed area of which represents the cardiac work performed. Additionally, functional parameters such as compliance (dV/dp) or stiffness (dp/dV) can be derived from the loop diagram. Besides these procedures, the mobile system can also be used for the detection of acute infarctions as well as for myocardial scintigraphy in general. (orig.) [de]

  1. Flash X-Ray Apparatus With Spectrum Control Functions For Medical Use And Fuji Computed Radiography

    Isobe, H.; Sato, E.; Hayasi, Y.; Suzuki, M.; Arima, H.; Hoshino, F.

    1985-02-01

    Flash radiographic bio-medical studies at sub-microsecond intervals were performed using both a new type of flash X-ray (FX) apparatus with spectrum control functions and Fuji Computed Radiography (FCR). This single flasher tends to have a comparatively long exposure time, and the electric pulse width of the FX waveform is about 0.3 μs. The maximum FX dose is about 50 mR at 1 m per pulse, and the effective focal spot varies according to condenser charging voltage, A-C distance, etc., ranging from 1.0 to 3.0 mm in diameter; in the low-dose-rate region it can be reduced to less than 1.0 mm in diameter. The FX dose is determined by the condenser charging voltage and the A-C distance, while the FX spectrum is determined by the average voltage of the FX tube and the filters. Various clear FX images were obtained by controlling the spectrum and dose. FCR is a new storage medium for medical radiography developed by the Fuji Photo Film Co., Ltd., and this apparatus has various image-forming functions: low-dose radiography, film density control, image contrast control, subtraction management and others. We have used this new apparatus in conjunction with our FX radiography and have obtained some new and interesting biomedical radiograms: the edge enhancement image, the instantaneous enlarged image, and the single-exposure energy subtraction image using the FX spectrum distribution.

  2. Use of time space Green's functions in the computation of transient eddy current fields

    Davey, K.; Turner, L.

    1988-01-01

    The utility of integral equations for solving eddy current problems has been borne out by numerous computations in the past few years, principally in sinusoidal steady-state problems. This paper examines the applicability of the integral approaches in both time and space for the more generic transient problem. The basic formulation for the time space Green's function approach is laid out. A technique employing Gauss-Laguerre integration is used to realize the temporal solution, while Gauss-Legendre integration is used to resolve the spatial field character. The technique is then applied to the fusion electromagnetic induction experiment (FELIX) cylinder experiments in both two and three dimensions. It is found that quite accurate solutions can be obtained using rather coarse time steps and very few unknowns; the three-dimensional field solution worked out in this context used only four unknowns. The solution appears to be somewhat sensitive to the choice of time step, a consequence of a numerical instability embedded in the Green's function near the origin.
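
    The two quadrature families named above can be sketched on toy integrands (the actual Green's function kernels are of course more involved). Gauss-Legendre handles finite spatial intervals; Gauss-Laguerre handles semi-infinite temporal integrals with an exp(-t) weight. An n-point rule of either family is exact for polynomial integrands of degree 2n - 1.

    ```python
    import numpy as np

    # Gauss-Legendre on [a, b]: map nodes from [-1, 1] and scale the weights.
    xl, wl = np.polynomial.legendre.leggauss(8)
    a, b = 0.0, np.pi
    x = 0.5 * (b - a) * xl + 0.5 * (b + a)
    spatial = 0.5 * (b - a) * np.sum(wl * np.sin(x))   # integral of sin on [0, pi] = 2

    # Gauss-Laguerre on [0, inf) with weight exp(-t): integrate t * exp(-t) = 1.
    t, wt = np.polynomial.laguerre.laggauss(8)
    temporal = np.sum(wt * t)

    print(spatial, temporal)
    ```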

  3. Crown-ether functionalized carbon nanotubes for purification of lithium compounds: computational and experimental study

    Singha Deb, A.K.; Arora, S.K.; Joshi, J.M.; Ali, Sk. M.; Shenoy, K.T.; Goyal, Aiana

    2015-01-01

    Lithium compounds find several applications in nuclear science and technology: lithium fluoride/hydroxide/alloys are used as dosimetric materials in luminescence devices, in the molten-salt breeder reactor, in the International Thermonuclear Experimental Reactor, in single-crystal-based neutron detectors, etc. The lithium compounds should be in a proper state of purity; in particular, they should not contain other alkali metal cations, which can degrade performance. Hence, there is a need to develop a process for purifying the lithium salt to achieve the desired quality, and an attempt has been made here to develop advanced nanomaterials for this purpose. In this work, benzo-15-crown-5 (B15C5) functionalized carbon nanotubes (CNTs), combining the good adsorption properties of CNTs with the alkali metal encapsulation behaviour of B15C5, were shown to bind preferentially to sodium and potassium ions over lithium ions. DFT-based computations have shown that the free energy of complexation of Na+ and K+ by B15C5-CNT is higher than that of Li+, implying that B15C5-CNT selectively binds Na+ and K+. Batch solid-liquid extraction experiments revealed the same trend as the calculations. Crown-ether functionalized CNTs thus have potential for use in purifying lithium compounds. (author)

  4. Brain-Computer Interface Controlled Functional Electrical Stimulation System for Ankle Movement

    King Christine E

    2011-08-01

    Background: Many neurological conditions, such as stroke, spinal cord injury, and traumatic brain injury, can cause chronic gait function impairment due to foot-drop. Current physiotherapy techniques provide only a limited degree of motor function recovery in these individuals, and therefore novel therapies are needed. Brain-computer interface (BCI) is a relatively novel technology with a potential to restore, substitute, or augment lost motor behaviors in patients with neurological injuries. Here, we describe the first successful integration of a noninvasive electroencephalogram (EEG)-based BCI with a noninvasive functional electrical stimulation (FES) system that enables the direct brain control of foot dorsiflexion in able-bodied individuals. Methods: A noninvasive EEG-based BCI system was integrated with a noninvasive FES system for foot dorsiflexion. Subjects underwent computer-cued epochs of repetitive foot dorsiflexion and idling while their EEG signals were recorded and stored for offline analysis. The analysis generated a prediction model that allowed EEG data to be analyzed and classified in real time during online BCI operation. The real-time online performance of the integrated BCI-FES system was tested in a group of five able-bodied subjects who used repetitive foot dorsiflexion to elicit BCI-FES mediated dorsiflexion of the contralateral foot. Results: Five able-bodied subjects performed 10 alternations of idling and repetitive foot dorsiflexion to trigger BCI-FES mediated dorsiflexion of the contralateral foot. The epochs of BCI-FES mediated foot dorsiflexion were highly correlated with the epochs of voluntary foot dorsiflexion (correlation coefficients ranged between 0.59 and 0.77) with latencies ranging from 1.4 sec to 3.1 sec. In addition, all subjects achieved a 100% BCI-FES response (no omissions), and one subject had a single false alarm. 
Conclusions: This study suggests that the integration of a noninvasive BCI with a lower-extremity FES system is feasible.

  5. Brain-computer interface controlled functional electrical stimulation system for ankle movement.

    Do, An H; Wang, Po T; King, Christine E; Abiri, Ahmad; Nenadic, Zoran

    2011-08-26

    Many neurological conditions, such as stroke, spinal cord injury, and traumatic brain injury, can cause chronic gait function impairment due to foot-drop. Current physiotherapy techniques provide only a limited degree of motor function recovery in these individuals, and therefore novel therapies are needed. Brain-computer interface (BCI) is a relatively novel technology with a potential to restore, substitute, or augment lost motor behaviors in patients with neurological injuries. Here, we describe the first successful integration of a noninvasive electroencephalogram (EEG)-based BCI with a noninvasive functional electrical stimulation (FES) system that enables the direct brain control of foot dorsiflexion in able-bodied individuals. A noninvasive EEG-based BCI system was integrated with a noninvasive FES system for foot dorsiflexion. Subjects underwent computer-cued epochs of repetitive foot dorsiflexion and idling while their EEG signals were recorded and stored for offline analysis. The analysis generated a prediction model that allowed EEG data to be analyzed and classified in real time during online BCI operation. The real-time online performance of the integrated BCI-FES system was tested in a group of five able-bodied subjects who used repetitive foot dorsiflexion to elicit BCI-FES mediated dorsiflexion of the contralateral foot. Five able-bodied subjects performed 10 alternations of idling and repetitive foot dorsiflexion to trigger BCI-FES mediated dorsiflexion of the contralateral foot. The epochs of BCI-FES mediated foot dorsiflexion were highly correlated with the epochs of voluntary foot dorsiflexion (correlation coefficient ranged between 0.59 and 0.77) with latencies ranging from 1.4 sec to 3.1 sec. In addition, all subjects achieved a 100% BCI-FES response (no omissions), and one subject had a single false alarm. This study suggests that the integration of a noninvasive BCI with a lower-extremity FES system is feasible. 
With additional modifications

  6. Functional magnetic resonance maps obtained by personal computer; Mapas de resonancia magnetica funcional obtenidos con PC

    Gomez, F. J.; Manjon, J. V.; Robles, M. [Universidad Politecnica de Valencia (Spain); Marti-Bonmati, L.; Dosda, R. [Clinica Quiron, Valencia (Spain); Molla, E. [Universidad de Valencia (Spain)

    2001-07-01

    Functional magnetic resonance (fMR) is of special relevance in the analysis of certain types of brain activation. The present report describes the development of a simple software program for personal computers (PCs) that analyzes these images and provides functional activation maps. Activation maps are based on the temporal differences in oxyhemoglobin in tomographic images. To detect these differences, intensities registered repeatedly during brain control and activation states are compared. The experiments were performed with a 1.5-Tesla MR unit. To verify the reliability of the program, fMR studies were carried out in 4 healthy individuals (12 contiguous slices, 80 images per slice every 3.1 seconds, for a total of 960 images). All the images were transferred to a PC and processed pixel by pixel within each sequence to obtain an intensity/time curve. The statistical study of the results (Student's test and cross-correlation analysis) made it possible to establish the activation of each pixel. The images were prepared using spatial filtering, temporal filtering, baseline correction, normalization and segmentation of the parenchyma. The postprocessing of the results involved the elimination of single pixels, superposition of an anatomical image of greater spatial resolution, and anti-aliasing. The application (Xfun 1.0, Valencia, Spain) was developed in Microsoft Visual C++ 5.0 Developer Studio for Windows NT Workstation. As a representative example, the program took 8.2 seconds to calculate and present the results of an entire study (12 functional maps). In the motor and visual activation experiments, activation was observed in regions proximal to the central sulcus of the hemisphere contralateral to the moving hand and in the occipital cortex. While programs that calculate activation maps are available, the development of software for PCs running Microsoft Windows ensures several key features for its use on a daily basis: it is
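
    The per-pixel analysis described above can be sketched with synthetic data: correlate each pixel's intensity/time curve with a rest/activation block reference and threshold the correlation. The dimensions, amplitudes, and simple fixed threshold here are toy assumptions, standing in for the statistical tests used by the program.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy acquisition: 80 time points over a 16 x 16 slice, alternating
    # 10-frame rest / 10-frame activation blocks.
    n_t, h, w = 80, 16, 16
    block = np.tile(np.r_[np.zeros(10), np.ones(10)], n_t // 20)

    series = 0.5 * rng.standard_normal((n_t, h, w))       # baseline noise
    series[:, 4:8, 4:8] += 1.5 * block[:, None, None]     # "activated" region

    # Pearson correlation of each pixel's time curve with the reference.
    ref = (block - block.mean()) / block.std()
    pix = series - series.mean(axis=0)
    r = (ref[:, None, None] * pix).sum(axis=0) / (n_t * pix.std(axis=0))

    active = r > 0.5                                      # simple threshold map
    print(active[5, 5], active[0, 0])
    ```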

  7. Antisecretory therapy with no improvement in functional level in Ménière's disease

    Ingvardsen, Charlotte J; Klokker, Mads

    2016-01-01

    -HNS) functional scale, and the frequency of attacks was registered. Results: Thirty-two patients completed the study. No carryover effect was found. No significant effect of SPC was found on either functional level or frequency of attacks. Seventeen patients showed improvement in functional level when treated with SPC, and 22 when treated with placebo. Three patients reported more frequent attacks when treated with SPC, and three when treated with placebo. A non-parametric comparison and a parametric analysis supported the findings.

  8. The Effect of Neurocognitive Function on Math Computation in Pediatric ADHD: Moderating Influences of Anxious Perfectionism and Gender.

    Sturm, Alexandra; Rozenman, Michelle; Piacentini, John C; McGough, James J; Loo, Sandra K; McCracken, James T

    2018-03-20

    Predictors of math achievement in attention-deficit/hyperactivity disorder (ADHD) are not well-known. To address this gap in the literature, we examined individual differences in neurocognitive functioning domains on math computation in a cross-sectional sample of youth with ADHD. Gender and anxiety symptoms were explored as potential moderators. The sample consisted of 281 youth (aged 8-15 years) diagnosed with ADHD. Neurocognitive tasks assessed auditory-verbal working memory, visuospatial working memory, and processing speed. Auditory-verbal working memory speed significantly predicted math computation. A three-way interaction revealed that at low levels of anxious perfectionism, slower processing speed predicted poorer math computation for boys compared to girls. These findings indicate the unique predictive value of auditory-verbal working memory and processing speed for math computation, and their differential moderation. They also provide preliminary support that gender and anxious perfectionism may influence the relationship between neurocognitive functioning and academic achievement.
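
    A three-way interaction of this kind is typically tested by adding product terms to an ordinary least-squares regression. The sketch below uses simulated data with assumed variable coding and effect sizes (not the study's data or its exact statistical software) to show how the design matrix is built.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated predictors: processing speed, anxious perfectionism (standardized),
    # and gender coded 0 = girls, 1 = boys. All values are illustrative.
    n = 281
    speed = rng.standard_normal(n)
    perfectionism = rng.standard_normal(n)
    gender = rng.integers(0, 2, n).astype(float)

    # Simulated outcome: speed matters for boys low in anxious perfectionism.
    math_score = 0.5 * speed * gender * (perfectionism < 0) + rng.standard_normal(n)

    # Design matrix: main effects, all two-way products, and the three-way product.
    X = np.column_stack([
        np.ones(n), speed, perfectionism, gender,
        speed * perfectionism, speed * gender, perfectionism * gender,
        speed * perfectionism * gender,        # three-way interaction term
    ])
    beta, *_ = np.linalg.lstsq(X, math_score, rcond=None)
    print(beta[-1])   # coefficient on the three-way product
    ```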

  9. Exact fast computation of band depth for large functional datasets: How quickly can one million curves be ranked?

    Sun, Ying

    2012-10-01

Band depth is an important nonparametric measure that generalizes order statistics and makes univariate methods based on order statistics possible for functional data. However, the computational burden of band depth limits its applicability when large functional or image datasets are considered. This paper proposes an exact fast method to speed up the band depth computation when bands are defined by two curves. Remarkable computational gains are demonstrated through simulation studies comparing our proposal with the original computation and one existing approximate method. For example, we report an experiment where our method can rank one million curves, evaluated at fifty time points each, in 12.4 seconds with Matlab.
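The band depth being accelerated here has a simple brute-force definition worth stating in code: with bands defined by two curves, a curve's depth is the fraction of curve pairs whose pointwise envelope contains it everywhere. A naive O(n^2 p) sketch of my own, far slower than the paper's exact fast method:

```python
import numpy as np

def band_depth_naive(curves):
    """Brute-force band depth with bands defined by two curves (J = 2).

    curves: (n, p) array of n curves sampled at p common time points.
    Returns, for each curve, the fraction of the n*(n-1)/2 curve pairs
    whose pointwise min/max envelope contains it at every time point.
    (Some definitions exclude pairs involving the curve itself; this
    sketch counts all pairs.)
    """
    n = curves.shape[0]
    n_pairs = n * (n - 1) // 2
    depths = np.zeros(n)
    for k in range(n):
        inside = 0
        for i in range(n):
            for j in range(i + 1, n):
                lo = np.minimum(curves[i], curves[j])
                hi = np.maximum(curves[i], curves[j])
                if np.all((lo <= curves[k]) & (curves[k] <= hi)):
                    inside += 1
        depths[k] = inside / n_pairs
    return depths
```

For three parallel lines the middle curve attains depth 1, which is the ranking quantity the paper computes exactly for a million curves.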

  10. Computer animations of color markings reveal the function of visual threat signals in Neolamprologus pulcher.

    Balzarini, Valentina; Taborsky, Michael; Villa, Fabienne; Frommen, Joachim G

    2017-02-01

    Visual signals, including changes in coloration and color patterns, are frequently used by animals to convey information. During contests, body coloration and its changes can be used to assess an opponent's state or motivation. Communication of aggressive propensity is particularly important in group-living animals with a stable dominance hierarchy, as the outcome of aggressive interactions determines the social rank of group members. Neolamprologus pulcher is a cooperatively breeding cichlid showing frequent within-group aggression. Both sexes exhibit two vertical black stripes on the operculum that vary naturally in shape and darkness. During frontal threat displays these patterns are actively exposed to the opponent, suggesting a signaling function. To investigate the role of operculum stripes during contests we manipulated their darkness in computer animated pictures of the fish. We recorded the responses in behavior and stripe darkness of test subjects to which these animated pictures were presented. Individuals with initially darker stripes were more aggressive against the animations and showed more operculum threat displays. Operculum stripes of test subjects became darker after exposure to an animation exhibiting a pale operculum than after exposure to a dark operculum animation, highlighting the role of the darkness of this color pattern in opponent assessment. We conclude that (i) the black stripes on the operculum of N. pulcher are a reliable signal of aggression and dominance, (ii) these markings play an important role in opponent assessment, and (iii) 2D computer animations are well suited to elicit biologically meaningful short-term aggressive responses in this widely used model system of social evolution.

  11. Computational engineering of cellulase Cel9A-68 functional motions through mutations in its linker region.

    Costa, M G S; Silva, Y F; Batista, P R

    2018-03-14

Microbial cellulosic degradation by cellulases has become a complementary approach for biofuel production. However, its efficiency is hindered by the recalcitrance of cellulose fibres. In this context, computational protein design methods may offer an efficient way to obtain variants with improved enzymatic activity. Cel9A-68 is a cellulase from Thermobifida fusca that is still active at high temperatures. In a previous work, we described a collective bending motion, which governs the overall cellulase dynamics. This movement promotes the approximation of its CBM and CD structural domains (that are connected by a flexible linker). We have identified two residues (G460 and P461) located at the linker that act as a hinge point. Herein, we applied a new level of protein design, focusing on the modulation of this collective motion to obtain cellulase variants with enhanced functional dynamics. We probed whether specific linker mutations would affect Cel9A-68 dynamics through computational simulations. We assumed that P461G and G460+ (with an extra glycine) constructs would present enhanced interdomain motions, while the G460P mutant would be rigid. From our results, the P461G mutation resulted in a broader exploration of the conformational space, as confirmed by clustering and free energy analyses. The WT enzyme was the most rigid system. However, G460P and G460+ explored distinct conformational states described by opposite directions of low-frequency normal modes; they sampled preferentially closed and open conformations, respectively. Overall, we highlight two significant findings: (i) all mutants explored larger conformational spaces than the WT; (ii) the selection of distinct conformational populations was intimately associated with the mutation considered. Thus, the engineering of Cel9A-68 motions through linker mutations may constitute an efficient way to improve cellulase activity, facilitating the disruption of cellulose fibres.

  12. Response functions for computing absorbed dose to skeletal tissues from neutron irradiation

    Bahadori, Amir A.; Johnson, Perry; Jokisch, Derek W.; Eckerman, Keith F.; Bolch, Wesley E.

    2011-11-01

Spongiosa in the adult human skeleton consists of three tissues: active marrow (AM), inactive marrow (IM) and trabecularized mineral bone (TB). AM is considered to be the target tissue for assessment of both long-term leukemia risk and acute marrow toxicity following radiation exposure. The total shallow marrow (TM50), defined as all tissues lying within the first 50 µm of the bone surfaces, is considered to be the radiation target tissue of relevance for radiogenic bone cancer induction. For irradiation by sources external to the body, kerma to homogeneous spongiosa has been used as a surrogate for absorbed dose to both of these tissues, as direct dose calculations are not possible using computational phantoms with homogenized spongiosa. Recent micro-CT imaging of a 40-year-old male cadaver has allowed for the accurate modeling of the fine microscopic structure of spongiosa in many regions of the adult skeleton (Hough et al 2011 Phys. Med. Biol. 56 2309-46). This microstructure, along with associated masses and tissue compositions, was used to compute specific absorbed fraction (SAF) values for protons originating in axial and appendicular bone sites (Jokisch et al 2011 Phys. Med. Biol. 56 6857-72). These proton SAFs, bone masses, tissue compositions and proton production cross sections were subsequently used to construct neutron dose-response functions (DRFs) for both AM and TM50 targets in each bone of the reference adult male. Kerma conditions were assumed for other resultant charged particles. For comparison, AM, TM50 and spongiosa kerma coefficients were also calculated. At low incident neutron energies, AM kerma coefficients for neutrons correlate well with values of the AM DRF, while total marrow (TM) kerma coefficients correlate well with values of the TM50 DRF. At high incident neutron energies, all kerma coefficients and DRFs tend to converge as charged-particle equilibrium is established across the bone site. In the range of 10 eV to 100 MeV...

  13. A computational approach to discovering the functions of bacterial phytochromes by analysis of homolog distributions

    Lamparter Tilman

    2006-03-01

    bacterial phytochromes in ammonium assimilation and amino acid metabolism. Conclusion It was possible to identify several proteins that might share common functions with bacterial phytochromes by the co-distribution approach. This computational approach might also be helpful in other cases.

  14. Phasic firing in vasopressin cells: understanding its functional significance through computational models.

    Duncan J MacGregor

Vasopressin neurons, responding to input generated by osmotic pressure, use an intrinsic mechanism to shift from slow irregular firing to a distinct phasic pattern, consisting of long bursts and silences lasting tens of seconds. With increased input, bursts lengthen, eventually shifting to continuous firing. The phasic activity remains asynchronous across the cells and is not reflected in the population output signal. Here we have used a computational vasopressin neuron model to investigate the functional significance of the phasic firing pattern. We generated a concise model of the synaptic input driven spike firing mechanism that gives a close quantitative match to vasopressin neuron spike activity recorded in vivo, tested against endogenous activity and experimental interventions. The integrate-and-fire based model provides a simple physiological explanation of the phasic firing mechanism involving an activity-dependent slow depolarising afterpotential (DAP) generated by a calcium-inactivated potassium leak current. This is modulated by the slower, opposing action of activity-dependent dendritic dynorphin release, which inactivates the DAP, the opposing effects generating successive periods of bursting and silence. Model cells are not spontaneously active, but fire when perturbed by random perturbations mimicking synaptic input. We constructed one population of such phasic neurons, and another population of similar cells which lacked the ability to fire phasically. We then studied how these two populations differed in the way that they encoded changes in afferent inputs. By comparison with the non-phasic population, the phasic population responds linearly to increases in tonic synaptic input. Non-phasic cells respond to transient elevations in synaptic input in a way that strongly depends on background activity levels, whereas phasic cells respond in a way that is independent of background levels and shows a similar strong linearization of the response.
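The ingredients described above, a spike-driven depolarising afterpotential (DAP) opposed by slower, activity-dependent dynorphin that inactivates it, can be sketched in a few lines of integrate-and-fire code. All parameter values here are invented for illustration and are not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 1.0                            # time step (ms)
v_rest, v_thresh = -65.0, -50.0     # resting and threshold potentials (mV)
tau_m = 20.0                        # membrane time constant (ms)
tau_dap, tau_dyn = 2000.0, 8000.0   # the DAP decays faster than dynorphin
k_dap, k_dyn = 1.0, 0.004           # per-spike increments (arbitrary units)

v, dap, dyn = v_rest, 0.0, 0.0
spikes = []
for step in range(100_000):
    # Random synaptic perturbations: the model cell is not spontaneously active.
    syn = rng.normal(0.5, 4.0)
    # Accumulated dynorphin inactivates the depolarising afterpotential.
    drive = syn + dap * max(0.0, 1.0 - dyn)
    v += dt * (v_rest - v) / tau_m + drive
    dap -= dt * dap / tau_dap
    dyn -= dt * dyn / tau_dyn
    if v >= v_thresh:
        spikes.append(step * dt)
        v = v_rest          # reset after each spike
        dap += k_dap        # spike builds the afterpotential (promotes bursting)
        dyn += k_dyn        # and, more slowly, its antagonist (ends the burst)
```

With the DAP active, spikes beget further spikes until accumulated dynorphin shuts the DAP off, which is the qualitative burst-and-silence mechanism the paper quantifies.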

  15. Functional analysis of metabolic channeling and regulation in lignin biosynthesis: a computational approach.

    Yun Lee

Lignin is a polymer in secondary cell walls of plants that is known to have negative impacts on forage digestibility, pulping efficiency, and sugar release from cellulosic biomass. While targeted modifications of different lignin biosynthetic enzymes have permitted the generation of transgenic plants with desirable traits, such as improved digestibility or reduced recalcitrance to saccharification, some of the engineered plants exhibit monomer compositions that are clearly at odds with the expected outcomes when the biosynthetic pathway is perturbed. In Medicago, such discrepancies were partly reconciled by the recent finding that certain biosynthetic enzymes may be spatially organized into two independent channels for the synthesis of guaiacyl (G) and syringyl (S) lignin monomers. Nevertheless, the mechanistic details, as well as the biological function of these interactions, remain unclear. To decipher the working principles of this and similar control mechanisms, we propose and employ here a novel computational approach that permits an expedient and exhaustive assessment of hundreds of minimal designs that could arise in vivo. Interestingly, this comparative analysis not only helps distinguish the two most parsimonious mechanisms of crosstalk between the two channels by formulating a targeted and readily testable hypothesis, but also suggests that the G lignin-specific channel is more important for proper functioning than the S lignin-specific channel. While the proposed strategy of analysis in this article is tightly focused on lignin synthesis, it is likely to be of similar utility in extracting unbiased information in a variety of situations, where the spatial organization of molecular components is critical for coordinating the flow of cellular information, and where initially various control designs seem equally valid.

  16. Functional safeguards for computers for protection systems for Savannah River reactors

    Kritz, W.R.

    1977-06-01

Reactors at the Savannah River Plant have recently been equipped with a ''safety computer'' system. This system utilizes dual digital computers in a primary protection system that monitors individual fuel assembly coolant flow and temperature. The design basis for the SRP safety computer systems allowed for eventual failure of any input sensor or any computer component. These systems are routinely used by reactor operators with a minimum of training in computer technology. The hardware configuration and software design therefore contain safeguards so that neither hardware nor human failures cause significant loss of reactor protection. The performance of the system to date is described.

  17. Practical Steps toward Computational Unification: Helpful Perspectives for New Systems, Adding Functionality to Existing Ones

    Troy, R. M.

    2005-12-01

and functions may be integrated into a system efficiently, with minimal effort, and with an eye toward an eventual Computational Unification of the Earth Sciences. Fundamental to such systems is meta-data, which describes not only the content of data but also how intricate relationships are represented and used to good advantage. Retrieval techniques will be discussed, including trade-offs in using externally managed meta-data versus embedded meta-data, how the two may be integrated, and how "simplifying assumptions" may or may not actually be helpful. The perspectives presented in this talk or poster session are based upon the experience of the Sequoia 2000 and BigSur research projects at the University of California, Berkeley, which sought to unify NASA's Mission To Planet Earth's EOS-DIS, and on-going experience developed by Science Tools corporation, of which the author is a principal. NOTE: These ideas are most easily shared in the form of a talk, and we suspect that this session will generate a lot of interest. We would therefore prefer to have this session accepted as a talk as opposed to a poster session.

  18. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Pernot, Pascal; Savin, Andreas

    2018-06-01

Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
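Both advocated statistics are direct reads from the empirical CDF of unsigned errors. A minimal sketch (function names and the example values below are mine):

```python
import numpy as np

def ecdf_error_stats(errors, threshold, confidence=0.95):
    """From a benchmark's signed errors, return:
    (1) the empirical probability that a new |error| falls below
        `threshold`, and
    (2) the empirical `confidence` quantile of |error|, i.e. an error
        amplitude exceeded only with probability 1 - confidence.
    """
    abs_err = np.abs(np.asarray(errors, dtype=float))
    p_below = float(np.mean(abs_err < threshold))
    q_high = float(np.quantile(abs_err, confidence))
    return p_below, q_high
```

For hypothetical errors of [-0.2, 0.5, 1.5, -0.1, 0.3] kcal/mol and a 1.0 kcal/mol threshold, this gives p = 0.8: the mean unsigned error (0.52) would sit comfortably below the threshold while hiding the fact that one prediction in five misses the target accuracy.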

  19. Revealing Soil Structure and Functional Macroporosity along a Clay Gradient Using X-ray Computed Tomography

    Naveed, Muhammad; Møldrup, Per; Arthur, Emmanuel

    2013-01-01

The influence of clay content in soil-pore structure development and the relative importance of macroporosity in governing convective fluid flow are two key challenges toward better understanding and quantifying soil ecosystem functions. In this study, soil physical measurements (soil-water retention and air permeability) and x-ray computed tomography (CT) scanning were combined and used from two scales on intact soil columns (100 and 580 cm3). The columns were sampled along a natural clay gradient at six locations (L1, L2, L3, L4, L5 and L6 with 0.11, 0.16, 0.21, 0.32, 0.38 and 0.46 kg kg−1 clay content, respectively) at a field site in Lerbjerg, Denmark. The water-holding capacity of soils markedly increased with increasing soil clay content, while significantly higher air permeability was observed for the L1 to L3 soils than for the L4 to L6 soils. Higher air permeability values...

  20. Contrast computation methods for interferometric measurement of sensor modulation transfer function

    Battula, Tharun; Georgiev, Todor; Gille, Jennifer; Goma, Sergio

    2018-01-01

    Accurate measurement of image-sensor frequency response over a wide range of spatial frequencies is very important for analyzing pixel array characteristics, such as modulation transfer function (MTF), crosstalk, and active pixel shape. Such analysis is especially significant in computational photography for the purposes of deconvolution, multi-image superresolution, and improved light-field capture. We use a lensless interferometric setup that produces high-quality fringes for measuring MTF over a wide range of frequencies (here, 37 to 434 line pairs per mm). We discuss the theoretical framework, involving Michelson and Fourier contrast measurement of the MTF, addressing phase alignment problems using a moiré pattern. We solidify the definition of Fourier contrast mathematically and compare it to Michelson contrast. Our interferometric measurement method shows high detail in the MTF, especially at high frequencies (above Nyquist frequency). We are able to estimate active pixel size and pixel pitch from measurements. We compare both simulation and experimental MTF results to a lens-free slanted-edge implementation using commercial software.
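The two contrast measures compared in the paper can be sketched for a sampled fringe. The Michelson form is standard; "Fourier contrast" is implemented here under one common convention (twice the dominant AC Fourier amplitude over the DC term), which may differ in detail from the authors' formal definition:

```python
import numpy as np

def michelson_contrast(signal):
    """(Imax - Imin) / (Imax + Imin) of a sampled fringe."""
    s = np.asarray(signal, dtype=float)
    return (s.max() - s.min()) / (s.max() + s.min())

def fourier_contrast(signal):
    """Twice the largest AC Fourier amplitude over the DC component
    (one common convention; the paper formalizes its own definition)."""
    spec = np.abs(np.fft.rfft(signal))
    return 2.0 * spec[1:].max() / spec[0]

# For an ideal sinusoidal fringe I = I0 * (1 + m*cos(kx)),
# both measures reduce to the modulation m.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
fringe = 1.0 + 0.5 * np.cos(4.0 * x)
```

On real sensor data the two diverge: Michelson contrast is sensitive to any outlier sample, while the Fourier form isolates the fringe frequency, which is part of what the paper investigates.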

  1. Computer-mediated communication preferences predict biobehavioral measures of social-emotional functioning.

    Babkirk, Sarah; Luehring-Jones, Peter; Dennis-Tiwary, Tracy A

    2016-12-01

    The use of computer-mediated communication (CMC) as a form of social interaction has become increasingly prevalent, yet few studies examine individual differences that may shed light on implications of CMC for adjustment. The current study examined neurocognitive individual differences associated with preferences to use technology in relation to social-emotional outcomes. In Study 1 (N = 91), a self-report measure, the Social Media Communication Questionnaire (SMCQ), was evaluated as an assessment of preferences for communicating positive and negative emotions on a scale ranging from purely via CMC to purely face-to-face. In Study 2, SMCQ preferences were examined in relation to event-related potentials (ERPs) associated with early emotional attention capture and reactivity (the frontal N1) and later sustained emotional processing and regulation (the late positive potential (LPP)). Electroencephalography (EEG) was recorded while 22 participants passively viewed emotional and neutral pictures and completed an emotion regulation task with instructions to increase, decrease, or maintain their emotional responses. A greater preference for CMC was associated with reduced size of and satisfaction with social support, greater early (N1) attention capture by emotional stimuli, and reduced LPP amplitudes to unpleasant stimuli in the increase emotion regulatory task. These findings are discussed in the context of possible emotion- and social-regulatory functions of CMC.

  2. Computational Modeling and Theoretical Calculations on the Interactions between Spermidine and Functional Monomer (Methacrylic Acid in a Molecularly Imprinted Polymer

    Yujie Huang

    2015-01-01

This paper theoretically investigates interactions between a template and functional monomer required for synthesizing an efficient molecularly imprinted polymer (MIP). We employed density functional theory (DFT) to compute the geometry, single-point energy, and binding energy (ΔE) of an MIP system, where spermidine (SPD) and methacrylic acid (MAA) were selected as template and functional monomer, respectively. The geometry was calculated using the B3LYP method with the 6-31+G(d) basis set. Furthermore, the 6-311++G(d,p) basis set was used to compute the single-point energy of the above geometry. The optimized geometries at different template to functional monomer molar ratios, the mode of bonding between template and functional monomer, changes in charge on natural bond orbital (NBO) analysis, and binding energy were analyzed. The simulation results show that SPD and MAA form a stable complex via hydrogen bonding. At a 1:5 SPD to MAA ratio, the binding energy is minimal, while the amount of charge transferred between the molecules is maximal; SPD and MAA form a stable complex at a 1:5 molar ratio through six hydrogen bonds. Optimizing the structure of the template-functional monomer complex through computational modeling prior to synthesis significantly contributes towards choosing a suitable pair of template and functional monomer that yields an efficient MIP with high specificity and selectivity.
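The ranking of molar ratios rests on the binding-energy formula, which is simple arithmetic once the DFT single-point energies are in hand. A sketch with invented energies (the abstract does not give the actual values):

```python
def binding_energy(e_complex, e_template, e_monomer, n_monomers):
    """dE = E(complex) - [E(template) + n * E(monomer)], in any
    consistent energy unit; a more negative dE indicates a more
    stable template-monomer complex."""
    return e_complex - (e_template + n_monomers * e_monomer)

# Hypothetical energies (hartree) for a 1:5 template:monomer complex.
dE = binding_energy(e_complex=-10.5, e_template=-4.0,
                    e_monomer=-1.2, n_monomers=5)
```

In practice a basis set superposition error correction is usually applied on top of this raw difference, which the plain formula above omits.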

  3. Studies on the zeros of Bessel functions and methods for their computation: 2. Monotonicity, convexity, concavity, and other properties

    Kerimov, M. K.

    2016-07-01

This work continues the study of the real zeros of Bessel functions of the first and second kinds and general Bessel functions with real variables and orders begun in the first part of this paper (see M.K. Kerimov, Comput. Math. Math. Phys. 54 (9), 1337-1388 (2014)). Some new results concerning such zeros are described and analyzed. Special attention is given to the monotonicity, convexity, and concavity of zeros with respect to their ranks and other parameters.
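To make the object of study concrete: the first positive zero of J0 can be bracketed and bisected from the power series alone. A self-contained illustration of zero computation (not the paper's methods), adequate only for small arguments:

```python
def j0(x, terms=40):
    """Bessel J0 via its alternating power series; accurate for small |x|."""
    s, term = 1.0, 1.0
    for k in range(1, terms):
        term *= -(x * x) / (4.0 * k * k)
        s += term
    return s

def bisect_root(f, a, b, tol=1e-12):
    """Bisection on a sign-changing bracket [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# J0 changes sign between 2 and 3; its first zero is 2.404825557695773...
first_zero = bisect_root(j0, 2.0, 3.0)
```

The monotonicity results surveyed in the paper concern how such zeros move as the rank and the order vary, which bisection alone says nothing about.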

  4. Functional needs which led to the use of digital computing devices in the protection system of 1300 MW units

    Dalle, H.

    1986-01-01

After a review of the classical protection functions used in 900 MW power plants, it is concluded that, in order to preserve operating margins, it is useful to compute the controlled parameters more finely. These computation needs led to the use of digital computing devices. Exploiting the new possibilities, one can improve the overall performance of the protection system with regard to availability, safety and maintenance. In the case of PALUEL, these options led to the realization of SPIN, described here.

  5. Smoothing dynamic positron emission tomography time courses using functional principal components

    Jiang, Ci-Ren; Aston, John A. D.; Wang, Jane-Ling

    2009-01-01

    A functional smoothing approach to the analysis of PET time course data is presented. By borrowing information across space and accounting for this pooling through the use of a non-parametric covariate adjustment, it is possible to smooth the PET time course data thus reducing the noise. A new model for functional data analysis, the Multiplicative Nonparametric Random Effects Model, is introduced to more accurately account for the variation in the data. A locally adaptive bandwidth choice hel...
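The pooling-across-space idea can be illustrated with ordinary discretized PCA. This sketch is not the paper's Multiplicative Nonparametric Random Effects Model, only the basic principal-component projection that functional smoothing builds on (names are mine):

```python
import numpy as np

def pc_smooth(curves, n_components):
    """Smooth sampled time courses by projecting onto the leading
    empirical principal components (a discrete stand-in for FPCA).

    curves: (n, p) array of n time courses at p common time points.
    """
    mean = curves.mean(axis=0)
    centered = curves - mean
    # Right singular vectors = discretized eigenfunctions of the
    # empirical covariance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    scores = centered @ basis.T        # per-curve component scores
    return mean + scores @ basis       # truncated reconstruction
```

Noise spread across the discarded components is removed, while structure shared across voxels survives in the leading components.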

  6. Computation of the modified Bessel function of the third kind of imaginary orders: uniform Airy-type asymptotic expansion

    A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)

    2002-01-01

The use of a uniform Airy-type asymptotic expansion for the computation of the modified Bessel functions of the third kind of imaginary orders ($K_{ia}(x)$) near the transition point $x=a$ is discussed. In [2], an algorithm for the evaluation of $K_{ia}(x)$ was presented, which made use...

  7. The EORTC computer-adaptive tests measuring physical functioning and fatigue exhibited high levels of measurement precision and efficiency

    Petersen, Morten Aa; Aaronson, Neil K; Arraras, Juan I

    2013-01-01

    The European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Group is developing a computer-adaptive test (CAT) version of the EORTC Quality of Life Questionnaire (QLQ-C30). We evaluated the measurement properties of the CAT versions of physical functioning (PF...

  10. Density functional theory based screening of ternary alkali-transition metal borohydrides: A computational material design project

    Hummelshøj, Jens Strabo; Landis, David; Voss, Johannes

    2009-01-01

    We present a computational screening study of ternary metal borohydrides for reversible hydrogen storage based on density functional theory. We investigate the stability and decomposition of alloys containing 1 alkali metal atom, Li, Na, or K (M1); and 1 alkali, alkaline earth or 3d/4d transition...

  11. Complex functionality with minimal computation: Promise and pitfalls of reduced-tracer ocean biogeochemistry models

    Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh

    2015-12-01

Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.

  12. Single-photon emission computed tomography for the assessment of ventricular perfusion and function

    Gonzalez, Patricio; Dussaillant, Gaston; Gutierrez, Daniela; Berrocal, Isabel; Alay, Rita; Otarola, Sonia

    2013-01-01

Background: Single-photon emission computed tomography (SPECT) can be used as a non-invasive tool for the assessment of coronary perfusion. Aim: To assess ventricular perfusion and function by SPECT in patients with single vessel coronary artery disease. Material and Methods: Among patients with indications for a coronary artery angiography, those with significant lesions in one vessel were selected for the study. Within 24 hours, cardiac SPECT examinations under basal conditions and after high doses of dipyridamole were performed. SPECT data from 38 patients with a low probability of coronary artery disease were used for comparisons. Results: Ten patients aged 61 ± 8 years (seven men) were studied. Visual analysis of SPECT revealed signs suggestive of ischemia in eight patients. The remaining two patients did not have perfusion disturbances. SPECT detected eight of ten abnormal vessels reported in the coronary artery angiography. There were two false negative results. Summed stress, summed rest and summed difference scores were 9.78 ± 6.51, 3.22 ± 5.07 and 6.33 ± 4.97, respectively. The ejection fractions under stress and at rest were 53 ± 11.7% and 61 ± 15.7%, respectively (p ≤ 0.01). The figures for the control group were 69.1 ± 13.5% and 75.2 ± 12.04%, respectively (significantly different from patients). Two patients had a summed motion score above 14.9. Likewise, two patients had a summed thickening score above 10.9. Conclusions: SPECT detected 80% of coronary lesions found during coronary artery angiography. Visual analysis of perfusion is highly reliable for diagnosis. Quantitative parameters must be considered only as reference parameters.
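For reference, the summed scores reported above are linked by the standard perfusion-SPECT convention: the summed difference score indexes stress-induced (reversible) defect burden. A trivial sketch:

```python
def summed_difference_score(summed_stress, summed_rest):
    """SDS = SSS - SRS: the standard perfusion-SPECT index of
    stress-induced (reversible) perfusion defect burden, computed
    per patient from the segmental stress and rest scores."""
    return summed_stress - summed_rest

# e.g. a patient with SSS = 10 and SRS = 3 has an SDS of 7.
sds = summed_difference_score(10, 3)
```

Note that the group means quoted in the abstract are averages of per-patient scores, so they need not satisfy the identity exactly after rounding.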

  13. WE-FG-207B-02: Material Reconstruction for Spectral Computed Tomography with Detector Response Function

    Liu, J; Gao, H

    2016-01-01

Purpose: Different from conventional computed tomography (CT), spectral CT based on energy-resolved photon-counting detectors is able to provide unprecedented material composition. However, an important missing piece for accurate spectral CT is to incorporate the detector response function (DRF), which is distorted by factors such as pulse pileup and charge sharing. In this work, we propose material reconstruction methods for spectral CT with the DRF. Methods: The polyenergetic X-ray forward model takes the DRF into account for accurate material reconstruction. Two image reconstruction methods are proposed: a direct method based on the nonlinear data fidelity from the DRF-based forward model, and a linear-data-fidelity based method that relies on spectral rebinning so that the corresponding DRF matrix is invertible. The image reconstruction problem is then regularized with an isotropic TV term and solved by the alternating direction method of multipliers. Results: The simulation results suggest that the proposed methods provided more accurate material compositions than the standard method without the DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality compared with the proposed method with nonlinear data fidelity. Conclusion: We have proposed material reconstruction methods for spectral CT with the DRF, which provided more accurate material compositions than the standard methods without the DRF. The method with linear data fidelity gave the better reconstruction quality of the two. Jiulong Liu and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
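As context for what the DRF complicates: the idealized, DRF-free, linearized material decomposition is just a per-ray least-squares solve. A toy sketch with invented attenuation coefficients, not the paper's forward model:

```python
import numpy as np

# Assumed per-bin mass-attenuation coefficients for two basis materials
# (rows: 3 energy bins; columns: e.g. water-like, iodine-like). Invented.
A = np.array([[0.30, 2.10],
              [0.25, 1.10],
              [0.20, 0.55]])

true_path = np.array([1.5, 0.02])   # material path lengths along one ray
line_integrals = A @ true_path      # ideal (noise-free) bin measurements

# Linearized decomposition. A realistic spectral-CT model must also fold
# the detector response function into the forward operator, which makes
# the data fidelity nonlinear -- the problem the paper addresses.
est, *_ = np.linalg.lstsq(A, line_integrals, rcond=None)
```

When the DRF matrix is invertible after rebinning, the paper's linear-data-fidelity method recovers a structure of essentially this form.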

  14. Nephrocalcinosis in rabbits - correlation of ultrasound, computed tomography, pathology and renal function

    Cramer, B.; Pushpanathan, C.

    1998-01-01

    Objective. The purpose of this study was to induce nephrocalcinosis (NC) in rabbits with phosphate, vitamin D, oxalate and furosemide, to determine the effect on renal function and to correlate detection by ultrasound (US) and computed tomography (CT) with pathology. Materials and methods. Seventy-five immature New Zealand white rabbits were divided into five groups of 15. In each group, 5 animals were controls and 10 were given oral phosphate, furosemide, vitamin D or oxalate. Unilateral nephrectomy was performed at 3-6 weeks, and 5 rabbits of each test group were withdrawn from the substance. Weekly US was performed, as well as US, CT and measurement of serum creatinine at the time of nephrectomy and prior to planned demise. Results. A total of 140 kidneys in 75 rabbits had both pathological and US correlation, with CT correlation in 126. Forty rabbits developed nephrocalcinosis, with early (post nephrectomy at 3-6 weeks) or late (post demise at 10-20 weeks) pathological correlation obtained in 53 kidneys. Forty-one of these kidneys were from test animals: 23 developed NC early, 18 late. Twelve controls developed NC; 4 early, 8 late. Comparing US and CT to pathology, the sensitivity was 96% for US and 64% for CT. Specificity was 85% for US and 96% for CT. In 109 kidneys, information on serum creatinine level was available to correlate with pathology. The mean creatinine level was 138 mmol/l for those with NC and 118 mmol/l for those without NC (P<0.001).

  15. Nephrocalcinosis in rabbits - correlation of ultrasound, computed tomography, pathology and renal function

    Cramer, B.; Pushpanathan, C. [Janeway Child Health Centre, St. John's (Canada). Radiology Dept.]; Husa, L. [Memorial Univ. of Newfoundland, St. John's (Canada)]

    1998-01-01

    Objective. The purpose of this study was to induce nephrocalcinosis (NC) in rabbits with phosphate, vitamin D, oxalate and furosemide, to determine the effect on renal function and to correlate detection by ultrasound (US) and computed tomography (CT) with pathology. Materials and methods. Seventy-five immature New Zealand white rabbits were divided into five groups of 15. In each group, 5 animals were controls and 10 were given oral phosphate, furosemide, vitamin D or oxalate. Unilateral nephrectomy was performed at 3-6 weeks, and 5 rabbits of each test group were withdrawn from the substance. Weekly US was performed, as well as US, CT and measurement of serum creatinine at the time of nephrectomy and prior to planned demise. Results. A total of 140 kidneys in 75 rabbits had both pathological and US correlation, with CT correlation in 126. Forty rabbits developed nephrocalcinosis, with early (post nephrectomy at 3-6 weeks) or late (post demise at 10-20 weeks) pathological correlation obtained in 53 kidneys. Forty-one of these kidneys were from test animals: 23 developed NC early, 18 late. Twelve controls developed NC; 4 early, 8 late. Comparing US and CT to pathology, the sensitivity was 96% for US and 64% for CT. Specificity was 85% for US and 96% for CT. In 109 kidneys, information on serum creatinine level was available to correlate with pathology. The mean creatinine level was 138 mmol/l for those with NC and 118 mmol/l for those without NC (P<0.001).
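The sensitivity and specificity figures quoted in these two records follow from standard contingency counts. The per-kidney counts below are illustrative, chosen only to reproduce the quoted US values:

```python
# Sensitivity/specificity from true/false positive and negative counts
# (the actual per-kidney contingency counts are not given in the abstract).
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# Example: 48 of 50 diseased kidneys flagged, 17 of 20 healthy kidneys clear.
assert round(sensitivity(48, 2), 2) == 0.96
assert round(specificity(17, 3), 2) == 0.85
```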

  16. Multidetector computed tomography predictors of late ventricular remodeling and function after acute myocardial infarction

    Lessick, Jonathan; Abadi, Sobhi; Agmon, Yoram; Keidar, Zohar; Carasso, Shemi; Aronson, Doron; Ghersin, Eduard; Rispler, Shmuel; Sebbag, Anat; Israel, Ora; Hammerman, Haim; Roguin, Ariel

    2012-01-01

    Background: Despite the advent of rapid arterial revascularization as first-line treatment for acute myocardial infarction (AMI), incomplete restoration of flow at the microvascular level remains a problem and is associated with adverse prognosis, including pathological ventricular remodeling. We aimed to study the association between multidetector row computed tomography (MDCT) perfusion defects and ventricular remodeling post-AMI. Methods: In a prospective study, 20 patients with ST-elevation AMI, treated by primary angioplasty, underwent arterial and late phase MDCT as well as radionuclide scans to study the presence, size and severity of myocardial perfusion defects. Contrast echocardiography was performed at baseline and at 4 months follow-up to evaluate changes in myocardial function and remodeling. Results: Early defects (ED), late defects (LD) and late enhancement (LE) were detected in 15, 7 and 16 patients, respectively, and radionuclide defects in 15 patients. The ED area (r = 0.74) and LD area (r = 0.72), and to a lesser extent the LE area (r = 0.62), correlated moderately well with the SPECT summed rest score. By univariate analysis, follow-up end-systolic volume index and ejection fraction were both significantly related to ED and LD size and severity, but not to LE size or severity. By multivariate analysis, end-systolic volume index was best predicted by LD area (p < 0.05) and ejection fraction by LD enhancement ratio. Conclusions: LD size and severity on MDCT are most closely associated with pathological ventricular remodeling after AMI and may thus play a role in early identification and treatment of this condition.

  17. Assessment of left ventricular function and mass in dual-source computed tomography coronary angiography

    Jensen, Christoph J., E-mail: c.jensen@contilia.d [Department of Cardiology and Angiology, Elisabeth Hospital, Essen (Germany); Jochims, Markus [Department of Cardiology and Angiology, Elisabeth Hospital, Essen (Germany); Hunold, Peter; Forsting, Michael; Barkhausen, Joerg [Department of Diagnostic and Interventional Radiology and Neuroradiology, University of Essen (Germany); Sabin, Georg V.; Bruder, Oliver [Department of Cardiology and Angiology, Elisabeth Hospital, Essen (Germany); Schlosser, Thomas [Department of Diagnostic and Interventional Radiology and Neuroradiology, University of Essen (Germany)

    2010-06-15

    Purpose: To quantify left ventricular (LV) function and mass (LVM) derived from dual-source computed tomography (DSCT) and the influence of beta-blocker administration compared to cardiac magnetic resonance imaging (CMR). Methods: Thirty-two patients undergoing cardiac DSCT and CMR were included, of whom fifteen received metoprolol intravenously before DSCT. LV parameters were calculated by the disc-summation method (DSM) and by a segmented region-growing algorithm (RGA). All data sets were analyzed by two blinded observers. Interobserver agreement was tested by the intraclass correlation coefficient. Results: 1. Using DSM, LV parameters were not statistically different between DSCT and CMR in all patients (DSCT vs. CMR: EF 63 ± 8% vs. 64 ± 8%, p = 0.47; EDV 136 ± 36 ml vs. 138 ± 35 ml, p = 0.66; ESV 52 ± 21 ml vs. 52 ± 22 ml, p = 0.61; SV 83 ± 22 ml vs. 87 ± 19 ml, p = 0.22; CO 5.4 ± 0.9 l/min vs. 5.7 ± 1.2 l/min, p = 0.09; LVM 132 ± 33 g vs. 132 ± 33 g, p = 0.99). 2. In a subgroup of 15 patients, beta-blockade prior to DSCT resulted in a lower ejection fraction (EF), stroke volume (SV) and cardiac output (CO) and an increase in end-systolic volume (ESV) in DSCT (EF 59 ± 8% vs. 62 ± 9%; SV 73 ± 17 ml vs. 81 ± 15 ml; CO 5.7 ± 1.2 l/min vs. 5.0 ± 0.8 l/min; ESV 52 ± 27 ml vs. 57 ± 24 ml, all p < 0.05). 3. Analyzing the RGA parameters, LV volumes were not significantly different compared to DSM, whereas LVM was higher using RGA (177 ± 31 g vs. 132 ± 33 g, p < 0.05). Interobserver agreement was excellent for the DSM values, with the best agreement for the RGA calculations. Conclusion: Left ventricular volumes and mass can reliably be assessed by DSCT compared to CMR. However, beta-blocker administration leads to statistically significantly reduced EF, SV and CO, whereas ESV significantly increases. DSCT RGA reliably analyzes LV function, whereas LVM is overestimated compared to DSM.
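The disc-summation method (DSM) used in this record amounts to summing short-axis slice areas times slice thickness, from which the ejection fraction follows. A minimal sketch with hypothetical slice areas:

```python
# Disc-summation (Simpson's method) sketch: LV volume as the sum of
# short-axis slice areas times slice thickness; all values illustrative.
def disc_summation_volume(areas_cm2, thickness_cm):
    return sum(areas_cm2) * thickness_cm    # volume in ml (cm^3)

def ejection_fraction(edv, esv):
    return (edv - esv) / edv * 100.0        # percent

edv = disc_summation_volume([18, 22, 24, 22, 18, 12, 6], 1.0)   # 122 ml
esv = disc_summation_volume([8, 10, 11, 10, 8, 5, 2], 1.0)      # 54 ml
assert abs(ejection_fraction(edv, esv) - 55.74) < 0.01
```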

  18. Assessment of left ventricular function and mass in dual-source computed tomography coronary angiography

    Jensen, Christoph J.; Jochims, Markus; Hunold, Peter; Forsting, Michael; Barkhausen, Joerg; Sabin, Georg V.; Bruder, Oliver; Schlosser, Thomas

    2010-01-01

    Purpose: To quantify left ventricular (LV) function and mass (LVM) derived from dual-source computed tomography (DSCT) and the influence of beta-blocker administration compared to cardiac magnetic resonance imaging (CMR). Methods: Thirty-two patients undergoing cardiac DSCT and CMR were included, of whom fifteen received metoprolol intravenously before DSCT. LV parameters were calculated by the disc-summation method (DSM) and by a segmented region-growing algorithm (RGA). All data sets were analyzed by two blinded observers. Interobserver agreement was tested by the intraclass correlation coefficient. Results: 1. Using DSM, LV parameters were not statistically different between DSCT and CMR in all patients (DSCT vs. CMR: EF 63 ± 8% vs. 64 ± 8%, p = 0.47; EDV 136 ± 36 ml vs. 138 ± 35 ml, p = 0.66; ESV 52 ± 21 ml vs. 52 ± 22 ml, p = 0.61; SV 83 ± 22 ml vs. 87 ± 19 ml, p = 0.22; CO 5.4 ± 0.9 l/min vs. 5.7 ± 1.2 l/min, p = 0.09; LVM 132 ± 33 g vs. 132 ± 33 g, p = 0.99). 2. In a subgroup of 15 patients, beta-blockade prior to DSCT resulted in a lower ejection fraction (EF), stroke volume (SV) and cardiac output (CO) and an increase in end-systolic volume (ESV) in DSCT (EF 59 ± 8% vs. 62 ± 9%; SV 73 ± 17 ml vs. 81 ± 15 ml; CO 5.7 ± 1.2 l/min vs. 5.0 ± 0.8 l/min; ESV 52 ± 27 ml vs. 57 ± 24 ml, all p < 0.05). 3. Analyzing the RGA parameters, LV volumes were not significantly different compared to DSM, whereas LVM was higher using RGA (177 ± 31 g vs. 132 ± 33 g, p < 0.05). Interobserver agreement was excellent for the DSM values, with the best agreement for the RGA calculations. Conclusion: Left ventricular volumes and mass can reliably be assessed by DSCT compared to CMR. However, beta-blocker administration leads to statistically significantly reduced EF, SV and CO, whereas ESV significantly increases. DSCT RGA reliably analyzes LV function, whereas LVM is overestimated compared to DSM.

  19. Functional computed tomography imaging of tumor-induced angiogenesis. Preliminary results of new tracer kinetic modeling using a computer discretization approach

    Kaneoya, Katsuhiko; Ueda, Takuya; Suito, Hiroshi

    2008-01-01

    The aim of this study was to establish functional computed tomography (CT) imaging as a method for assessing tumor-induced angiogenesis. Functional CT imaging was mathematically analyzed for 14 renal cell carcinomas by means of two-compartment modeling using a computer-discretization approach. The model incorporated diffusible kinetics of contrast medium including leakage from the capillary to the extravascular compartment and back-flux to the capillary compartment. The correlations between functional CT parameters [relative blood volume (rbv), permeability 1 (Pm1), and permeability 2 (Pm2)] and histopathological markers of angiogenesis [microvessel density (MVD) and vascular endothelial growth factor (VEGF)] were statistically analyzed. The modeling was successfully performed, showing similarity between the mathematically simulated curve and the measured time-density curve. There were significant linear correlations between MVD grade and Pm1 (r=0.841, P=0.001) and between VEGF grade and Pm2 (r=0.804, P=0.005) by Pearson's correlation coefficient. This method may be a useful tool for the assessment of tumor-induced angiogenesis. (author)
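The two-compartment kinetics with capillary-to-extravascular leakage and back-flux described in this record can be discretized in time. The explicit-Euler sketch below uses illustrative parameter names (rbv, pm1, pm2) echoing the record, not the authors' actual discretization:

```python
import numpy as np

# Two-compartment contrast-kinetics sketch (explicit Euler): leakage pm1
# from capillary to extravascular space, back-flux pm2; values illustrative.
def tissue_curve(cp, dt, rbv, pm1, pm2):
    ce = 0.0
    out = []
    for c in cp:                        # cp: capillary (arterial) concentration
        out.append(rbv * c + ce)        # intravascular + leaked contrast
        ce += dt * (pm1 * c - pm2 * ce) # Euler step for extravascular pool
    return np.array(out)

t = np.arange(0, 60, 1.0)
cp = np.exp(-((t - 15.0) / 8.0) ** 2)   # toy arterial input bolus
curve = tissue_curve(cp, 1.0, 0.2, 0.05, 0.02)
assert curve.shape == t.shape and curve.max() > 0
```

Fitting such a simulated curve to the measured time-density curve is what yields the parameters rbv, Pm1 and Pm2 in the record.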

  20. Evaluation of left ventricular function and volume with multidetector-row computed tomography. Comparison with electrocardiogram-gated single photon emission computed tomography

    Suzuki, Takeya; Yamashina, Shohei; Nanjou, Shuji; Yamazaki, Junichi

    2007-01-01

    This study compared left ventricular systolic function and volume determined by multidetector-row computed tomography (MDCT) and electrocardiogram-gated single photon emission computed tomography (G-SPECT). Thirty-seven patients with coronary artery disease and non-cardiovascular disease underwent MDCT. In this study, left ventricular ejection fraction (EF), left ventricular end-diastolic volume (EDV) and left ventricular end-systolic volume (ESV) were calculated using only two-phase imaging with MDCT. Left ventricular function and volume were compared with measurements from G-SPECT. MDCT and G-SPECT were conducted virtually simultaneously. Both the EF and ESV evaluated by MDCT closely correlated with G-SPECT (r=0.763). A heart rate over 65 bpm during MDCT significantly influenced the difference in EF calculated from MDCT and G-SPECT (P<0.05). Left ventricular function can be measured with MDCT as well as with G-SPECT. However, a heart rate over 65 bpm during MDCT negatively affects the EF correlation between MDCT and G-SPECT. (author)

  1. Computing wave functions in multichannel collisions with non-local potentials using the R-matrix method

    Bonitati, Joey; Slimmer, Ben; Li, Weichuan; Potel, Gregory; Nunes, Filomena

    2017-09-01

    The calculable form of the R-matrix method has been previously shown to be a useful tool in approximately solving the Schrodinger equation in nuclear scattering problems. We use this technique combined with the Gauss quadrature for the Lagrange-mesh method to efficiently solve for the wave functions of projectile nuclei in low energy collisions (1-100 MeV) involving an arbitrary number of channels. We include the local Woods-Saxon potential, the non-local potential of Perey and Buck, a Coulomb potential, and a coupling potential to computationally solve for the wave function of two nuclei at short distances. Object oriented programming is used to increase modularity, and parallel programming techniques are introduced to reduce computation time. We conclude that the R-matrix method is an effective method to predict the wave functions of nuclei in scattering problems involving both multiple channels and non-local potentials. Michigan State University iCER ACRES REU.
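The Gauss quadrature underlying the Lagrange-mesh evaluation mentioned in this record can be illustrated with a generic Gauss-Legendre rule (not the authors' code):

```python
import numpy as np

# Gauss-Legendre quadrature sketch: n-point rule is exact for
# polynomials of degree <= 2n - 1.
def gauss_integral(f, a, b, n):
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    xm = 0.5 * (b - a) * x + 0.5 * (b + a)      # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(xm))

# 3-point rule integrates r^4 on [0, 1] exactly: the answer is 1/5.
val = gauss_integral(lambda r: r ** 4, 0.0, 1.0, 3)
assert abs(val - 0.2) < 1e-12
```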

  2. Computation of Green function of the Schroedinger-like partial differential equations by the numerical functional integration

    Lobanov, Yu.Yu.; Shahbagian, R.R.; Zhidkov, E.P.

    1991-01-01

    A new method for the numerical solution of the boundary problem for Schroedinger-like partial differential equations in R^n is elaborated. The method is based on the representation of the multidimensional Green function in the form of a multiple functional integral and on the use of approximation formulas constructed for such integrals. The convergence of the approximations to the exact value is proved, and the remainder of the formulas is estimated. The method reduces the initial differential problem to quadratures. 16 refs.; 7 tabs
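The idea of trading a Green-function boundary problem for a functional integral can be illustrated with the simplest case, the heat kernel, where the integral reduces to an expectation over Brownian endpoints. This Monte Carlo toy is a stand-in for, not an instance of, the authors' approximation formulas:

```python
import random, math

# Feynman-Kac toy: u(x, t) = E[f(x + W_t)] solves u_t = 0.5 * u_xx,
# estimated here by sampling Gaussian (Brownian) endpoints.
def mc_heat_solution(f, x, t, n_paths=200000, seed=1):
    rng = random.Random(seed)
    s = 0.0
    for _ in range(n_paths):
        s += f(x + rng.gauss(0.0, math.sqrt(t)))
    return s / n_paths

# For f(y) = y^2 the exact value is x^2 + t.
u = mc_heat_solution(lambda y: y * y, x=1.0, t=0.5)
assert abs(u - 1.5) < 0.05
```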

  3. A Dynamic Connectome Supports the Emergence of Stable Computational Function of Neural Circuits through Reward-Based Learning.

    Kappel, David; Legenstein, Robert; Habenschuss, Stefan; Hsieh, Michael; Maass, Wolfgang

    2018-01-01

    Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After good computational performance is reached, these ongoing processes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.

  4. FIT: Computer Program that Interactively Determines Polynomial Equations for Data which are a Function of Two Independent Variables

    Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.

    1985-01-01

    A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by the ability to compute the polynomial equations of a surface fit through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program, as well as sample input and corresponding output, is included.
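The core of such a program is an ordinary linear least-squares solve over polynomial terms in the two independent variables. A minimal sketch (the term set and data below are illustrative, not FIT's actual interface):

```python
import numpy as np

# Least-squares fit of a polynomial surface
# z = c00 + c10*x + c01*y + c11*x*y + c20*x^2 + c02*y^2
# through data that depend on two independent variables.
def fit_surface(x, y, z):
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = rng.uniform(-1, 1, 100)
z = 2.0 + 3.0 * x - 1.0 * y + 0.5 * x * y   # noiseless synthetic surface
c = fit_surface(x, y, z)
assert np.allclose(c, [2.0, 3.0, -1.0, 0.5, 0.0, 0.0], atol=1e-8)
```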

  5. Cardiovascular measurement and cardiac function analysis with electron beam computed tomography in healthy Chinese people (report of 50 cases)

    Lu Bin; Dai Ruping; Zhang Shaoxiong; Bai Hua; Jing Baolian; Cao Cheng; He Sha; Ren Li

    1998-01-01

    Purpose: To quantitatively measure cardiovascular diameters and functional parameters using electron beam computed tomography (EBCT). Methods: Fifty healthy Chinese subjects (27 men, 23 women; average age 47.7 years) underwent EBCT transverse and short-axis enhanced cine scans. The transverse scan was used to measure the diameters of the ascending aorta, descending aorta, pulmonary artery and left atrium. The cine study was used to measure the left ventricular myocardial thickness and to analyze the global, sectional and segmental function of the right and left ventricles. Results: The cardiovascular diameters and cardiac functional parameters were calculated. The diameters and most functional parameters (end-systolic volume, stroke volume, ejection fraction, cardiac output, cardiac index) of normal Chinese men were greater than those of women, though not significantly (P>0.05). However, the differences in EDV and myocardial mass (MyM) of both ventricles were significant (p<0.01). Conclusion: EBCT is a minimally invasive method for cardiovascular measurement and cardiac function evaluation.

  6. Application node system image manager subsystem within a distributed function laboratory computer system

    Stubblefield, F.W.; Beck, R.D.

    1978-10-01

    A computer system to control and acquire data from one x-ray diffraction, five neutron scattering, and four neutron diffraction experiments located at the Brookhaven National Laboratory High Flux Beam Reactor has operated in a routine manner for over three years. The computer system is configured as a network of computer processors with the processor interconnections assuming a star-like structure. At the points of the star are the ten experiment control-data acquisition computers, referred to as application nodes. At the center of the star is a shared service node which supplies a set of shared services utilized by all of the application nodes. A program development node occupies one additional point of the star. The design and implementation of a network subsystem to support development and execution of operating systems for the application nodes is described. 6 figures, 1 table

  7. A Functional Correspondence between Monadic Evaluators and Abstract Machines for Languages with Computational Effects

    Ager, Mads Sig; Danvy, Olivier; Midtgaard, Jan

    2005-01-01

    We extend our correspondence between evaluators and abstract machines from the pure setting of the lambda-calculus to the impure setting of the computational lambda-calculus. We show how to derive new abstract machines from monadic evaluators for the computational lambda-calculus. Starting from (1) a generic evaluator parameterized by a monad and (2) a monad specifying a computational effect, we inline the components of the monad in the generic evaluator to obtain an evaluator written in a style that is specific to this computational effect. We then derive the corresponding abstract machine by closure-converting, CPS-transforming, and defunctionalizing this specific evaluator. We illustrate the construction first with the identity monad, obtaining the CEK machine, and then with a lifting monad, a state monad, and with a lifted state monad, obtaining variants of the CEK machine with error handling, state...

  8. Computational Methods for Large Spatio-temporal Datasets and Functional Data Ranking

    Huang, Huang

    2017-01-01

    that are both computationally and statistically efficient. We explore the improvement of the approximation theoretically and investigate the performance by simulations. For real applications, we analyze a soil moisture dataset with 2 million measurements

  9. Assessment of tumor vascularization with functional computed tomography perfusion imaging in patients with cirrhotic liver disease.

    Li, Jin-Ping; Zhao, De-Li; Jiang, Hui-Jie; Huang, Ya-Hua; Li, Da-Qing; Wan, Yong; Liu, Xin-Ding; Wang, Jin-E

    2011-02-01

    Hepatocellular carcinoma (HCC) is a common malignant tumor in China, and early diagnosis is critical for patient outcome. HCC mostly arises on a background of liver cirrhosis, developing from benign regenerative nodules and dysplastic nodules to HCC lesions, and a better understanding of its vascular supply and the hemodynamic changes may lead to early tumor detection. Angiogenesis is essential for the growth of primary and metastatic tumors due to changes in vascular perfusion, blood volume and permeability. These hemodynamic and physiological properties can be measured serially using functional computed tomography perfusion (CTP) imaging and can be used to assess the growth of HCC. This study aimed to clarify the physiological characteristics of tumor angiogenesis in cirrhotic liver disease by this fast imaging method. CTP was performed in 30 volunteers without liver disease (control subjects) and 49 patients with liver disease (experimental subjects: 27 with HCC and 22 with cirrhosis). All subjects were also evaluated by physical examination, laboratory screening and Doppler ultrasonography of the liver. The diagnosis of HCC was made according to the EASL criteria. All patients underwent contrast-enhanced ultrasonography, pre- and post-contrast triple-phase CT and a CTP study. A mathematical deconvolution model was applied to provide hepatic blood flow (HBF), hepatic blood volume (HBV), mean transit time (MTT), permeability of capillary vessel surface (PS), hepatic arterial index (HAI), hepatic arterial perfusion (HAP) and hepatic portal perfusion (HPP) data. The Mann-Whitney U test was used to determine differences in perfusion parameters between the background cirrhotic liver parenchyma and HCC and between the cirrhotic liver parenchyma with HCC and that without HCC. In normal liver, the HAP/HPP ratio was about 1/4. HCC had significantly higher HAP and HAI and lower HPP than the background liver parenchyma adjacent to the HCC. The value of HBF at the tumor

  10. Density functionalized [RuII(NO)(Salen)(Cl)] complex: Computational photodynamics and in vitro anticancer facets.

    Mir, Jan Mohammad; Jain, N; Jaget, P S; Maurya, R C

    2017-09-01

    Photodynamic therapy (PDT) is a treatment that uses photosensitizing agents to kill cancer cells. The scientific community has been eager for decades to design an efficient PDT drug. Under such purview, the current report deals with the computational photodynamic behavior of a ruthenium(II) nitrosyl complex containing N,N'-salicylaldehyde-ethylenediimine (SalenH 2 ), the synthesis and X-ray crystallography of which are already known [Ref. 38,39]. The Gaussian 09W software package was employed to carry out the density functional theory (DFT) studies. DFT calculations with Becke-3-Lee-Yang-Parr (B3LYP)/Los Alamos National Laboratory 2 Double Z (LanL2DZ) specified for the Ru atom and the B3LYP/6-31G(d,p) combination for all other atoms were performed using the effective core potential method. Both the ground and excited states of the complex were evaluated. Some known photosensitizers were compared with the target complex; phthalocyanine and porphyrin derivatives were the compounds selected for this comparative study. It is suggested that the effective photoactivity found is due to the presence of the ruthenium core in the model complex. In addition to the evaluation of theoretical aspects, in vitro anticancer activity against COLO-205 human cancer cells has also been assessed. Particular emphasis was placed on extrapolating the DFT results to depict the chemical power of the target compound to release nitric oxide. A promising visible-light-triggered nitric oxide releasing power of the compound has been inferred. In vitro antiproliferative studies of [RuCl 3 (PPh 3 ) 3 ] and [Ru(NO)(Salen)(Cl)] have revealed the model complex to be an excellent anticancer agent. From IC 50 values of 40.031 mg/mL for the former and 9.74 mg/mL for the latter, it is established that the latter bears more anticancer potential. Overall, the DFT-based structural elucidation and the efficiency of the NO, Ru and Salen co-ligands have shown promising drug delivery properties and a good candidacy for both chemotherapy as well as

  11. Motion estimation for cardiac functional analysis using two x-ray computed tomography scans.

    Fung, George S K; Ciuffo, Luisa; Ashikaga, Hiroshi; Taguchi, Katsuyuki

    2017-09-01

    This work concerns computed tomography (CT)-based cardiac functional analysis (CFA) with a reduced radiation dose. As CT-CFA requires images over the entire heartbeat, the scans are often performed at 10-20% of the tube current settings that are typically used for coronary CT angiography. A large image noise then degrades the accuracy of motion estimation. Moreover, even if the scan was performed during sinus rhythm, the cardiac motion observed in CT images may not be cyclic in patients with atrial fibrillation. In this study, we propose to use two CT scans, one for CT angiography at a quiescent phase at a standard dose and the other for CFA over the entire heartbeat at a lower dose. We have made the following four modifications to an image-based cardiac motion estimation method we have previously developed for full-dose retrospectively gated coronary CT angiography: (a) a full-dose prospectively gated coronary CT angiography image acquired at the least-motion phase was used as the reference image; (b) a three-dimensional median filter was applied to lower-dose retrospectively gated cardiac images acquired at 20 phases over one heartbeat in order to reduce image noise; (c) the strength of the temporal regularization term was made adaptive; and (d) a one-dimensional temporal filter was applied to the estimated motion vector field in order to decrease jaggy motion patterns. We describe the conventional method iME1 and the proposed method iME2 in this article. Five observers assessed the accuracy of the estimated motion vector field of iME2 and iME1 using a 4-point scale. The observers repeated the assessment with data presented in a new random order 1 week after the first assessment session. The study confirmed that the proposed iME2 was robust against the mismatch of noise levels, contrast enhancement levels, and shapes of the chambers. There was a statistically significant difference between iME2 and iME1 (accuracy score, 2.08 ± 0.81 versus 2.77
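Modification (d), a one-dimensional temporal filter on the estimated motion vector field, might look like the following sketch. The kernel weights and the cyclic handling of phases are assumptions, not the authors' filter:

```python
import numpy as np

# Temporal smoothing sketch for a motion vector field sampled over cardiac
# phases: a cyclic 3-tap filter along the phase axis damps jaggy patterns.
def smooth_temporal(mvf, kernel=(0.25, 0.5, 0.25)):
    """mvf: array of shape (n_phases, ...); filter along axis 0, wrapping
    because the cardiac phases are cyclic."""
    out = np.zeros_like(mvf, dtype=float)
    for offset, weight in zip((-1, 0, 1), kernel):
        out += weight * np.roll(mvf, offset, axis=0)
    return out

jaggy = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=float)  # alternating noise
smooth = smooth_temporal(jaggy)
assert np.allclose(smooth, 0.5)                          # noise averaged out
```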

  12. Correlation of chest computed tomography findings with dyspnea and lung functions in post-tubercular sequelae

    Ananya Panda

    2016-01-01

    Aims: To study the correlation between dyspnea, radiological findings, and pulmonary function tests (PFTs) in patients with sequelae of pulmonary tuberculosis (TB). Materials and Methods: Clinical history, chest computed tomography (CT), and PFT of patients with post-TB sequelae were recorded. Dyspnea was graded according to the Modified Medical Research Council (mMRC) scale. CT scans were analyzed for fibrosis, cavitation, bronchiectasis, consolidation, nodules, and aspergilloma. Semi-quantitative analysis was done for these abnormalities. Scores were added to obtain a total morphological score (TMS). The lungs were also divided into three zones and scores added to obtain the total lung score (TLS). Spirometry was done for forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), and FEV1/FVC. Results: Dyspnea was present in 58/101 patients. A total of 22/58 patients had mMRC Grade 1, and 17/58 patients had Grades 2 and 3 dyspnea each. There was a significant difference in median fibrosis, bronchiectasis, and nodule scores (P < 0.01), TMS, and TLS (P < 0.0001) between the dyspnea and non-dyspnea groups. Significant correlations were obtained between grades of dyspnea and fibrosis (r = 0.34, P = 0.006), bronchiectasis (r = 0.35, P = 0.004), and nodule (r = 0.24, P = 0.016) scores, TMS (r = 0.398, P = 0.000), and TLS (r = 0.35, P = 0.0003). PFTs were impaired in 78/101 (77.2%) patients. A restrictive defect was most common (39.6%), followed by a mixed defect (34.7%). There was a negative but statistically insignificant trend between PFT and fibrosis, bronchiectasis, and nodule scores, TMS, and TLS. However, there were significant differences in median fibrosis, cavitation, and bronchiectasis scores in patients with normal, mild to moderate, and severe respiratory defects. No difference was seen in TMS and TLS according to the severity of the respiratory defect. Conclusion: Both fibrosis and bronchiectasis correlated with dyspnea and with PFT. However, this correlation was not
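The grade-versus-score correlations reported in this record are plain Pearson coefficients. A self-contained sketch with illustrative data (not the study's measurements):

```python
import math

# Pearson correlation coefficient between two paired samples.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

# Toy dyspnea grades vs. CT scores, loosely increasing together.
grades = [0, 0, 1, 1, 2, 2, 3, 3]
scores = [1, 2, 2, 3, 4, 3, 5, 6]
r = pearson_r(grades, scores)
assert 0.9 < r < 1.0
```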

  13. Stream function method for computing steady rotational transonic flows with application to solar wind-type problems

    Kopriva, D.A.

    1982-01-01

    A numerical scheme has been developed to solve the quasilinear form of the transonic stream function equation. The method is applied to compute steady two-dimensional axisymmetric solar wind-type problems. A single, perfect, non-dissipative, homentropic, polytropic gas is assumed. The four equations governing mass and momentum conservation are reduced to a single nonlinear second-order partial differential equation for the stream function. Bernoulli's equation is used to obtain a nonlinear algebraic relation for the density in terms of stream function derivatives. The vorticity includes the effects of azimuthal rotation and Bernoulli's function and is determined from quantities specified on the boundaries. The approach is efficient: the number of equations and independent variables has been reduced, and a rapid relaxation technique developed for the transonic full potential equation is used. Second-order accurate central differences are used in elliptic regions. In hyperbolic regions a dissipation term motivated by the rotated differencing scheme of Jameson is added for stability. A successive-line-overrelaxation technique, also introduced by Jameson, is used to solve the equations. The nonlinear equation for the density is a double-valued function of the stream function derivatives. The velocities are extrapolated from upwind points to determine the proper branch, and Newton's method is used to iteratively compute the density. This allows accurate solutions with few grid points.
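The Newton iteration for the double-valued Bernoulli density relation can be sketched as follows. The normalization (mass flux m, Bernoulli constant B, polytropic exponent g) is illustrative, not the paper's exact nondimensionalization:

```python
# Newton iteration sketch for a Bernoulli density relation: with speed
# q = m / rho (m a stream-function-derived mass flux), solve
#   F(rho) = B - q^2/2 - (g/(g-1)) * rho^(g-1) = 0
# starting from a subsonic guess, which selects the subsonic branch.
def bernoulli_density(m, B, g=1.4, rho0=1.0, tol=1e-12):
    rho = rho0
    for _ in range(100):
        q2 = (m / rho) ** 2
        F = B - 0.5 * q2 - (g / (g - 1.0)) * rho ** (g - 1.0)
        dF = q2 / rho - g * rho ** (g - 2.0)   # dF/drho
        rho_new = rho - F / dF
        if abs(rho_new - rho) < tol:
            return rho_new
        rho = rho_new
    return rho

rho = bernoulli_density(m=0.3, B=3.6)
q2 = (0.3 / rho) ** 2
# The converged value satisfies the Bernoulli relation.
assert abs(3.6 - 0.5 * q2 - (1.4 / 0.4) * rho ** 0.4) < 1e-9
```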

  14. Assessment of left ventricular function by electrocardiogram-gated myocardial single photon emission computed tomography using quantitative gated single photon emission computed tomography software

    Morita, Koichi; Adachi, Itaru; Konno, Masanori

    1999-01-01

    Electrocardiogram (ECG)-gated myocardial single photon emission computed tomography (SPECT) can easily assess left ventricular (LV) perfusion and function using quantitative gated SPECT (QGS) software. ECG-gated SPECT was performed in 44 patients with coronary artery disease under post-stress and resting conditions to assess LV functional parameters, in comparison with the LV ejection fraction derived from gated blood pool scans and with myocardial characteristics. A good correlation was obtained between the ejection fraction using QGS and that using the cardiac blood pool scan (r=0.812). Some patients with myocardial ischemia had a lower ejection fraction under post-stress than under resting conditions, indicating post-stress LV dysfunction. LV wall motion and wall thickening were significantly impaired in ischemic and infarcted myocardium, and the degree of abnormality in infarcted areas was greater than in ischemic areas. LV functional parameters derived using QGS were useful to assess post-stress LV dysfunction and myocardial viability. In conclusion, ECG-gated myocardial SPECT permits simultaneous quantitative assessment of myocardial perfusion and function. (author)

  15. Application of modified analytical function for approximation and computer simulation of diffraction profile

    Marrero, S. I.; Turibus, S. N.; Assis, J. T. De; Monin, V. I.

    2011-01-01

    Data processing of most diffraction experiments is based on determination of the diffraction line position and measurement of the broadening of the diffraction profile. High precision and automation of these procedures can be achieved by approximating experimental diffraction profiles with analytical functions. Various functions exist for this purpose: simple ones, like the Gauss function, which are not suitable for a wide range of experimental profiles, and good approximating functions that are complicated for practical use, like the Voigt or PearsonVII functions. The proposed analytical function is a modified Cauchy function with two variable parameters, allowing it to describe any experimental diffraction profile. In the present paper the modified function was applied to the approximation of diffraction lines of steels after various physical and mechanical treatments, and to the simulation of diffraction profiles used in the study of stress gradients and distortions of crystal structure. (Author)
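The fitting step the abstract describes can be sketched as below. The exact form of the authors' modified Cauchy function is not given here, so the expression used (a Cauchy/Lorentz profile whose denominator is raised to a variable exponent, with width w and exponent m as the two shape parameters) is an assumption, as are the synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_cauchy(x, i0, x0, w, m):
    """Assumed profile shape: a Cauchy (Lorentz) function raised to a
    variable exponent m; m = 1 recovers the ordinary Cauchy profile.
    The paper's exact functional form may differ."""
    return i0 / (1.0 + ((x - x0) / w) ** 2) ** m

# Synthetic diffraction line (hypothetical): peak near 2-theta = 44.5 degrees.
rng = np.random.default_rng(0)
two_theta = np.linspace(43.0, 46.0, 200)
counts = modified_cauchy(two_theta, 1000.0, 44.5, 0.15, 1.6)
counts += rng.normal(0.0, 5.0, two_theta.size)

# Least-squares approximation recovers the line position (x0) and the
# breadth parameters (w, m) used for broadening analysis.
popt, _ = curve_fit(modified_cauchy, two_theta, counts,
                    p0=[900.0, 44.4, 0.2, 1.0])
```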

  16. Development of an item bank and computer adaptive test for role functioning

    Anatchkova, Milena D; Rose, Matthias; Ware, John E

    2012-01-01

    Role functioning (RF) is a key component of health and well-being and an important outcome in health research. The aim of this study was to develop an item bank to measure the impact of health on role functioning.

  17. Short-term forecasting of meteorological time series using Nonparametric Functional Data Analysis (NPFDA)

    Curceac, S.; Ternynck, C.; Ouarda, T.

    2015-12-01

    Over the past decades, a substantial amount of research has been conducted to model and forecast climatic variables. In this study, Nonparametric Functional Data Analysis (NPFDA) methods are applied to forecast air temperature and wind speed time series in Abu Dhabi, UAE. The dataset consists of hourly measurements recorded for a period of 29 years, 1982-2010. The novelty of the Functional Data Analysis approach is in expressing the data as curves. In the present work, the focus is on daily forecasting, and the functional observations (curves) express the daily measurements of the above-mentioned variables. We apply a non-linear regression model with a functional non-parametric kernel estimator. The computation of the estimator is performed using an asymmetrical quadratic kernel function for local weighting, with the bandwidth obtained by a cross-validation procedure. The proximities between functional objects are calculated by families of semi-metrics based on derivatives and on Functional Principal Component Analysis (FPCA). Additionally, functional conditional mode and functional conditional median estimators are applied and the advantages of combining their results are analysed. A competing approach employs a SARIMA model selected according to the minimum Akaike (AIC) and Bayesian (BIC) information criteria and diagnostic checks of the model residuals. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE), relative RMSE, BIAS and relative BIAS. The results indicate that the NPFDA models provide more accurate forecasts than the SARIMA models. Key words: Nonparametric functional data analysis, SARIMA, time series forecast, air temperature, wind speed
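The functional kernel regression step can be illustrated with a minimal sketch: curves are compared through a semi-metric (a plain L2 distance here stands in for the derivative- and FPCA-based semi-metrics of the study), and the forecast is a kernel-weighted average of the responses that followed similar historical curves. All names and the toy data are illustrative:

```python
import numpy as np

def quadratic_kernel(u):
    """Asymmetrical quadratic kernel: positive weight only for u in [0, 1]."""
    return np.where((u >= 0.0) & (u <= 1.0), 1.0 - u ** 2, 0.0)

def l2_semimetric(c1, c2):
    """Plain L2 distance between two discretised daily curves; a simple
    stand-in for the derivative/FPCA-based semi-metrics of the paper."""
    return float(np.sqrt(np.mean((np.asarray(c1) - np.asarray(c2)) ** 2)))

def functional_nw_forecast(past_curves, responses, new_curve, h):
    """Functional Nadaraya-Watson kernel estimator: the forecast for
    new_curve is a kernel-weighted average of past responses, with
    bandwidth h (chosen by cross-validation in the paper)."""
    d = np.array([l2_semimetric(c, new_curve) for c in past_curves])
    w = quadratic_kernel(d / h)
    if w.sum() == 0.0:
        raise ValueError("bandwidth h too small: no historical curve within h")
    return float(np.dot(w, np.asarray(responses)) / w.sum())
```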

  18. Evaluating the Appropriateness of a New Computer-Administered Measure of Adaptive Function for Children and Youth with Autism Spectrum Disorders

    Coster, Wendy J.; Kramer, Jessica M.; Tian, Feng; Dooley, Meghan; Liljenquist, Kendra; Kao, Ying-Chia; Ni, Pengsheng

    2016-01-01

    The Pediatric Evaluation of Disability Inventory-Computer Adaptive Test is an alternative method for describing the adaptive function of children and youth with disabilities using a computer-administered assessment. This study evaluated the performance of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test with a national…

  19. Estimating a Smooth Common Transfer Function with a Panel of Time Series - Inflow of Larvae Cod as an Example

    Elizabeth Hansen

    2012-07-01

    Full Text Available The annual response variable in an ecological monitoring study often relates linearly to the weighted cumulative effect of some daily covariate, after adjusting for other annual covariates. Here we consider the problem of non-parametrically estimating the weights involved in computing the aforementioned cumulative effect, with a panel of short and contemporaneously correlated time series whose responses share the common cumulative effect of a daily covariate. The sequence of (unknown) daily weights constitutes the so-called transfer function. Specifically, we consider the problem of estimating a smooth common transfer function shared by a panel of short time series that are contemporaneously correlated. We propose an estimation scheme using a likelihood approach that penalizes the roughness of the common transfer function. We illustrate the proposed method with a simulation study and a biological example of indirectly estimating the spawning date distribution of North Sea cod.
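A least-squares analogue of this penalised-likelihood scheme can be sketched as a distributed-lag regression with a second-difference roughness penalty on the daily weights (the paper's actual likelihood also handles the panel correlation structure, which this sketch omits):

```python
import numpy as np

def smooth_transfer_function(X, y, lam):
    """Penalised least-squares estimate of daily transfer-function weights
    beta in y = X beta + noise, where X has one row per annual response and
    one column per daily lag of the covariate. Roughness is penalised via
    squared second differences of beta, mimicking the penalised-likelihood
    idea of the paper."""
    n_lags = X.shape[1]
    D = np.zeros((n_lags - 2, n_lags))  # second-difference operator
    for i in range(n_lags - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = X.T @ X + lam * (D.T @ D)
    return np.linalg.solve(A, X.T @ y)
```

Larger values of the penalty weight lam trade fidelity for smoothness of the estimated transfer function.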

  20. A Karaoke System with Real-Time Media Merging and Sharing Functions for a Cloud-Computing-Integrated Mobile Device

    Her-Tyan Yeh

    2013-01-01

    Full Text Available Mobile devices such as personal digital assistants (PDAs), smartphones, and tablets have increased in popularity and are extremely efficient for work-related, social, and entertainment uses. Popular entertainment services have also attracted substantial attention. Thus, relevant industries have devoted considerable effort to establishing methods by which mobile devices can deliver excellent and convenient entertainment services. Because cloud-computing technology is mature and possesses strong processing capacity, integrating it into the entertainment functions of mobile devices can reduce the data load on the system and maintain mobile device performance. This study combines cloud computing with a mobile device to design a karaoke system that contains real-time media merging and sharing functions. This system enables users to download music videos (MVs) to their mobile device and to sing and record their singing using the device. They can upload the recorded song to the cloud server, where it is merged with the media in real time. Subsequently, by employing media streaming technology, users can store their personal MVs on their mobile device or computer and instantaneously share these videos with others on the Internet. Through this process, people can instantly watch shared videos, enjoy the leisure and entertainment effects of mobile devices, and satisfy their desire for singing.

  1. Fast Computation of Solvation Free Energies with Molecular Density Functional Theory: Thermodynamic-Ensemble Partial Molar Volume Corrections.

    Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel

    2014-06-05

    Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate molecular solvation free energies while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free energies computed for a data set of 500 organic compounds are of similar quality as those obtained from molecular dynamics free-energy perturbation simulations, with a computational cost reduced by 2-3 orders of magnitude. This requires introducing the proper partial molar volume correction to transform the results from the grand canonical to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to empirical partial molar volume corrections that have been proposed recently.

  2. Symbolic computation of exact solutions expressible in rational formal hyperbolic and elliptic functions for nonlinear partial differential equations

    Wang Qi; Chen Yong

    2007-01-01

    With the aid of symbolic computation, some algorithms are presented for the rational expansion methods, which lead to closed-form solutions of nonlinear partial differential equations (PDEs). The new algorithms are given to find exact rational formal polynomial solutions of PDEs in terms of Jacobi elliptic functions, solutions of the Riccati equation, and solutions of the generalized Riccati equation. They can be implemented in the symbolic computation system Maple. As applications, we choose some nonlinear PDEs to illustrate the methods. As a result, we not only successfully obtain the solutions found by most existing Jacobi elliptic function methods and Tanh-methods, but also find other new and more general solutions at the same time.

  3. Hypertensive disease and renal hypertensions: renal structural and functional studies by using dynamic computed tomography

    Arabidze, G.G.; Pogrebnaya, G.N.; Todua, F.I.; Sokolova, R.I.; Kozdoba, O.A.

    1989-01-01

    Dynamic computed tomography was conducted by an original method; the findings were analyzed by taking into account time-density curves, which made it possible to gain an insight into the status of blood flow and filtration in each individual kidney. Computed tomography and dynamic computed tomography revealed that hypertensive disease was characterized by normal volume and thickness of the renal cortical layer and symmetric time-density curves, whereas the hypertensive type of chronic glomerulonephritis featured lower renal cortical layer thickness, reduced renal volume, and symmetrically decreased amplitudes of the first and second peaks of the time-density curve; chronic pyelonephritis showed asymmetric time-density diagrams due to lower-density areas in the afflicted kidney.

  4. Computer analysis of protein functional sites projection on exon structure of genes in Metazoa.

    Medvedeva, Irina V; Demenkov, Pavel S; Ivanisenko, Vladimir A

    2015-01-01

    Study of the relationship between the structural and functional organization of proteins and their coding genes is necessary for an understanding of the evolution of molecular systems and can provide new knowledge for many applications, such as designing proteins with improved medical and biological properties. It is well known that the functional properties of proteins are determined by their functional sites. Functional sites are usually represented by a small number of amino acid residues that are distantly located from each other in the amino acid sequence. They are highly conserved within their functional group and vary significantly in structure between such groups. Given these facts, analysis of the general properties of the structural organization of the functional sites, both at the protein level and at the level of the exon-intron structure of the coding gene, remains an open problem. One approach to this analysis is the projection of the amino acid residue positions of the functional sites, along with the exon boundaries, onto the gene structure. In this paper, we examined the discontinuity of the functional sites in the exon-intron structure of genes and the distribution of lengths and phases of the functional-site-encoding exons in vertebrate genes. We have shown that the DNA fragments coding the functional sites lie in the same exon or in closely spaced exons. This observed tendency of the exons that code functional sites to cluster suggests that they could be considered units of protein evolution. We studied the characteristics of the structure of the exon boundaries that code, and do not code, functional sites in 11 Metazoa species. Clustering is accompanied by a reduced frequency of intercodon gaps (phase 0) in exons encoding functional-site residues, which may be evidence of evolutionary limitations on exon shuffling.
These results characterize the features of the coding exon-intron structure that affect the functionality of the encoded protein and

  5. Computer Simulation Tests of Feedback Error Learning Controller with IDM and ISM for Functional Electrical Stimulation in Wrist Joint Control

    Watanabe, Takashi; Sugi, Yoshihiro

    2010-01-01

    A feedforward controller would be useful for hybrid Functional Electrical Stimulation (FES) systems using powered orthotic devices. In this paper, a Feedback Error Learning (FEL) controller for FES (FEL-FES controller) was examined using an inverse statics model (ISM) together with an inverse dynamics model (IDM) to realize a feedforward FES controller. For FES application, the ISM was tested in offline learning using training data obtained by PID control of very slow movements. Computer simulation tests ...

  6. Geometric representation of the mean-variance-skewness portfolio frontier based upon the shortage function

    Kerstens, Kristiaan; Mounier, Amine; Van de Woestyne, Ignace

    2008-01-01

    The literature suggests that investors prefer portfolios based on mean, variance and skewness rather than portfolios based on mean-variance (MV) criteria solely. Furthermore, a small variety of methods have been proposed to determine mean-variance-skewness (MVS) optimal portfolios. Recently, the shortage function has been introduced as a measure of efficiency, allowing one to characterize MVS optimal portfolios using non-parametric mathematical programming tools. While tracing the MV portfolio fro...

  7. Identification of Functional Clusters in the Striatum Using Infinite Relational Modeling

    Andersen, Kasper Winther; Madsen, Kristoffer Hougaard; Siebner, Hartwig

    2011-01-01

    In this paper we investigate how the Infinite Relational Model can be used to infer functional groupings of the human striatum using resting-state fMRI data from 30 healthy subjects. The Infinite Relational Model is a non-parametric Bayesian method for inferring community structure in complex netw...... and non-links in the graphs as missing. We find that the model performs well above chance for all subjects.

  8. Computationally simple, analytic, closed form solution of the Coulomb self-interaction problem in Kohn Sham density functional theory

    Gonis, Antonios; Daene, Markus W.; Nicholson, Don M.; Stocks, George Malcolm

    2012-01-01

    We have developed and tested in terms of atomic calculations an exact, analytic and computationally simple procedure for determining the functional derivative of the exchange energy with respect to the density in the implementation of the Kohn Sham formulation of density functional theory (KS-DFT), providing an analytic, closed-form solution of the self-interaction problem in KS-DFT. We demonstrate the efficacy of our method through ground-state calculations of the exchange potential and energy for atomic He and Be atoms, and comparisons with experiment and the results obtained within the optimized effective potential (OEP) method.

  9. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function involved in this convolution integral is performed analytically using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage, with no penalty, compared with the numerical method that uses the fast Fourier transform to transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
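The paper's specific kernel expansion is not reproduced here; as an illustration of the technique class, the classical Jacobi-Anger identity shows how an oscillatory cylindrical kernel reduces to a rapidly converging Bessel series that can be evaluated term by term instead of via FFT:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_n

def plane_wave_bessel_series(z, theta, n_terms=30):
    """Truncated Jacobi-Anger expansion
    exp(i z sin(theta)) = sum_{n=-inf}^{inf} J_n(z) exp(i n theta),
    an example of a Bessel-series identity of the kind used to transform
    cylindrical kernels analytically."""
    n = np.arange(-n_terms, n_terms + 1)
    return complex(np.sum(jv(n, z) * np.exp(1j * n * theta)))
```

Because J_n(z) decays super-exponentially once |n| exceeds |z|, a modest truncation already reproduces the kernel to near machine precision.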

  10. Maximize Minimum Utility Function of Fractional Cloud Computing System Based on Search Algorithm Utilizing the Mittag-Leffler Sum

    Rabha W. Ibrahim

    2018-01-01

    Full Text Available The maximum-minimum utility function (MMUF) problem is an important representative of a large class of cloud computing system (CCS) problems, with numerous applications in practice, especially in economics and industry. This paper introduces an effective solution-based search (SBS) algorithm for solving the MMUF problem. First, we suggest a new formula for the utility function in terms of the capacity of the cloud. We formulate the capacity in CCS using a fractional diffeo-integral equation, which describes the flow of the CCS. The new formula of the utility function modifies recent utility functions. The suggested technique first creates a high-quality initial solution by eliminating the less promising components, and then improves the quality of the achieved solution by a summation search solution (SSS). This method uses the Mittag-Leffler sum as a hash function to determine the position of the agent. Experimental results on instances commonly utilized in the literature demonstrate that the proposed algorithm competes favorably with the state-of-the-art algorithms, both in terms of solution quality and computational efficiency.
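The Mittag-Leffler sum invoked by the algorithm can be evaluated directly from the function's defining series E_{α,β}(z) = Σ_k z^k / Γ(αk + β); the sketch below is a minimal implementation, and the truncation strategy is an implementation choice rather than the paper's:

```python
import math

def mittag_leffler(z, alpha, beta=1.0, tol=1e-15):
    """Truncated defining series of the Mittag-Leffler function
    E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta).
    Terms are added until they fall below tol (or overflow the Gamma
    evaluation, at which point remaining terms are negligible)."""
    total, k = 0.0, 0
    while True:
        try:
            term = z ** k / math.gamma(alpha * k + beta)
        except OverflowError:
            break  # argument too large: stop summing
        total += term
        if k > 0 and abs(term) < tol:
            break
        k += 1
    return total
```

Special cases provide easy checks: E_{1,1}(z) = exp(z) and E_{2,1}(z²) = cosh(z).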

  11. Can people with Alzheimer's disease improve their day-to-day functioning with a tablet computer?

    Imbeault, Hélène; Langlois, Francis; Bocti, Christian; Gagnon, Lise; Bier, Nathalie

    2018-07-01

    New technologies, such as tablet computers, present great potential to support the day-to-day living of persons with Alzheimer's disease (AD). However, whether people with AD can learn how to use a tablet properly in daily life remains to be demonstrated. A single case study was conducted with a 65-year-old woman with AD. A specific and structured intervention tailored to her needs was conceptualised for the use of a calendar application on a tablet computer according to the following learning stages: Acquisition, Application and Adaptation. In spite of her severe episodic memory deficit, she showed progressive learning of the tablet application during the intervention phase. Furthermore, data compiled over 12 months post-use show that she used the tablet successfully in her day-to-day life. She was even able to transfer her newly acquired ability to other available applications designed to monitor regular purchases, consult various recipes and play games. Tablet computers thereby offer a promising avenue for cognitive rehabilitation for persons with AD. This success was mainly achieved through a one-on-one individual programme tailored to this person. The limits and constraints of utilising tablet computers for persons with AD are discussed.

  12. Computers, Mass Media, and Schooling: Functional Equivalence in Uses of New Media.

    Lieberman, Debra A.; And Others

    1988-01-01

    Presents a study of 156 California eighth grade students which contrasted their recreational and intellectual computer use in terms of academic performance and use of other media. Among the conclusions were that recreational users watched television heavily and performed poorly in school, whereas intellectual users watched less television,…

  13. Computational models for interpretation of wave function imaging in cross-sectional STM of quantum dots

    Maksym, P.A.; Roy, M.; Wijnheijmer, A.P.; Koenraad, P.M.

    2008-01-01

    Computational models are used to investigate the role of electron-electron interactions in cross-sectional STM of cleaved quantum dots. If correlation effects are weak, the tunnelling current reflects the nodal structure of the non-interacting dot states. If correlation is strong, peaks in the

  14. Attention and executive functions computer training for attention-deficit/hyperactivity disorder (ADHD)

    Bikic, Aida; Leckman, James F; Christensen, Torben Ø

    2018-01-01

    and both groups received treatment as usual and were assessed in regard to cognitive functions, symptoms, behavioral and functional outcome measures after 8, 12 and 24 weeks. There was no significant effect on the primary outcome, sustained attention (β = - 0.047; CI - 0.247 to 0.153) or the secondary...

  15. A hybrid method for the parallel computation of Green’s functions

    Petersen, Dan Erik; Li, Song; Stokbro, Kurt

    2009-01-01

    Quantum transport models for nanodevices using the non-equilibrium Green’s function method require the repeated calculation of the block tridiagonal part of the Green’s and lesser Green’s function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of t...

  16. Computation of piecewise affine terminal cost functions for model predictive control

    Brunner, F.D.; Lazar, M.; Allgöwer, F.; Fränzle, Martin; Lygeros, John

    2014-01-01

    This paper proposes a method for the construction of piecewise affine terminal cost functions for model predictive control (MPC). The terminal cost function is constructed on a predefined partition by solving a linear program for a given piecewise affine system, a stabilizing piecewise affine

  17. Applying Computational Scoring Functions to Assess Biomolecular Interactions in Food Science: Applications to the Estrogen Receptors

    Francesca Spyrakis

    2016-10-01

    Thus, key computational medicinal chemistry methods like molecular dynamics can be used to decipher protein flexibility and to obtain stable models for docking and scoring in food-related studies, and virtual screening is increasingly being applied to identify molecules with potential to act as endocrine disruptors, food mycotoxins, and new nutraceuticals [3,4,5]. All of these methods and simulations are based on protein-ligand interaction phenomena, and represent the basis for any subsequent modification of the targeted receptor's or enzyme's physiological activity. We describe here the energetics of binding of biological complexes, providing a survey of the most common and successful algorithms used in evaluating these energetics, and we report case studies in which computational techniques have been applied to food science issues. In particular, we explore a handful of studies involving the estrogen receptors for which we have a long-term interest.

  18. How to maintain hundreds of computers offering different functionalities with only 2 system administrators

    Krempaska, R.; Bertrand, A.; Higgs, C.; Kapeller, R.; Lutz, H.; Provenzano, M.

    2012-01-01

    At the Paul Scherrer Institute, the control systems of our large research facilities are maintained by the Controls section. These facilities include two proton accelerators (HIPA and PROSCAN) and two electron accelerators (SLS and the Injector Test Facility of the future SwissFEL), as well as the control systems of all their related beamlines and test facilities. The control system configuration and applications for each facility are stored on independent NFS file servers. The total number of Linux computers and servers is about 500. Since only two system administrators are responsible for their installation, configuration and maintenance, we have adopted a well-defined solution that relies on three ideas: virtualization; a unified operating system installation and update mechanism; and automatic configuration by a common tool (puppet). This paper describes the methods and tools which are used to develop and maintain the challenging computing infrastructure deployed by the Controls section.

  19. Using speech recognition to enhance the Tongue Drive System functionality in computer access.

    Huo, Xueliang; Ghovanloo, Maysam

    2011-01-01

    Tongue Drive System (TDS) is a wireless tongue operated assistive technology (AT), which can enable people with severe physical disabilities to access computers and drive powered wheelchairs using their volitional tongue movements. TDS offers six discrete commands, simultaneously available to the users, for pointing and typing as a substitute for mouse and keyboard in computer access, respectively. To enhance the TDS performance in typing, we have added a microphone, an audio codec, and a wireless audio link to its readily available 3-axial magnetic sensor array, and combined it with a commercially available speech recognition software, the Dragon Naturally Speaking, which is regarded as one of the most efficient ways for text entry. Our preliminary evaluations indicate that the combined TDS and speech recognition technologies can provide end users with significantly higher performance than using each technology alone, particularly in completing tasks that require both pointing and text entry, such as web surfing.

  20. Metaanalysis of Diagnostic Performance of Computed Coronary Tomography Angiography, Computed Tomography Perfusion and Computed Tomography-Fractional Flow Reserve in Functional Myocardial Ischemia Assessment versus Invasive Fractional Flow Reserve

    Gonzalez, Jorge A.; Lipinski, Michael J.; Flors, Lucia F.; Shaw, Peter; Kramer, Christopher M.; Salerno, Michael

    2015-01-01

    We sought to compare the diagnostic performance of coronary computed tomography angiography (CCTA), computed tomography perfusion (CTP), and computed tomography fractional flow reserve (CT-FFR) for assessing the functional significance of coronary stenosis as defined by invasive fractional flow reserve (FFR) in patients with known or suspected coronary artery disease. CCTA has proven clinically useful for excluding obstructive CAD due to its high sensitivity and negative predictive value (NPV); however, the ability of CCTA to identify functionally significant CAD has remained challenging. We searched PubMed/Medline for studies evaluating CCTA, CTP, or CT-FFR for the non-invasive detection of obstructive CAD as compared to catheter-derived FFR as the reference standard. Pooled sensitivity, specificity, PPV, NPV, likelihood ratios (LR), and odds ratios (OR) of all diagnostic tests were assessed. Eighteen studies involving a total of 1535 patients were included. CCTA demonstrated a pooled sensitivity of 0.92, specificity of 0.43, PPV of 0.56, and NPV of 0.87 on a per-patient level. CT-FFR and CTP increased the specificity to 0.72 and 0.77, respectively (P=0.004 and P=0.0009), resulting in higher point estimates for PPV: 0.70 and 0.83, respectively. There was no improvement in sensitivity. The CTP protocol involved more radiation (3.5 mSv for CCTA vs 9.6 mSv for CTP) and a higher volume of iodinated contrast (145 mL). In conclusion, CTP and CT-FFR improve the specificity of CCTA for detecting functionally significant stenosis as defined by invasive FFR on a per-patient level; both techniques could advance the ability to non-invasively detect the functional significance of coronary lesions. PMID:26347004
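The per-study quantities that enter such a pooling are computed from each study's 2x2 table against the FFR reference standard; a minimal sketch, with hypothetical counts chosen so that the example reproduces the reported CCTA sensitivity of 0.92 and specificity of 0.43:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Diagnostic accuracy measures from a 2x2 table (true/false positives
    and negatives against the reference standard): sensitivity, specificity,
    predictive values, and likelihood ratios."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sens": sens,
        "spec": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr+": sens / (1.0 - spec),   # positive likelihood ratio
        "lr-": (1.0 - sens) / spec,   # negative likelihood ratio
    }

# Hypothetical single-study counts (not from the meta-analysis itself):
m = diagnostic_metrics(tp=46, fp=57, fn=4, tn=43)
```

Note that PPV and NPV depend on disease prevalence in each study, which is why pooled predictive values can differ from any single study's 2x2 table.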