WorldWideScience

Sample records for selected time variable

  1. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.
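
    As a rough illustration of the shrinkage-and-selection idea behind the record above (a generic LASSO on simulated linear-regression data, not the authors' mixture or promotion-time cure-model estimators), a minimal sketch assuming scikit-learn is available might look like this:

```python
# Minimal sketch of LASSO-based variable selection on simulated data.
# Generic illustration of the L1-shrinkage idea only; it is not the
# cure-model methodology proposed in the record above.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [1.5, -2.0, 1.0]          # only the first three covariates matter
y = X @ beta + rng.normal(scale=1.0, size=n)

# Cross-validated LASSO: the L1 penalty shrinks irrelevant coefficients to zero.
model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_ != 0)
print("selected covariates:", selected)
```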

  2. THE TIME DOMAIN SPECTROSCOPIC SURVEY: VARIABLE SELECTION AND ANTICIPATED RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, Eric; Green, Paul J. [Harvard Smithsonian Center for Astrophysics, 60 Garden St, Cambridge, MA 02138 (United States); Anderson, Scott F.; Ruan, John J. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Eracleous, Michael; Brandt, William Nielsen [Department of Astronomy and Astrophysics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802 (United States); Kelly, Brandon [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Badenes, Carlos [Department of Physics and Astronomy and Pittsburgh Particle Physics, Astrophysics and Cosmology Center (PITT PACC), University of Pittsburgh, 3941 O’Hara St, Pittsburgh, PA 15260 (United States); Bañados, Eduardo [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Blanton, Michael R. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Bershady, Matthew A. [Department of Astronomy, University of Wisconsin, 475 N. Charter St., Madison, WI 53706 (United States); Borissova, Jura [Instituto de Física y Astronomía, Universidad de Valparaíso, Av. Gran Bretaña 1111, Playa Ancha, Casilla 5030, and Millennium Institute of Astrophysics (MAS), Santiago (Chile); Burgett, William S. [GMTO Corp, Suite 300, 251 S. Lake Ave, Pasadena, CA 91101 (United States); Chambers, Kenneth, E-mail: emorganson@cfa.harvard.edu [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); and others

    2015-06-20

    We present the selection algorithm and anticipated results for the Time Domain Spectroscopic Survey (TDSS). TDSS is a Sloan Digital Sky Survey (SDSS)-IV Extended Baryon Oscillation Spectroscopic Survey (eBOSS) subproject that will provide initial identification spectra of approximately 220,000 luminosity-variable objects (variable stars and active galactic nuclei) across 7500 deg² selected from a combination of SDSS and multi-epoch Pan-STARRS1 photometry. TDSS will be the largest spectroscopic survey to explicitly target variable objects, avoiding pre-selection on the basis of colors or detailed modeling of specific variability characteristics. Kernel Density Estimate analysis of our target population performed on SDSS Stripe 82 data suggests our target sample will be 95% pure (meaning 95% of objects we select have genuine luminosity variability of a few magnitudes or more). Our final spectroscopic sample will contain roughly 135,000 quasars and 85,000 stellar variables, approximately 4000 of which will be RR Lyrae stars which may be used as outer Milky Way probes. The variability-selected quasar population has a smoother redshift distribution than a color-selected sample, and variability measurements similar to those we develop here may be used to make more uniform quasar samples in large surveys. The stellar variable targets are distributed fairly uniformly across color space, indicating that TDSS will obtain spectra for a wide variety of stellar variables including pulsating variables, stars with significant chromospheric activity, cataclysmic variables, and eclipsing binaries. TDSS will serve as a pathfinder mission to identify and characterize the multitude of variable objects that will be detected photometrically in even larger variability surveys such as the Large Synoptic Survey Telescope.
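
    As a hedged illustration of the kind of kernel-density purity estimate mentioned above (a one-dimensional toy variability statistic with invented distributions, not the actual SDSS/Pan-STARRS1 photometric pipeline), one might sketch:

```python
# Sketch: estimating the purity of a variability-selected sample with Gaussian
# kernel density estimates over a single toy variability statistic. The survey
# itself works from multi-epoch, multi-band photometry; numbers here are made up.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
variables = rng.normal(loc=0.4, scale=0.15, size=2_000)       # true variables
contaminants = rng.normal(loc=0.05, scale=0.05, size=20_000)  # non-variable sources

kde_var = gaussian_kde(variables)
kde_con = gaussian_kde(contaminants)

cut = 0.25   # select every source with a variability statistic above this value
n_var = len(variables) * kde_var.integrate_box_1d(cut, np.inf)
n_con = len(contaminants) * kde_con.integrate_box_1d(cut, np.inf)
print(f"estimated purity above the cut: {n_var / (n_var + n_con):.1%}")
```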

  3. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset, based on the ordering of the data, as the research dataset. The proposed time-series forecasting model has three main foci. First, this study uses five imputation methods to estimate the missing values. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, and its forecasting error is compared with that of the listed methods. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, this experiment shows that the proposed variable selection can help the five forecasting methods used here to improve their forecasting capability.
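
    A minimal sketch of an imputation, variable selection, and random forest pipeline of the kind described above is given below. It runs on synthetic daily data with invented feature names, uses mean imputation and a univariate filter as simple stand-ins for the paper's five imputation methods and factor-analysis-based selection, and is not the Shimen Reservoir analysis itself.

```python
# Sketch of an imputation -> variable selection -> random forest pipeline on
# synthetic daily data. Feature names and series are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "rainfall": rng.gamma(2.0, 5.0, n),
    "inflow": rng.gamma(3.0, 4.0, n),
    "temperature": 20 + 8 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(0, 1, n),
    "noise_var": rng.normal(size=n),
})
df["water_level"] = 240 + 0.05 * df["inflow"] + 0.02 * df["rainfall"] + rng.normal(0, 0.5, n)

X, y = df.drop(columns="water_level"), df["water_level"]
X = X.mask(rng.random(X.shape) < 0.05)            # blank out ~5% of predictor entries

# Step 1: impute missing entries (mean imputation as one of several options).
X_imp = pd.DataFrame(SimpleImputer(strategy="mean").fit_transform(X), columns=X.columns)

# Step 2: keep the predictors most associated with the target (a simple
# stand-in for the paper's factor-analysis-based selection step).
support = SelectKBest(f_regression, k=2).fit(X_imp, y).get_support()
X_sel = X_imp.loc[:, support]

# Step 3: random forest forecast, evaluated on the most recent 20% of days.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, shuffle=False)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test MAE:", mean_absolute_error(y_te, rf.predict(X_te)))
```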

  4. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset, based on the ordering of the data, as the research dataset. The proposed time-series forecasting model has three main foci. First, this study uses five imputation methods to estimate the missing values. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, and its forecasting error is compared with that of the listed methods. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, this experiment shows that the proposed variable selection can help the five forecasting methods used here to improve their forecasting capability.

  5. SELECTING QUASARS BY THEIR INTRINSIC VARIABILITY

    International Nuclear Information System (INIS)

    Schmidt, Kasper B.; Rix, Hans-Walter; Jester, Sebastian; Hennawi, Joseph F.; Marshall, Philip J.; Dobler, Gregory

    2010-01-01

    We present a new and simple technique for selecting extensive, complete, and pure quasar samples, based on their intrinsic variability. We parameterize the single-band variability by a power-law model for the light-curve structure function, with amplitude A and power-law index γ. We show that quasars can be efficiently separated from other non-variable and variable sources by the location of the individual sources in the A-γ plane. We use ∼60 epochs of imaging data, taken over ∼5 years, from the SDSS stripe 82 (S82) survey, where extensive spectroscopy provides a reference sample of quasars, to demonstrate the power of variability as a quasar classifier in multi-epoch surveys. For UV-excess selected objects, variability performs just as well as the standard SDSS color selection, identifying quasars with a completeness of 90% and a purity of 95%. In the redshift range 2.5 < z < 3, where color selection is known to be problematic, variability can select quasars with a completeness of 90% and a purity of 96%. This is a factor of 5-10 times more pure than existing color selection of quasars in this redshift range. Selecting objects from a broad griz color box without u-band information, variability selection in S82 can afford completeness and purity of 92%, despite a factor of 30 more contaminants than quasars in the color-selected feeder sample. This confirms that the fraction of quasars hidden in the 'stellar locus' of color space is small. To test variability selection in the context of Pan-STARRS 1 (PS1) we created mock PS1 data by down-sampling the S82 data to just six epochs over 3 years. Even with this much sparser time sampling, variability is an encouragingly efficient classifier. For instance, a 92% pure and 44% complete quasar candidate sample is attainable from the above griz-selected catalog. Finally, we show that the presented A-γ technique, besides selecting clean and pure samples of quasars (which are stochastically varying objects), is also
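
    As a hedged sketch of the structure-function parameterization used above (a single simulated light curve, a simple binned estimator, and a log-log power-law fit for the amplitude A and index γ; the selection cuts in the A-γ plane and the actual S82 photometry are not reproduced here):

```python
# Sketch: estimate the structure-function amplitude A and power-law index gamma
# for one simulated light curve. The quasar/star separation in the record above
# is then a cut in the A-gamma plane; all numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 5 * 365.25, 60))        # ~60 epochs over ~5 years (days)
# Toy quasar-like light curve: a smooth random walk plus photometric noise.
mag = np.cumsum(rng.normal(0, 0.02, t.size)) + rng.normal(0, 0.02, t.size)

# All pairwise epoch differences and magnitude differences.
i, j = np.triu_indices(t.size, k=1)
dt = (t[j] - t[i]) / 365.25                        # time lags in years
dm = np.abs(mag[j] - mag[i])

# Bin the structure function and fit SF(dt) = A * dt**gamma in log-log space.
bins = np.logspace(np.log10(dt.min()), np.log10(dt.max()), 10)
idx = np.digitize(dt, bins)
sf_dt, sf = [], []
for b in range(1, len(bins)):
    sel = idx == b
    if sel.sum() > 5:
        sf_dt.append(dt[sel].mean())
        sf.append(np.sqrt(np.pi / 2) * dm[sel].mean())   # one common SF estimator
gamma, logA = np.polyfit(np.log10(sf_dt), np.log10(sf), 1)
print(f"A ~ {10**logA:.3f} mag at 1 yr, gamma ~ {gamma:.2f}")
```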

  6. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    OpenAIRE

    Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting m...

  7. Selective attrition and intraindividual variability in response time moderate cognitive change.

    Science.gov (United States)

    Yao, Christie; Stawski, Robert S; Hultsch, David F; MacDonald, Stuart W S

    2016-01-01

    Selection of a developmental time metric is useful for understanding causal processes that underlie aging-related cognitive change and for the identification of potential moderators of cognitive decline. Building on research suggesting that time to attrition is a metric sensitive to non-normative influences of aging (e.g., subclinical health conditions), we examined reason for attrition and intraindividual variability (IIV) in reaction time as predictors of cognitive performance. Three hundred and four community dwelling older adults (64-92 years) completed annual assessments in a longitudinal study. IIV was calculated from baseline performance on reaction time tasks. Multilevel models were fit to examine patterns and predictors of cognitive change. We show that time to attrition was associated with cognitive decline. Greater IIV was associated with declines on executive functioning and episodic memory measures. Attrition due to personal health reasons was also associated with decreased executive functioning compared to that of individuals who remained in the study. These findings suggest that time to attrition is a useful metric for representing cognitive change, and reason for attrition and IIV are predictive of non-normative influences that may underlie instances of cognitive loss in older adults.

  8. Robust cluster analysis and variable selection

    CERN Document Server

    Ritter, Gunter

    2014-01-01

    Clustering remains a vibrant area of research in statistics. Although there are many books on this topic, there are relatively few that are well founded in the theoretical aspects. In Robust Cluster Analysis and Variable Selection, Gunter Ritter presents an overview of the theory and applications of probabilistic clustering and variable selection, synthesizing the key research results of the last 50 years. The author focuses on the robust clustering methods he found to be the most useful on simulated data and real-time applications. The book provides clear guidance for the varying needs of bot

  9. Variable Selection in Time Series Forecasting Using Random Forests

    Directory of Open Access Journals (Sweden)

    Hristos Tyralis

    2017-10-01

    Full Text Available Time series forecasting using machine learning algorithms has gained popularity recently. Random forest (RF) is a machine learning algorithm implemented in time series forecasting; however, most of its forecasting properties have remained unexplored. Here we focus on assessing the performance of random forests in one-step forecasting using two large datasets of short time series with the aim to suggest an optimal set of predictor variables. Furthermore, we compare its performance to benchmarking methods. The first dataset is composed of 16,000 simulated time series from a variety of Autoregressive Fractionally Integrated Moving Average (ARFIMA) models. The second dataset consists of 135 mean annual temperature time series. The highest predictive performance of RF is observed when using a low number of recent lagged predictor variables. This outcome could be useful in relevant future applications, with the prospect to achieve higher predictive accuracy.
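
    A minimal sketch of the setup described above, comparing different numbers of lagged predictors for one-step random forest forecasting on a toy AR(1) series (not the ARFIMA or annual-temperature datasets of the paper):

```python
# Sketch: one-step-ahead forecasting of a single series with a random forest,
# comparing different numbers of lagged predictor variables on a toy AR(1) series.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
n = 300
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal()

def lagged(series, n_lags):
    """Design matrix of the n_lags most recent values; the target is the next value."""
    X = np.column_stack([series[n_lags - k - 1: -k - 1] for k in range(n_lags)])
    return X, series[n_lags:]

for n_lags in (1, 3, 10):
    X, y = lagged(x, n_lags)
    split = int(0.8 * len(y))                      # keep the last 20% for testing
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X[:split], y[:split])
    mse = mean_squared_error(y[split:], rf.predict(X[split:]))
    print(f"{n_lags:2d} lag(s): test MSE = {mse:.3f}")
```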

  10. Penalized variable selection in competing risks regression.

    Science.gov (United States)

    Fu, Zhixuan; Parikh, Chirag R; Zhou, Bingqing

    2017-07-01

    Penalized variable selection methods have been extensively studied for standard time-to-event data. Such methods cannot be directly applied when subjects are at risk of multiple mutually exclusive events, known as competing risks. The proportional subdistribution hazard (PSH) model proposed by Fine and Gray (J Am Stat Assoc 94:496-509, 1999) has become a popular semi-parametric model for time-to-event data with competing risks. It allows for direct assessment of covariate effects on the cumulative incidence function. In this paper, we propose a general penalized variable selection strategy that simultaneously handles variable selection and parameter estimation in the PSH model. We rigorously establish the asymptotic properties of the proposed penalized estimators and modify the coordinate descent algorithm for implementation. Simulation studies are conducted to demonstrate the good performance of the proposed method. Data from deceased donor kidney transplants from the United Network of Organ Sharing illustrate the utility of the proposed method.

  11. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
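
    As a hedged sketch of the kind of multivariate wrapper search the benchmark contrasts with univariate filters (forward selection around a random forest on synthetic data; the study itself uses seven QSAR datasets and also MARS):

```python
# Sketch: a forward-selection wrapper around a random forest on synthetic data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SequentialFeatureSelector

X, y = make_regression(n_samples=300, n_features=30, n_informative=5,
                       noise=5.0, random_state=0)
rf = RandomForestRegressor(n_estimators=50, random_state=0)
# Greedily add one feature at a time, scoring each candidate set by cross-validation.
sfs = SequentialFeatureSelector(rf, n_features_to_select=5,
                                direction="forward", cv=3).fit(X, y)
print("selected feature indices:", np.flatnonzero(sfs.get_support()))
```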

  12. Variable Selection via Partial Correlation.

    Science.gov (United States)

    Li, Runze; Liu, Jingyuan; Lou, Lejia

    2017-07-01

    A partial correlation based variable selection method was proposed for normal linear regression models by Bühlmann, Kalisch and Maathuis (2010) as a comparable alternative to regularization methods for variable selection. This paper addresses two important issues related to partial correlation based variable selection: (a) whether this method is sensitive to the normality assumption, and (b) whether this method is valid when the dimension of the predictor increases at an exponential rate of the sample size. To address issue (a), we systematically study this method for elliptical linear regression models. Our finding indicates that the original proposal may lead to inferior performance when the marginal kurtosis of the predictor is not close to that of the normal distribution. Our simulation results further confirm this finding. To ensure the superior performance of partial correlation based variable selection, we propose a thresholded partial correlation (TPC) approach to select significant variables in linear regression models. We establish the selection consistency of the TPC in the presence of ultrahigh dimensional predictors. Since the TPC procedure includes the original proposal as a special case, our theoretical results address issue (b) directly. As a by-product, the sure screening property of the first step of TPC is obtained. The numerical examples also illustrate that the TPC is competitively comparable to the commonly used regularization methods for variable selection.
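
    A compact sketch of thresholded-partial-correlation-style screening is given below: partial correlations between each predictor and the response, given all other predictors, are read off the precision matrix of the joint correlation matrix, and variables whose absolute partial correlation exceeds a threshold are retained. The threshold here is illustrative, not the paper's theoretically calibrated choice.

```python
# Sketch: select variables whose partial correlation with y (given all other
# predictors) exceeds a fixed threshold.
import numpy as np

rng = np.random.default_rng(5)
n, p = 500, 15
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

Z = np.column_stack([X, y])
precision = np.linalg.inv(np.corrcoef(Z, rowvar=False))
# Partial correlation of variable i with y given the rest: -P[i, y] / sqrt(P[i, i] * P[y, y]).
d = np.sqrt(np.diag(precision))
partial = -precision[:-1, -1] / (d[:-1] * d[-1])

threshold = 0.1
print("selected variables:", np.flatnonzero(np.abs(partial) > threshold))
```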

  13. ENSEMBLE VARIABILITY OF NEAR-INFRARED-SELECTED ACTIVE GALACTIC NUCLEI

    International Nuclear Information System (INIS)

    Kouzuma, S.; Yamaoka, H.

    2012-01-01

    We present the properties of the ensemble variability V for nearly 5000 near-infrared active galactic nuclei (AGNs) selected from the catalog of Quasars and Active Galactic Nuclei (13th Edition) and the SDSS-DR7 quasar catalog. From three near-infrared point source catalogs, namely, the Two Micron All Sky Survey (2MASS), Deep Near Infrared Survey (DENIS), and UKIDSS/LAS catalogs, we extract 2MASS-DENIS and 2MASS-UKIDSS counterparts for cataloged AGNs by cross-identification between catalogs. We further select variable AGNs based on an optimal criterion for selecting the variable sources. The sample objects are divided into subsets according to whether the near-infrared light originates from optical emission or from near-infrared emission in the rest frame, and we examine the correlations of the ensemble variability with the rest-frame wavelength, redshift, luminosity, and rest-frame time lag. In addition, we also examine the correlations of variability amplitude with optical variability, radio intensity, and radio-to-optical flux ratio. The rest-frame optical variability of our samples shows negative correlations with luminosity and positive correlations with rest-frame time lag (i.e., the structure function, SF), and this result is consistent with previous analyses. However, no well-known negative correlation exists between the rest-frame wavelength and optical variability. This inconsistency might be due to a biased sampling of high-redshift AGNs. Near-infrared variability in the rest frame is anticorrelated with the rest-frame wavelength, which is consistent with previous suggestions. However, correlations of near-infrared variability with luminosity and rest-frame time lag are the opposite of these correlations of the optical variability; that is, the near-infrared variability is positively correlated with luminosity but negatively correlated with the rest-frame time lag. Because these trends are qualitatively consistent with the properties of radio-loud quasars reported

  14. Variable selection and estimation for longitudinal survey data

    KAUST Repository

    Wang, Li

    2014-09-01

    There is wide interest in studying longitudinal surveys where sample subjects are observed successively over time. Longitudinal surveys have been used in many areas today, for example, in the health and social sciences, to explore relationships or to identify significant variables in regression settings. This paper develops a general strategy for the model selection problem in longitudinal sample surveys. A survey weighted penalized estimating equation approach is proposed to select significant variables and estimate the coefficients simultaneously. The proposed estimators are design consistent and perform as well as the oracle procedure when the correct submodel is known. The estimating function bootstrap is applied to obtain the standard errors of the estimated parameters with good accuracy. A fast and efficient variable selection algorithm is developed to identify significant variables for complex longitudinal survey data. Simulated examples illustrate the usefulness of the proposed methodology under various model settings and sampling designs. © 2014 Elsevier Inc.

  15. Physical attraction to reliable, low variability nervous systems: Reaction time variability predicts attractiveness.

    Science.gov (United States)

    Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard

    2017-01-01

    The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. A numeric comparison of variable selection algorithms for supervised learning

    International Nuclear Information System (INIS)

    Palombo, G.; Narsky, I.

    2009-01-01

    Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about data is therefore an important task in analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, such as the imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community (http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ('Add N Remove R') implemented in SPR.

  17. QUASI-STELLAR OBJECT SELECTION ALGORITHM USING TIME VARIABILITY AND MACHINE LEARNING: SELECTION OF 1620 QUASI-STELLAR OBJECT CANDIDATES FROM MACHO LARGE MAGELLANIC CLOUD DATABASE

    International Nuclear Information System (INIS)

    Kim, Dae-Won; Protopapas, Pavlos; Alcock, Charles; Trichas, Markos; Byun, Yong-Ik; Khardon, Roni

    2011-01-01

    We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
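
    A hedged sketch of the classification step described above is given below: a support vector machine separating QSOs from other variables using a table of extracted light-curve features. The feature table here is random toy data; the real pipeline uses period, amplitude, color, autocorrelation and the MACHO light curves.

```python
# Sketch: QSO vs. non-QSO classification with an RBF support vector machine on
# extracted light-curve features (toy features, heavily imbalanced classes).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_qso, n_star = 60, 1600
features_qso = rng.normal(loc=[0.5, 1.0, 0.2], scale=0.3, size=(n_qso, 3))
features_star = rng.normal(loc=[0.0, 0.3, 0.8], scale=0.3, size=(n_star, 3))
X = np.vstack([features_qso, features_star])
y = np.r_[np.ones(n_qso), np.zeros(n_star)]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
scores = cross_val_score(clf, X, y, cv=5, scoring="recall")   # recall ~ QSO completeness
print("cross-validated QSO recall:", scores.mean())
```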

  18. Using variable combination population analysis for variable selection in multivariate calibration.

    Science.gov (United States)

    Yun, Yong-Huan; Wang, Wei-Ting; Deng, Bai-Chuan; Lai, Guang-Bi; Liu, Xin-bo; Ren, Da-Bing; Liang, Yi-Zeng; Fan, Wei; Xu, Qing-Song

    2015-03-03

    Variable (wavelength or feature) selection techniques have become a critical step for the analysis of datasets with a high number of variables and relatively few samples. In this study, a novel variable selection strategy, variable combination population analysis (VCPA), was proposed. This strategy consists of two crucial procedures. First, the exponentially decreasing function (EDF), which follows the simple and effective 'survival of the fittest' principle from Darwin's theory of natural evolution, is employed to determine the number of variables to keep and to continuously shrink the variable space. Second, in each EDF run, a binary matrix sampling (BMS) strategy, which gives each variable the same chance to be selected and generates different variable combinations, is used to produce a population of subsets and thereby a population of sub-models. Then, model population analysis (MPA) is employed to find the variable subsets with the lowest root mean square error of cross validation (RMSECV). The frequency of each variable appearing in the best 10% sub-models is computed. The higher the frequency is, the more important the variable is. The performance of the proposed procedure was investigated using three real NIR datasets. The results indicate that VCPA is a good variable selection strategy when compared with four high performing variable selection methods: genetic algorithm-partial least squares (GA-PLS), Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS), competitive adaptive reweighted sampling (CARS) and iteratively retains informative variables (IRIV). The MATLAB source code of VCPA is available for academic research on the website: http://www.mathworks.com/matlabcentral/fileexchange/authors/498750. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high

  20. Variability of Travel Times on New Jersey Highways

    Science.gov (United States)

    2011-06-01

    This report presents the results of a link and path travel time study conducted on selected New Jersey (NJ) highways to produce estimates of the corresponding variability of travel time (VTT) by departure time of the day and days of the week. The tra...

  1. Characterizing the Optical Variability of Bright Blazars: Variability-based Selection of Fermi Active Galactic Nuclei

    Science.gov (United States)

    Ruan, John J.; Anderson, Scott F.; MacLeod, Chelsea L.; Becker, Andrew C.; Burnett, T. H.; Davenport, James R. A.; Ivezić, Željko; Kochanek, Christopher S.; Plotkin, Richard M.; Sesar, Branimir; Stuart, J. Scott

    2012-11-01

    We investigate the use of optical photometric variability to select and identify blazars in large-scale time-domain surveys, in part to aid in the identification of blazar counterparts to the ~30% of γ-ray sources in the Fermi 2FGL catalog still lacking reliable associations. Using data from the optical LINEAR asteroid survey, we characterize the optical variability of blazars by fitting a damped random walk model to individual light curves with two main model parameters, the characteristic timescales of variability τ, and driving amplitudes on short timescales σ̂. Imposing cuts on minimum τ and σ̂ allows for blazar selection with high efficiency E and completeness C. To test the efficacy of this approach, we apply this method to optically variable LINEAR objects that fall within the several-arcminute error ellipses of γ-ray sources in the Fermi 2FGL catalog. Despite the extreme stellar contamination at the shallow depth of the LINEAR survey, we are able to recover previously associated optical counterparts to Fermi active galactic nuclei with E ≥ 88% and C = 88% in Fermi 95% confidence error ellipses having semimajor axis r < 8'. We find that the suggested radio counterpart to Fermi source 2FGL J1649.6+5238 has optical variability consistent with other γ-ray blazars and is likely to be the γ-ray source. Our results suggest that the variability of the non-thermal jet emission in blazars is stochastic in nature, with unique variability properties due to the effects of relativistic beaming. After correcting for beaming, we estimate that the characteristic timescale of blazar variability is ~3 years in the rest frame of the jet, in contrast with the ~320 day disk flux timescale observed in quasars. The variability-based selection method presented will be useful for blazar identification in time-domain optical surveys and is also a probe of jet physics.
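
    A minimal sketch of fitting the two damped-random-walk parameters named above (timescale τ and driving amplitude σ̂) is given below, using a brute-force grid search over the Gaussian-process likelihood of one simulated light curve. This is a compact illustration under an assumed DRW covariance parameterization, not the LINEAR pipeline used in the paper.

```python
# Sketch: brute-force maximum-likelihood fit of a damped random walk (DRW) to
# one simulated, irregularly sampled light curve.
import numpy as np

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0, 2000, 120))             # epochs in days
tau_true, sig_true, err = 300.0, 0.02, 0.05        # days, mag/day^0.5, mag

def drw_cov(t, tau, sig, err):
    """DRW covariance: (sig^2 * tau / 2) * exp(-|dt|/tau) plus photometric noise."""
    dt = np.abs(t[:, None] - t[None, :])
    return (sig**2 * tau / 2.0) * np.exp(-dt / tau) + err**2 * np.eye(t.size)

# Simulate a light curve from the true covariance.
mag = rng.multivariate_normal(np.zeros(t.size), drw_cov(t, tau_true, sig_true, err))

def loglike(tau, sig):
    C = drw_cov(t, tau, sig, err)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (mag @ np.linalg.solve(C, mag) + logdet)

taus = np.logspace(1, 3.5, 40)
sigs = np.logspace(-2.5, -1, 40)
ll = np.array([[loglike(tau, sig) for sig in sigs] for tau in taus])
i, j = np.unravel_index(ll.argmax(), ll.shape)
print(f"tau ~ {taus[i]:.0f} d, sigma-hat ~ {sigs[j]:.3f} mag/day^0.5")
```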

  2. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Full Text Available Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can do consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: for the adaptive lasso, if predictors are normalised after the introduction of adaptive weights, the adaptive lasso's performance becomes identical to that of the lasso.
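
    A minimal sketch of the adaptive lasso via reweighted predictors is given below (weights from a pilot OLS fit with exponent γ = 1; this is the standard reweighting trick on simulated data, not the novel algorithm of the record above):

```python
# Sketch: adaptive lasso by rescaling columns. With weights w_j = 1/|beta_ols_j|,
# a plain lasso on X_j / w_j is equivalent to coefficient-specific penalties.
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV

rng = np.random.default_rng(8)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.r_[2.0, 0.0, -1.5, np.zeros(p - 3)]
y = X @ beta + rng.normal(size=n)

# Step 1: pilot OLS estimates define the adaptive weights.
beta_ols = LinearRegression().fit(X, y).coef_
w = 1.0 / np.abs(beta_ols)

# Step 2: lasso on the rescaled design, then undo the scaling.
lasso = LassoCV(cv=5).fit(X / w, y)
beta_adaptive = lasso.coef_ / w
print("selected:", np.flatnonzero(beta_adaptive != 0))
```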

  3. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

    Science.gov (United States)

    Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

    2014-12-30

    For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling for clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons
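
    As a small, hedged illustration of the combination principle named above (Fisher's method applied to per-model score-test p-values for one covariate; the p-values below are made up):

```python
# Sketch: combine per-model p-values for the same covariate with Fisher's method.
from scipy.stats import combine_pvalues

# e.g., p-values for one covariate from several cause-specific Cox models
p_values = [0.04, 0.20, 0.08]
statistic, p_combined = combine_pvalues(p_values, method="fisher")
print(f"Fisher chi-square = {statistic:.2f}, combined p = {p_combined:.4f}")
```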

  4. CHARACTERIZING THE OPTICAL VARIABILITY OF BRIGHT BLAZARS: VARIABILITY-BASED SELECTION OF FERMI ACTIVE GALACTIC NUCLEI

    International Nuclear Information System (INIS)

    Ruan, John J.; Anderson, Scott F.; MacLeod, Chelsea L.; Becker, Andrew C.; Davenport, James R. A.; Ivezić, Željko; Burnett, T. H.; Kochanek, Christopher S.; Plotkin, Richard M.; Sesar, Branimir; Stuart, J. Scott

    2012-01-01

    We investigate the use of optical photometric variability to select and identify blazars in large-scale time-domain surveys, in part to aid in the identification of blazar counterparts to the ∼30% of γ-ray sources in the Fermi 2FGL catalog still lacking reliable associations. Using data from the optical LINEAR asteroid survey, we characterize the optical variability of blazars by fitting a damped random walk model to individual light curves with two main model parameters, the characteristic timescales of variability τ, and driving amplitudes on short timescales σ-circumflex. Imposing cuts on minimum τ and σ-circumflex allows for blazar selection with high efficiency E and completeness C. To test the efficacy of this approach, we apply this method to optically variable LINEAR objects that fall within the several-arcminute error ellipses of γ-ray sources in the Fermi 2FGL catalog. Despite the extreme stellar contamination at the shallow depth of the LINEAR survey, we are able to recover previously associated optical counterparts to Fermi active galactic nuclei with E ≥ 88% and C = 88% in Fermi 95% confidence error ellipses having semimajor axis r < 8'. We find that the suggested radio counterpart to Fermi source 2FGL J1649.6+5238 has optical variability consistent with other γ-ray blazars and is likely to be the γ-ray source. Our results suggest that the variability of the non-thermal jet emission in blazars is stochastic in nature, with unique variability properties due to the effects of relativistic beaming. After correcting for beaming, we estimate that the characteristic timescale of blazar variability is ∼3 years in the rest frame of the jet, in contrast with the ∼320 day disk flux timescale observed in quasars. The variability-based selection method presented will be useful for blazar identification in time-domain optical surveys and is also a probe of jet physics.

  5. Periodicity and stability for variable-time impulsive neural networks.

    Science.gov (United States)

    Li, Hongfei; Li, Chuandong; Huang, Tingwen

    2017-10-01

    The paper considers a general neural network model with variable-time impulses. Under several new, well-chosen assumptions, it is shown that each solution of the system intersects every discontinuity surface exactly once. Moreover, based on the comparison principle, this paper shows that neural networks with variable-time impulses can be reduced to the corresponding neural networks with fixed-time impulses under well-selected conditions. Meanwhile, the fixed-time impulsive systems can be regarded as the comparison system of the variable-time impulsive neural networks. Furthermore, a series of sufficient criteria are derived to ensure the existence and global exponential stability of periodic solutions of variable-time impulsive neural networks, and to show that variable-time impulsive neural networks and their fixed-time counterparts share the same stability properties. The new criteria are established by applying Schaefer's fixed point theorem combined with inequality techniques. Finally, a numerical example is presented to show the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Variable selection in multivariate calibration based on clustering of variable concept.

    Science.gov (United States)

    Farrokhnia, Maryam; Karimi, Sadegh

    2016-01-01

    Recently we proposed a new variable selection algorithm based on the clustering of variables concept (CLoVA) for classification problems. With the same idea, this new concept has been applied to a regression problem, and the obtained results have been compared with conventional variable selection strategies for PLS. The basic idea behind the clustering of variables is that the instrument channels are clustered into different clusters via clustering algorithms. Then, the spectral data of each cluster are subjected to PLS regression. Different real data sets (Cargill corn, Biscuit dough, ACE QSAR, Soy, and Tablet) have been used to evaluate the influence of the clustering of variables on the prediction performance of PLS. In almost all cases, the statistical parameters, especially the prediction error, show the superiority of CLoVA-PLS with respect to other variable selection strategies. Finally, synergy clustering of variables (sCLoVA-PLS), which uses a combination of clusters, is proposed as an efficient modification of the CLoVA algorithm. The obtained statistical parameters indicate that variable clustering can separate the useful variables from redundant ones, so that a stable model can be reached based on the informative clusters. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.

    Science.gov (United States)

    Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf

    2018-01-01

    Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so that for each data set, 9 CoMFA models were built. The obtained results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS runs need only a few seconds. Applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS also preserves CoMFA contour map information for both fields. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  8. Purposeful selection of variables in logistic regression

    Directory of Open Access Journals (Sweden)

    Williams David Keith

    2008-12-01

    Full Text Available Abstract Background The main problem in many model-building situations is to choose from a large set of covariates those that should be included in the "best" model. A decision to keep a variable in the model might be based on the clinical or statistical significance. There are several variable selection algorithms in existence. Those methods are mechanical and as such carry some limitations. Hosmer and Lemeshow describe a purposeful selection of covariates within which an analyst makes a variable selection decision at each step of the modeling process. Methods In this paper we introduce an algorithm which automates that process. We conduct a simulation study to compare the performance of this algorithm with three well documented variable selection procedures in SAS PROC LOGISTIC: FORWARD, BACKWARD, and STEPWISE. Results We show that the advantage of this approach is when the analyst is interested in risk factor modeling and not just prediction. In addition to significant covariates, this variable selection procedure has the capability of retaining important confounding variables, resulting potentially in a slightly richer model. Application of the macro is further illustrated with the Hosmer and Lemeshow Worcester Heart Attack Study (WHAS) data. Conclusion If an analyst is in need of an algorithm that will help guide the retention of significant covariates as well as confounding ones they should consider this macro as an alternative tool.

  9. Using Variable Dwell Time to Accelerate Gaze-based Web Browsing with Two-step Selection

    OpenAIRE

    Chen, Zhaokang; Shi, Bertram E.

    2017-01-01

    In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose an algorithm for adjusting the dwell times of different objects based on the inferred probability that the user intends to select them. In particular, we introduce a probabilistic model of natural gaze behavior while sur...

  10. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)]

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
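
    A minimal sketch of the general idea (rank descriptors by mutual information with the response and, within highly collinear pairs, keep only the higher-MI member) is given below. The correlation threshold, data, and ranking rule are illustrative assumptions, not the MIMRCV algorithm or the QSPR datasets of the record above.

```python
# Sketch: mutual-information ranking with replacement of collinear descriptors.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(9)
n, p = 300, 12
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)      # descriptor 1 nearly collinear with 0
y = np.sin(X[:, 0]) + 0.5 * X[:, 4] ** 2 + 0.1 * rng.normal(size=n)

mi = mutual_info_regression(X, y, random_state=0)
corr = np.corrcoef(X, rowvar=False)

keep = set(range(p))
for i in range(p):
    for j in range(i + 1, p):
        if abs(corr[i, j]) > 0.9 and i in keep and j in keep:
            keep.discard(i if mi[i] < mi[j] else j)   # drop the less informative one

ranked = sorted(keep, key=lambda k: mi[k], reverse=True)
print("top descriptors after collinearity replacement:", ranked[:5])
```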

  11. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.

  12. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision-making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is highly used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
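
    The tutorial above provides R code; as a separate, minimal Python sketch of the same idea (a bare-bones genetic algorithm where each individual is a bitstring over the candidate variables and fitness is cross-validated logistic-regression accuracy; population size, operators, and data are arbitrary toy choices):

```python
# Sketch: a toy genetic algorithm for variable selection in logistic regression.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           n_redundant=2, random_state=0)
n_pop, n_gen, p_mut = 30, 20, 0.05

def fitness(mask):
    """Cross-validated accuracy of a logistic model on the selected columns."""
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.random((n_pop, X.shape[1])) < 0.5        # random initial bitstrings
for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][: n_pop // 2]]   # truncation selection
    children = []
    while len(children) < n_pop - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])          # one-point crossover
        child = np.r_[a[:cut], b[cut:]]
        child ^= rng.random(child.size) < p_mut    # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected variables:", np.flatnonzero(best))
```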

  13. Variable and subset selection in PLS regression

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2001-01-01

    The purpose of this paper is to present some useful methods for introductory analysis of variables and subsets in relation to PLS regression. We present here methods that are efficient in finding the appropriate variables or subset to use in the PLS regression. The general conclusion ... is that variable selection is important for successful analysis of chemometric data. An important aspect of the results presented is that lack of variable selection can spoil the PLS regression, and that cross-validation measures using a test set can show larger variation, when we use different subsets of X, than...

  14. Comparison of selected variables of gaming performance in football

    OpenAIRE

    Parachin, Jiří

    2014-01-01

    Title: Comparison of selected variables of gaming performance in football Objectives: Analysis of selected variables of gaming performance in the matches of professional Czech football teams in the Champions League and UEFA Europa League in 2013; during the observation the set variables are registered, and the obtained results are then evaluated and compared. Methods: The use of observational analysis and comparison of selected variables of gaming performance in competitive matches of professional football. ...

  15. Constructing Proxy Variables to Measure Adult Learners' Time Management Strategies in LMS

    Science.gov (United States)

    Jo, Il-Hyun; Kim, Dongho; Yoon, Meehyun

    2015-01-01

    This study describes the process of constructing proxy variables from recorded log data within a Learning Management System (LMS), which represents adult learners' time management strategies in an online course. Based on previous research, three variables of total login time, login frequency, and regularity of login interval were selected as…
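
    A small, hedged sketch of deriving the three proxy variables named above (total login time, login frequency, and regularity of the login interval) from a toy LMS log table follows; the column names are assumptions about the log schema, not the study's actual data.

```python
# Sketch: compute per-learner time-management proxies from an LMS login log.
import numpy as np
import pandas as pd

log = pd.DataFrame({
    "user": ["a", "a", "a", "b", "b"],
    "login": pd.to_datetime(["2015-03-01 10:00", "2015-03-03 09:30",
                             "2015-03-08 20:00", "2015-03-01 08:00",
                             "2015-03-02 08:05"]),
    "minutes_online": [35, 50, 20, 90, 75],
})

def time_management(group):
    intervals = group["login"].sort_values().diff().dt.total_seconds().dropna() / 3600
    return pd.Series({
        "total_login_time": group["minutes_online"].sum(),
        "login_frequency": len(group),
        # Lower standard deviation of the login interval = more regular study pattern.
        "login_regularity": intervals.std() if len(intervals) > 1 else np.nan,
    })

print(log.groupby("user").apply(time_management))
```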

  16. Bayesian Group Bridge for Bi-level Variable Selection.

    Science.gov (United States)

    Mallick, Himel; Yi, Nengjun

    2017-06-01

    A Bayesian bi-level variable selection method (BAGB: Bayesian Analysis of Group Bridge) is developed for regularized regression and classification. This new development is motivated by grouped data, where generic variables can be divided into multiple groups, with variables in the same group being mechanistically related or statistically correlated. As an alternative to frequentist group variable selection methods, BAGB incorporates structural information among predictors through a group-wise shrinkage prior. Posterior computation proceeds via an efficient MCMC algorithm. In addition to the usual ease-of-interpretation of hierarchical linear models, the Bayesian formulation produces valid standard errors, a feature that is notably absent in the frequentist framework. Empirical evidence of the attractiveness of the method is illustrated by extensive Monte Carlo simulations and real data analysis. Finally, several extensions of this new approach are presented, providing a unified framework for bi-level variable selection in general models with flexible penalties.

  17. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasting estimates is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various leading periods are selected, and random forests (RF are employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of input variable selection based on RF. The results of the case study show that by removing the uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the training time required, and substantially improves the model’s accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.

  18. A Simple K-Map Based Variable Selection Scheme in the Direct ...

    African Journals Online (AJOL)

    A multiplexer with (n-1) data select inputs can realise directly a function of n variables. In this paper, a simple K-map based variable selection scheme is proposed such that an n variable logic function can be synthesised using a multiplexer with (n-q) data input variables and q data select variables. The procedure is based on ...

  19. Genome-wide prediction of traits with different genetic architecture through efficient variable selection.

    Science.gov (United States)

    Wimmer, Valentin; Lehermeier, Christina; Albrecht, Theresa; Auinger, Hans-Jürgen; Wang, Yu; Schön, Chris-Carolin

    2013-10-01

    In genome-based prediction there is considerable uncertainty about the statistical model and method required to maximize prediction accuracy. For traits influenced by a small number of quantitative trait loci (QTL), predictions are expected to benefit from methods performing variable selection [e.g., BayesB or the least absolute shrinkage and selection operator (LASSO)] compared to methods distributing effects across the genome [ridge regression best linear unbiased prediction (RR-BLUP)]. We investigate the assumptions underlying successful variable selection by combining computer simulations with large-scale experimental data sets from rice (Oryza sativa L.), wheat (Triticum aestivum L.), and Arabidopsis thaliana (L.). We demonstrate that variable selection can be successful when the number of phenotyped individuals is much larger than the number of causal mutations contributing to the trait. We show that the sample size required for efficient variable selection increases dramatically with decreasing trait heritabilities and increasing extent of linkage disequilibrium (LD). We contrast and discuss contradictory results from simulation and experimental studies with respect to superiority of variable selection methods over RR-BLUP. Our results demonstrate that due to long-range LD, medium heritabilities, and small sample sizes, superiority of variable selection methods cannot be expected in plant breeding populations even for traits like FRIGIDA gene expression in Arabidopsis and flowering time in rice, assumed to be influenced by a few major QTL. We extend our conclusions to the analysis of whole-genome sequence data and infer upper bounds for the number of causal mutations which can be identified by LASSO. Our results have major impact on the choice of statistical method needed to make credible inferences about genetic architecture and prediction accuracy of complex traits.

  20. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.

  1. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method and there was low overlap in the variable sets, although model performance was high for both approaches (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable

  2. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    Science.gov (United States)

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.

  3. Variability of gastric emptying time using standardized radiolabeled meals

    International Nuclear Information System (INIS)

    Christian, P.E.; Brophy, C.M.; Egger, M.J.; Taylor, A.; Moore, J.G.

    1984-01-01

    To define the range of inter- and intra-subject variability on gastric emptying measurements, eight healthy male subjects (ages 19-40) received meals on four separate occasions. The meal consisted of 150 g of beef stew labeled with Tc-99m SC labeled liver (600 μCi) and 150 g of orange juice containing In-111 DTPA (100 μCi) as the solid- and liquid-phase markers respectively. Images of the solid and liquid phases were obtained at 20 min intervals immediately after meal ingestion. The stomach region was selected from digital images and data were corrected for radionuclide interference, radioactive decay and the geometric mean of anterior and posterior counts. More absolute variability was seen with the solid than the liquid marker emptying for the group. The mean solid half-emptying time was 58 ± 17 min (range 29-92) while the mean liquid half-emptying time was 24 ± 8 min (range 12-37). A nested random effects analysis of variance showed moderate intra-subject variability for solid half-emptying times (rho = 0.4594), and high intra-subject variability was implied by a low correlation (rho = 0.2084) for liquid half-emptying. The average inter-subject differences were 58.3% of the total variance for solids (rho = 0.0017). For liquids, the inter-subject variability was 69.1% of the total variance, but was only suggestive of statistical significance (rho = 0.0666). The normal half emptying time for gastric emptying of liquids and solids is a variable phenomenon in healthy subjects and has great inter- and intra-individual day-to-day differences.
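
    For readers unfamiliar with the geometric-mean correction mentioned above, a minimal illustrative computation (all numbers invented, not the study's data) of corrected retention and a half-emptying time obtained by interpolation:

        import numpy as np

        t_min = np.array([0, 20, 40, 60, 80, 100])                     # imaging times (min)
        anterior = np.array([100e3, 82e3, 63e3, 47e3, 35e3, 26e3])     # decay-corrected counts
        posterior = np.array([92e3, 75e3, 58e3, 44e3, 33e3, 25e3])

        geo_mean = np.sqrt(anterior * posterior)       # reduces depth-dependent attenuation
        retention = geo_mean / geo_mean[0]             # fraction of the meal still in the stomach

        # half-emptying time: interpolate where retention crosses 50%
        t_half = np.interp(0.5, retention[::-1], t_min[::-1])
        print(f"estimated solid half-emptying time: {t_half:.0f} min")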

  4. Variability of gastric emptying time using standardized radiolabeled meals

    Energy Technology Data Exchange (ETDEWEB)

    Christian, P.E.; Brophy, C.M.; Egger, M.J.; Taylor, A.; Moore, J.G.

    1984-01-01

    To define the range of inter- and intra-subject variability on gastric emptying measurements, eight healthy male subjects (ages 19-40) received meals on four separate occasions. The meal consisted of 150 g of beef stew labeled with Tc-99m SC labeled liver (600 μCi) and 150 g of orange juice containing In-111 DTPA (100 μCi) as the solid- and liquid-phase markers respectively. Images of the solid and liquid phases were obtained at 20 min intervals immediately after meal ingestion. The stomach region was selected from digital images and data were corrected for radionuclide interference, radioactive decay and the geometric mean of anterior and posterior counts. More absolute variability was seen with the solid than the liquid marker emptying for the group. The mean solid half-emptying time was 58 ± 17 min (range 29-92) while the mean liquid half-emptying time was 24 ± 8 min (range 12-37). A nested random effects analysis of variance showed moderate intra-subject variability for solid half-emptying times (rho = 0.4594), and high intra-subject variability was implied by a low correlation (rho = 0.2084) for liquid half-emptying. The average inter-subject differences were 58.3% of the total variance for solids (rho = 0.0017). For liquids, the inter-subject variability was 69.1% of the total variance, but was only suggestive of statistical significance (rho = 0.0666). The normal half emptying time for gastric emptying of liquids and solids is a variable phenomenon in healthy subjects and has great inter- and intra-individual day-to-day differences.

  5. Effects of carprofen or meloxicam on selected haemostatic variables in miniature pigs after orthopaedic surgery

    Directory of Open Access Journals (Sweden)

    Petr Raušer

    2011-01-01

    Full Text Available The aim of the study was to detect and compare the haemostatic variables and bleeding after 7-day administration of carprofen or meloxicam in clinically healthy miniature pigs. Twenty-one clinically healthy Göttingen miniature pigs were divided into 3 groups. Selected haemostatic variables such as platelet count, prothrombin time, activated partial thromboplastin time, thrombin time, fibrinogen, serum biochemical variables such as total protein, bilirubin, urea, creatinine, alkaline phosphatase, alanine aminotransferase and gamma-glutamyltransferase and haemoglobin, haematocrit, red blood cells, white blood cells and buccal mucosal bleeding time were assessed before and 7 days after daily intramuscular administration of saline (1.5 ml per animal, control group), carprofen (2 mg·kg-1) or meloxicam (0.1 mg·kg-1). In pigs receiving carprofen or meloxicam, the thrombin time was significantly increased (p < 0.05) compared to the control group. Significant differences were not detected in other haemostatic or biochemical variables or in bleeding time compared to other groups or to the pretreatment values. Intramuscular administration of carprofen or meloxicam in healthy miniature pigs for 7 days causes sporadic, but not clinically important, changes in selected haemostatic variables. Therefore, we can recommend them for perioperative use, e.g. for their analgesic effects, in orthopaedic or other surgical procedures without increased bleeding.

  6. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of the autism spectrum disorder (ASD) using high resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of the ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.

  7. Three-Factor Market-Timing Models with Fama and French's Spread Variables

    Directory of Open Access Journals (Sweden)

    Joanna Olbryś

    2010-01-01

    Full Text Available The traditional performance measurement literature has attempted to distinguish security selection, or stock-picking ability, from market-timing, or the ability to predict overall market returns. However, the literature finds that it is not easy to separate ability into such dichotomous categories. Some researchers have developed models that allow the decomposition of manager performance into market-timing and selectivity skills. The main goal of this paper is to present modified versions of classical market-timing models with Fama and French’s spread variables SMB and HML, in the case of Polish equity mutual funds. (original abstract)

  8. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, which is based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without artificial processes. To illustrate the feasibility and effectiveness of the method, a comparison with genetic algorithm (GA) and successive projections algorithm (SPA) for different elements (copper, barium and chromium) detection in soil was implemented. The experimental results showed that all the three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (12 s approximately for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, which showed prediction performance comparable to GA and SPA.

  9. Exhaustive Search for Sparse Variable Selection in Linear Regression

    Science.gov (United States)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as density of states. With this density of states, we can compare different methods for selecting sparse variables such as relaxation and sampling. For large problems where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage.
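
    A minimal sketch of the exhaustive part of the idea in Python (ordinary least squares over every K-subset, ranked by cross-validated error; the replica-exchange Monte Carlo machinery of AES-K is not reproduced here and all data are simulated):

        from itertools import combinations
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        n, p, K = 200, 10, 2
        X = rng.normal(size=(n, p))
        y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(scale=0.5, size=n)

        results = []
        for subset in combinations(range(p), K):       # exhaustive search over K-subsets
            mse = -cross_val_score(LinearRegression(), X[:, subset], y,
                                   cv=5, scoring="neg_mean_squared_error").mean()
            results.append((mse, subset))

        results.sort()
        print("best K-sparse subsets (lower CV MSE is better):")
        for mse, subset in results[:3]:
            print(f"  variables {subset}: CV MSE = {mse:.3f}")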

  10. Combining epidemiologic and biostatistical tools to enhance variable selection in HIV cohort analyses.

    Directory of Open Access Journals (Sweden)

    Christopher Rentsch

    Full Text Available BACKGROUND: Variable selection is an important step in building a multivariate regression model for which several methods and statistical packages are available. A comprehensive approach for variable selection in complex multivariate regression analyses within HIV cohorts is explored by utilizing both epidemiological and biostatistical procedures. METHODS: Three different methods for variable selection were illustrated in a study comparing survival time between subjects in the Department of Defense's National History Study and the Atlanta Veterans Affairs Medical Center's HIV Atlanta VA Cohort Study. The first two methods were stepwise selection procedures, based either on significance tests (Score test) or on information theory (Akaike Information Criterion), while the third method employed a Bayesian argument (Bayesian Model Averaging). RESULTS: All three methods resulted in a similar parsimonious survival model. Three of the covariates previously used in the multivariate model were not included in the final model suggested by the three approaches. When comparing the parsimonious model to the previously published model, there was evidence of less variance in the main survival estimates. CONCLUSIONS: The variable selection approaches considered in this study allowed building a model based on significance tests, on an information criterion, and on averaging models using their posterior probabilities. A parsimonious model that balanced these three approaches was found to provide a better fit than the previously reported model.
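
    A hedged sketch of one of the three strategies mentioned above, forward stepwise selection driven by the Akaike Information Criterion; for simplicity it uses ordinary least squares on simulated data rather than the survival models of the study:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n, p = 300, 8
        X = rng.normal(size=(n, p))
        y = 1.0 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(size=n)

        selected, remaining = [], list(range(p))
        current_aic = sm.OLS(y, np.ones((n, 1))).fit().aic     # intercept-only model

        while remaining:
            aics = {j: sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit().aic
                    for j in remaining}
            best_j, best_aic = min(aics.items(), key=lambda kv: kv[1])
            if best_aic >= current_aic:                        # stop when AIC stops improving
                break
            selected.append(best_j)
            remaining.remove(best_j)
            current_aic = best_aic

        print("variables retained by forward AIC selection:", selected)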

  11. THE TIME-DOMAIN SPECTROSCOPIC SURVEY: UNDERSTANDING THE OPTICALLY VARIABLE SKY WITH SEQUELS IN SDSS-III

    International Nuclear Information System (INIS)

    Ruan, John J.; Anderson, Scott F.; Davenport, James R. A.; Green, Paul J.; Morganson, Eric; Eracleous, Michael; Brandt, William N.; Myers, Adam D.; Badenes, Carles; Bershady, Matthew A.; Chambers, Kenneth C.; Flewelling, Heather; Kaiser, Nick; Dawson, Kyle S.; Heckman, Timothy M.; Isler, Jedidah C.; Kneib, Jean-Paul; MacLeod, Chelsea L.; Ross, Nicholas P.; Paris, Isabelle

    2016-01-01

    The Time-Domain Spectroscopic Survey (TDSS) is an SDSS-IV eBOSS subproject primarily aimed at obtaining identification spectra of ∼220,000 optically variable objects systematically selected from SDSS/Pan-STARRS1 multi-epoch imaging. We present a preview of the science enabled by TDSS, based on TDSS spectra taken over ∼320 deg 2 of sky as part of the SEQUELS survey in SDSS-III, which is in part a pilot survey for eBOSS in SDSS-IV. Using the 15,746 TDSS-selected single-epoch spectra of photometrically variable objects in SEQUELS, we determine the demographics of our variability-selected sample and investigate the unique spectral characteristics inherent in samples selected by variability. We show that variability-based selection of quasars complements color-based selection by selecting additional redder quasars and mitigates redshift biases to produce a smooth quasar redshift distribution over a wide range of redshifts. The resulting quasar sample contains systematically higher fractions of blazars and broad absorption line quasars than from color-selected samples. Similarly, we show that M dwarfs in the TDSS-selected stellar sample have systematically higher chromospheric active fractions than the underlying M-dwarf population based on their H α emission. TDSS also contains a large number of RR Lyrae and eclipsing binary stars with main-sequence colors, including a few composite-spectrum binaries. Finally, our visual inspection of TDSS spectra uncovers a significant number of peculiar spectra, and we highlight a few cases of these interesting objects. With a factor of ∼15 more spectra, the main TDSS survey in SDSS-IV will leverage the lessons learned from these early results for a variety of time-domain science applications.

  12. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    Full Text Available As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
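
    The core stability-selection loop can be sketched in a few lines of Python; here a plain linear lasso stands in for the Cox-lasso base learner used in the paper, and the subsample size, penalty and threshold are illustrative assumptions:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(4)
        n, p, n_subsamples, threshold = 400, 50, 100, 0.6
        X = rng.normal(size=(n, p))
        y = 1.5 * X[:, 0] - 1.0 * X[:, 5] + 0.8 * X[:, 9] + rng.normal(size=n)

        counts = np.zeros(p)
        for _ in range(n_subsamples):
            idx = rng.choice(n, size=n // 2, replace=False)    # subsample half the data
            coef = Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_
            counts += (coef != 0)                              # record which variables were picked

        stable = np.where(counts / n_subsamples >= threshold)[0]
        print("stable variables:", stable)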

  13. Variable selection methods in PLS regression - a comparison study on metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS based metabolomic approach. Due to the high number of variables in the data sets (both raw data and after peak picking), the selection of important variables (or removal of irrelevant ones) in an explorative analysis is difficult, especially when different sets of metabolomics data need to be related in an integrated approach. Different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subset of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with jack-knifing [2] was applied to the data in order to achieve variable selection. References: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when ...

  14. The XRF spectrometer and the selection of analysis conditions (instrumental variables)

    International Nuclear Information System (INIS)

    Willis, J.P.

    2002-01-01

    Full text: This presentation will begin with a brief discussion of EDXRF and flat- and curved-crystal WDXRF spectrometers, contrasting the major differences between the three types. The remainder of the presentation will contain a detailed overview of the choice and settings of the many instrumental variables contained in a modern WDXRF spectrometer, and will discuss critically the choices facing the analyst in setting up a WDXRF spectrometer for different elements and applications. In particular it will discuss the choice of tube target (when a choice is possible), the kV and mA settings, tube filters, collimator masks, collimators, analyzing crystals, secondary collimators, detectors, pulse height selection, X-ray path medium (air, nitrogen, vacuum or helium), counting times for peak and background positions and their effect on counting statistics and lower limit of detection (LLD). The use of Figure of Merit (FOM) calculations to objectively choose the best combination of instrumental variables also will be discussed. This presentation will be followed by a shorter session on a subsequent day entitled - A Selection of XRF Conditions - Practical Session, where participants will be given the opportunity to discuss in groups the selection of the best instrumental variables for three very diverse applications. Copyright (2002) Australian X-ray Analytical Association Inc

  15. Variability-based active galactic nucleus selection using image subtraction in the SDSS and LSST era

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yumi; Gibson, Robert R.; Becker, Andrew C.; Ivezić, Željko; Connolly, Andrew J.; Ruan, John J.; Anderson, Scott F. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); MacLeod, Chelsea L., E-mail: ymchoi@astro.washington.edu [Physics Department, U.S. Naval Academy, 572 Holloway Road, Annapolis, MD 21402 (United States)

    2014-02-10

    With upcoming all-sky surveys such as LSST poised to generate a deep digital movie of the optical sky, variability-based active galactic nucleus (AGN) selection will enable the construction of highly complete catalogs with minimum contamination. In this study, we generate g-band difference images and construct light curves (LCs) for QSO/AGN candidates listed in Sloan Digital Sky Survey Stripe 82 public catalogs compiled from different methods, including spectroscopy, optical colors, variability, and X-ray detection. Image differencing excels at identifying variable sources embedded in complex or blended emission regions such as Type II AGNs and other low-luminosity AGNs that may be omitted from traditional photometric or spectroscopic catalogs. To separate QSOs/AGNs from other sources using our difference image LCs, we explore several LC statistics and parameterize optical variability by the characteristic damping timescale (τ) and variability amplitude. By virtue of distinguishable variability parameters of AGNs, we are able to select them with high completeness of 93.4% and efficiency (i.e., purity) of 71.3%. Based on optical variability, we also select highly variable blazar candidates, whose infrared colors are consistent with known blazars. One-third of them are also radio detected. With the X-ray selected AGN candidates, we probe the optical variability of X-ray detected optically extended sources using their difference image LCs for the first time. A combination of optical variability and X-ray detection enables us to select various types of host-dominated AGNs. Contrary to the AGN unification model prediction, two Type II AGN candidates (out of six) show detectable variability on long-term timescales like typical Type I AGNs. This study will provide a baseline for future optical variability studies of extended sources.

  16. THE TIME-DOMAIN SPECTROSCOPIC SURVEY: UNDERSTANDING THE OPTICALLY VARIABLE SKY WITH SEQUELS IN SDSS-III

    Energy Technology Data Exchange (ETDEWEB)

    Ruan, John J.; Anderson, Scott F.; Davenport, James R. A. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Green, Paul J.; Morganson, Eric [Harvard Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Eracleous, Michael; Brandt, William N. [Department of Astronomy and Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802 (United States); Myers, Adam D. [Department of Physics and Astronomy 3905, University of Wyoming, 1000 E. University, Laramie, WY 82071 (United States); Badenes, Carles [Department of Physics and Astronomy and Pittsburgh Particle Physics, Astrophysics, and Cosmology Center (PITT-PACC), University of Pittsburgh (United States); Bershady, Matthew A. [Department of Astronomy, University of Wisconsin-Madison, 475 N. Charter Street, Madison, WI 53706 (United States); Chambers, Kenneth C.; Flewelling, Heather; Kaiser, Nick [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); Dawson, Kyle S. [Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112 (United States); Heckman, Timothy M. [Center for Astrophysical Sciences, Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States); Isler, Jedidah C. [Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235 (United States); Kneib, Jean-Paul [Laboratoire d’astrophysique, Ecole Polytechnique Fédérale de Lausanne Observatoire de Sauverny, 1290 Versoix (Switzerland); MacLeod, Chelsea L.; Ross, Nicholas P. [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, EH9 3HJ (United Kingdom); Paris, Isabelle, E-mail: jruan@astro.washington.edu [INAF—Osservatorio Astronomico di Trieste, Via G. B. Tiepolo 11, I-34131 Trieste (Italy); and others

    2016-07-10

    The Time-Domain Spectroscopic Survey (TDSS) is an SDSS-IV eBOSS subproject primarily aimed at obtaining identification spectra of ∼220,000 optically variable objects systematically selected from SDSS/Pan-STARRS1 multi-epoch imaging. We present a preview of the science enabled by TDSS, based on TDSS spectra taken over ∼320 deg{sup 2} of sky as part of the SEQUELS survey in SDSS-III, which is in part a pilot survey for eBOSS in SDSS-IV. Using the 15,746 TDSS-selected single-epoch spectra of photometrically variable objects in SEQUELS, we determine the demographics of our variability-selected sample and investigate the unique spectral characteristics inherent in samples selected by variability. We show that variability-based selection of quasars complements color-based selection by selecting additional redder quasars and mitigates redshift biases to produce a smooth quasar redshift distribution over a wide range of redshifts. The resulting quasar sample contains systematically higher fractions of blazars and broad absorption line quasars than from color-selected samples. Similarly, we show that M dwarfs in the TDSS-selected stellar sample have systematically higher chromospheric active fractions than the underlying M-dwarf population based on their H α emission. TDSS also contains a large number of RR Lyrae and eclipsing binary stars with main-sequence colors, including a few composite-spectrum binaries. Finally, our visual inspection of TDSS spectra uncovers a significant number of peculiar spectra, and we highlight a few cases of these interesting objects. With a factor of ∼15 more spectra, the main TDSS survey in SDSS-IV will leverage the lessons learned from these early results for a variety of time-domain science applications.

  17. A spatio-temporal nonparametric Bayesian variable selection model of fMRI data for clustering correlated time courses.

    Science.gov (United States)

    Zhang, Linlin; Guindani, Michele; Versace, Francesco; Vannucci, Marina

    2014-07-15

    In this paper we present a novel wavelet-based Bayesian nonparametric regression model for the analysis of functional magnetic resonance imaging (fMRI) data. Our goal is to provide a joint analytical framework that allows us to detect regions of the brain which exhibit neuronal activity in response to a stimulus and, simultaneously, infer the association, or clustering, of spatially remote voxels that exhibit fMRI time series with similar characteristics. We start by modeling the data with a hemodynamic response function (HRF) with a voxel-dependent shape parameter. We detect regions of the brain activated in response to a given stimulus by using mixture priors with a spike at zero on the coefficients of the regression model. We account for the complex spatial correlation structure of the brain by using a Markov random field (MRF) prior on the parameters guiding the selection of the activated voxels, therefore capturing correlation among nearby voxels. In order to infer association of the voxel time courses, we assume correlated errors, in particular long memory, and exploit the whitening properties of discrete wavelet transforms. Furthermore, we achieve clustering of the voxels by imposing a Dirichlet process (DP) prior on the parameters of the long memory process. For inference, we use Markov Chain Monte Carlo (MCMC) sampling techniques that combine Metropolis-Hastings schemes employed in Bayesian variable selection with sampling algorithms for nonparametric DP models. We explore the performance of the proposed model on simulated data, with both block- and event-related design, and on real fMRI data. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Machine learning techniques to select variable stars

    Directory of Open Access Journals (Sweden)

    García-Varela Alejandro

    2017-01-01

    Full Text Available In order to perform a supervised classification of variable stars, we propose and evaluate a set of six features extracted from the magnitude density of the light curves. They are used to train automatic classification systems using state-of-the-art classifiers implemented in the R statistical computing environment. We find that random forests is the most successful method to select variables.

  19. A Variable-Selection Heuristic for K-Means Clustering.

    Science.gov (United States)

    Brusco, Michael J.; Cradit, J. Dennis

    2001-01-01

    Presents a variable selection heuristic for nonhierarchical (K-means) cluster analysis based on the adjusted Rand index for measuring cluster recovery. Subjected the heuristic to Monte Carlo testing across more than 2,200 datasets. Results indicate that the heuristic is extremely effective at eliminating masking variables. (SLD)
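
    A simplified sketch, loosely inspired by the heuristic above, of selecting variables for K-means in a Monte Carlo setting where the true partition is known: greedily add the variable that most improves the adjusted Rand index, so that pure-noise masking variables are left out (all settings are illustrative):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(5)
        n_per, k = 100, 3
        informative = np.vstack([rng.normal(loc=c, size=(n_per, 2)) for c in (0, 4, 8)])
        masking = rng.normal(size=(k * n_per, 4))              # noise-only masking variables
        X = np.hstack([informative, masking])
        truth = np.repeat(np.arange(k), n_per)

        selected, remaining, best_ari = [], list(range(X.shape[1])), -1.0
        while remaining:
            trial = {j: adjusted_rand_score(
                            truth,
                            KMeans(n_clusters=k, n_init=10, random_state=0)
                            .fit_predict(X[:, selected + [j]]))
                     for j in remaining}
            j, ari = max(trial.items(), key=lambda kv: kv[1])
            if ari <= best_ari:                                # stop when recovery stops improving
                break
            selected.append(j)
            remaining.remove(j)
            best_ari = ari

        print("selected variables:", selected, "adjusted Rand index:", round(best_ari, 3))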

  20. HEART RATE VARIABILITY CLASSIFICATION USING SADE-ELM CLASSIFIER WITH BAT FEATURE SELECTION

    Directory of Open Access Journals (Sweden)

    R Kavitha

    2017-07-01

    Full Text Available The electrical activity of the human heart is measured by the ECG, a vital biomedical signal. The electrocardiogram is a crucial source of diagnostic information on a patient’s cardiopathy, and cardiac disease is monitored and diagnosed by recording and processing ECG impulses. In recent years much research has been devoted to developing enhanced methods for identifying risk in a patient’s condition by processing and analysing the ECG signal. Such analysis helps to detect cardiac abnormalities, arrhythmias, and many other heart problems. The ECG signal is processed to detect variability in heart rhythm: heart rate variability (HRV) is calculated from the time interval between heart beats, i.e., the variation in the beat-to-beat interval, and is an essential aspect for diagnosing the properties of the heart. Recent developments enhance this potential with the aid of non-linear metrics combined with feature selection. In this paper, fundamental elements are extracted from the ECG signal and the Bat algorithm is employed for feature selection to identify the best features, which are then presented to the classifier for accurate classification. The popular machine learning algorithm ELM is used for classification, integrated with an evolutionary algorithm to form the Self-Adaptive Differential Evolution Extreme Learning Machine (SADE-ELM) and improve the reliability of classification. It is combined with an Effective Fuzzy Kohonen Clustering Network (EFKCN) to increase the accuracy of HRV classification. The experiments show that precision is improved by the SADE-ELM method while computation time is simultaneously optimized.
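
    As background to the features mentioned above, a minimal Python sketch of standard time-domain HRV statistics computed from beat-to-beat (RR) intervals; these are generic textbook definitions, not the exact feature set chosen by the Bat algorithm in the paper:

        import numpy as np

        rr_ms = np.array([812, 798, 830, 845, 805, 790, 825, 860, 815, 800], dtype=float)

        sdnn = rr_ms.std(ddof=1)                       # overall beat-to-beat variability
        diffs = np.diff(rr_ms)
        rmssd = np.sqrt(np.mean(diffs ** 2))           # short-term variability
        pnn50 = np.mean(np.abs(diffs) > 50) * 100      # % of successive differences > 50 ms

        print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, pNN50 = {pnn50:.0f}%")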

  1. A survey of variable selection methods in two Chinese epidemiology journals

    Directory of Open Access Journals (Sweden)

    Lynn Henry S

    2010-09-01

    Full Text Available Abstract Background Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis; e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals.

  2. Using exogenous variables in testing for monotonic trends in hydrologic time series

    Science.gov (United States)

    Alley, William M.

    1988-01-01

    One approach that has been used in performing a nonparametric test for monotonic trend in a hydrologic time series consists of a two-stage analysis. First, a regression equation is estimated for the variable being tested as a function of an exogenous variable. A nonparametric trend test such as the Kendall test is then performed on the residuals from the equation. By analogy to stagewise regression and through Monte Carlo experiments, it is demonstrated that this approach will tend to underestimate the magnitude of the trend and to result in some loss in power as a result of ignoring the interaction between the exogenous variable and time. An alternative approach, referred to as the adjusted variable Kendall test, is demonstrated to generally have increased statistical power and to provide more reliable estimates of the trend slope. In addition, the utility of including an exogenous variable in a trend test is examined under selected conditions.
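
    The two-stage procedure discussed above can be sketched as follows (a hedged illustration on simulated data; the paper's preferred adjusted variable Kendall test additionally adjusts time for the exogenous variable, which is not shown here):

        import numpy as np
        from scipy.stats import kendalltau

        rng = np.random.default_rng(6)
        t = np.arange(40)                              # e.g., years
        exog = rng.normal(size=40)                     # exogenous variable, e.g., a climate index
        y = 0.05 * t + 0.8 * exog + rng.normal(scale=0.5, size=40)

        slope, intercept = np.polyfit(exog, y, 1)      # stage 1: regress out the exogenous variable
        residuals = y - (intercept + slope * exog)

        tau, p_value = kendalltau(t, residuals)        # stage 2: Kendall trend test on the residuals
        print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")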

  3. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  4. Selected Macroeconomic Variables and Stock Market Movements: Empirical evidence from Thailand

    Directory of Open Access Journals (Sweden)

    Joseph Ato Forson

    2014-06-01

    Full Text Available This paper investigates and analyzes the long-run equilibrium relationship between the Thai stock Exchange Index (SETI) and selected macroeconomic variables using monthly time series data that cover a 20-year period from January 1990 to December 2009. The following macroeconomic variables are included in our analysis: money supply (MS), the consumer price index (CPI), interest rate (IR) and the industrial production index (IP) (as a proxy for GDP). Our findings prove that the SET Index and the selected macroeconomic variables are cointegrated at I(1) and have a significant equilibrium relationship over the long run. Money supply demonstrates a strong positive relationship with the SET Index over the long run, whereas the industrial production index and consumer price index show negative long-run relationships with the SET Index. Furthermore, in non-equilibrium situations, the error correction mechanism suggests that the consumer price index, industrial production index and money supply each contribute in some way to restore equilibrium. In addition, using Toda and Yamamoto’s augmented Granger causality test, we identify a bi-causal relationship between industrial production and money supply and unilateral causal relationships between CPI and IR, IP and CPI, MS and CPI, and IP and SETI, indicating that all of these variables are sensitive to Thai stock market movements. The policy implications of these findings are also discussed.
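
    As a hedged illustration of the kind of cointegration check described above (the study relies on Johansen cointegration and an error-correction model; the simpler Engle-Granger test on simulated series is shown instead):

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(7)
        n = 240                                        # e.g., 20 years of monthly observations
        money_supply = np.cumsum(rng.normal(size=n))   # simulated I(1) series
        index = 0.9 * money_supply + rng.normal(scale=0.5, size=n)   # cointegrated with it

        t_stat, p_value, _ = coint(index, money_supply)
        print(f"Engle-Granger t-statistic = {t_stat:.2f}, p-value = {p_value:.3f}")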

  5. Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection

    KAUST Repository

    Chen, Lisha

    2012-12-01

    The reduced-rank regression is an effective method in predicting multiple response variables from the same set of predictor variables. It reduces the number of model parameters and takes advantage of interrelations between the response variables and hence improves predictive accuracy. We propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty. We apply a group-lasso type penalty that treats each row of the matrix of the regression coefficients as a group and show that this penalty satisfies certain desirable invariance properties. We develop two numerical algorithms to solve the penalized regression problem and establish the asymptotic consistency of the proposed method. In particular, the manifold structure of the reduced-rank regression coefficient matrix is considered and studied in our theoretical analysis. In our simulation study and real data analysis, the new method is compared with several existing variable selection methods for multivariate regression and exhibits competitive performance in prediction and variable selection. © 2012 American Statistical Association.

  6. Adaptive and Selective Time Averaging of Auditory Scenes

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; McDermott, Josh H.

    2018-01-01

    To overcome variability, estimate scene characteristics, and compress sensory input, perceptual systems pool data into statistical summaries. Despite growing evidence for statistical representations in perception, the underlying mechanisms remain poorly understood. One example is sound texture perception, which is thought to rely on statistics averaged over time. The integration times we observed were longer than previously reported integration times in the auditory system. Integration also showed signs of being restricted to sound elements attributed to a common source. The results suggest an integration process that depends on stimulus characteristics, integrating over longer extents when it benefits statistical estimation of variable signals and selectively integrating stimulus components likely to have a common cause in the world. Our methodology could be naturally extended to examine statistical representations of other types of sensory signals.

  7. Locating disease genes using Bayesian variable selection with the Haseman-Elston method

    Directory of Open Access Journals (Sweden)

    He Qimei

    2003-12-01

    Full Text Available Abstract Background We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures, which incorporate the relationship among the predictors. This allows SSVS to search in the model space more efficiently and avoid the less likely models. Results In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it does a smart search over the entire model space.

  8. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that when a theory model specifying the correct set of m relevant exogenous variables, x{t}, is embedded within a larger set of m+k candidate variables, (x{t},w{t}), selection over the second set by statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant.

  9. Additive measures of travel time variability

    DEFF Research Database (Denmark)

    Engelson, Leonid; Fosgerau, Mogens

    2011-01-01

    This paper derives a measure of travel time variability for travellers equipped with scheduling preferences defined in terms of time-varying utility rates, and who choose departure time optimally. The corresponding value of travel time variability is a constant that depends only on preference...... parameters. The measure is unique in being additive with respect to independent parts of a trip. It has the variance of travel time as a special case. Extension is provided to the case of travellers who use a scheduled service with fixed headway....

  10. Important variables in explaining real-time peak price in the independent power market of Ontario

    International Nuclear Information System (INIS)

    Rueda, I.E.A.; Marathe, A.

    2005-01-01

    This paper uses a support vector machine (SVM) based learning algorithm to select important variables that help explain the real-time peak electricity price in the Ontario market. The Ontario market was opened to competition only in May 2002. Due to the limited number of observations available, finding a set of variables that can explain the independent power market of Ontario (IMO) real-time peak price is a significant challenge for the traders and analysts. The kernel regressions of the explanatory variables on the IMO real-time average peak price show that non-linear dependencies exist between the explanatory variables and the IMO price. This non-linear relationship, combined with the low variable-observation ratio, rules out conventional statistical analysis. Hence, we use an alternative machine learning technique to find the important explanatory variables for the IMO real-time average peak price. Results based on SVM sensitivity analysis indicate that the IMO's predispatch average peak price, the actual import peak volume, the peak load of the Ontario market and the net available supply after accounting for load (energy excess) are some of the most important variables in explaining the real-time average peak price in the Ontario electricity market. (author)
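
    A hedged Python sketch of ranking candidate explanatory variables with a support-vector regressor; permutation importance is used here as a simple stand-in for the paper's SVM sensitivity analysis, and the variable names are illustrative rather than the actual IMO data:

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.inspection import permutation_importance
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(8)
        names = ["predispatch_price", "import_volume", "peak_load", "energy_excess", "noise"]
        X = rng.normal(size=(300, len(names)))
        y = 1.2 * X[:, 0] + 0.6 * X[:, 2] - 0.8 * X[:, 3] + rng.normal(scale=0.3, size=300)

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X, y)
        imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
        for name, score in sorted(zip(names, imp.importances_mean), key=lambda t: -t[1]):
            print(f"{name:20s} {score:.3f}")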

  11. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    International Nuclear Information System (INIS)

    Balabin, Roman M.; Smirnov, Sergey V.

    2011-01-01

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm-1) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic

  12. Meta-Statistics for Variable Selection: The R Package BioMark

    Directory of Open Access Journals (Sweden)

    Ron Wehrens

    2012-11-01

    Full Text Available Biomarker identification is an ever more important topic in the life sciences. With the advent of measurement methodologies based on microarrays and mass spectrometry, thousands of variables are routinely being measured on complex biological samples. Often, the question is what makes two groups of samples different. Classical hypothesis testing suffers from the multiple testing problem; however, correcting for this often leads to a lack of power. In addition, choosing α cutoff levels remains somewhat arbitrary. Also in a regression context, a model depending on few but relevant variables will be more accurate and precise, and easier to interpret biologically. We propose an R package, BioMark, implementing two meta-statistics for variable selection. The first, higher criticism, presents a data-dependent selection threshold for significance, instead of a cookbook value of α = 0.05. It is applicable in all cases where two groups are compared. The second, stability selection, is more general, and can also be applied in a regression context. This approach uses repeated subsampling of the data in order to assess the variability of the model coefficients and selects those that remain consistently important. It is shown using experimental spike-in data from the field of metabolomics that both approaches work well with real data. BioMark also contains functionality for simulating data with specific characteristics for algorithm development and testing.

  13. Variability of Cost and Time Delivery of Educational Buildings in Nigeria

    Directory of Open Access Journals (Sweden)

    Aghimien, Douglas Omoregie

    2017-09-01

    Full Text Available Cost and time overruns in construction projects have become a recurring problem in construction industries around the world, especially in developing countries. This situation is particularly unhealthy for public educational buildings, which are executed with limited government funds and are in most cases time sensitive, as they need to cater for the influx of students into the institutions. This study therefore assessed the variability of cost and time delivery of educational buildings in Nigeria, using a study of selected educational buildings within the country. A pro forma was used to gather cost and time data on selected building projects, while a structured questionnaire was used to gather information on possible measures for reducing the variability from the construction participants involved in delivering these projects. Paired-sample t-test, percentage, relative importance index, and Kruskal-Wallis test were adopted for data analyses. The study reveals a significant difference between the initial and final cost of delivering educational buildings, with an average deviation of 4.87% (sig. p-value of 0.000) observed across all assessed projects. For time delivery, there is also a significant difference between the initial estimated time and the final construction time, with an average deviation of 130% (sig. p-value of 0.000). To remedy these problems, the study identified prompt payment for executed works, predicting market price fluctuation and incorporating it into the initial estimate, and owner's involvement at the planning and design phase as some of the possible measures to be adopted.

  14. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction. (paper)

  15. BAYESIAN TECHNIQUES FOR COMPARING TIME-DEPENDENT GRMHD SIMULATIONS TO VARIABLE EVENT HORIZON TELESCOPE OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios, E-mail: junhankim@email.arizona.edu [Department of Astronomy and Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States)

    2016-12-01

    The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.

  16. Commuters’ valuation of travel time variability in Barcelona

    OpenAIRE

    Javier Asensio; Anna Matas

    2007-01-01

    The value given by commuters to the variability of travel times is empirically analysed using stated preference data from Barcelona (Spain). Respondents are asked to choose between alternatives that differ in terms of cost, average travel time, variability of travel times and departure time. Different specifications of a scheduling choice model are used to measure the influence of various socioeconomic characteristics. Our results show that travel time variability.

  17. Electrical Activity in a Time-Delay Four-Variable Neuron Model under Electromagnetic Induction

    Directory of Open Access Journals (Sweden)

    Keming Tang

    2017-11-01

    Full Text Available To investigate the effect of electromagnetic induction on the electrical activity of a neuron, a variable for magnetic flux is used to improve the Hindmarsh–Rose neuron model. At the same time, because time-delay exists when signals propagate between neurons or even within one neuron, it is important to study the role of time-delay in regulating the electrical activity of the neuron. To this end, a four-variable neuron model is proposed to investigate the effects of electromagnetic induction and time-delay. Simulation results suggest that the proposed neuron model can show multiple modes of electrical activity, depending on the time-delay and the external forcing current. This means that a suitable discharge mode can be obtained by selecting the time-delay or the external forcing current, which could be helpful for further investigation of the effects of electromagnetic radiation on biological neuronal systems.

  18. Spatial and temporal variability of interhemispheric transport times

    Science.gov (United States)

    Wu, Xiaokang; Yang, Huang; Waugh, Darryn W.; Orbe, Clara; Tilmes, Simone; Lamarque, Jean-Francois

    2018-05-01

    The seasonal and interannual variability of transport times from the northern midlatitude surface into the Southern Hemisphere is examined using simulations of three idealized age tracers: an ideal age tracer that yields the mean transit time from northern midlatitudes and two tracers with uniform 50- and 5-day decay. For all tracers the largest seasonal and interannual variability occurs near the surface within the tropics and is generally closely coupled to movement of the Intertropical Convergence Zone (ITCZ). There are, however, notable differences in variability between the different tracers. The largest seasonal and interannual variability in the mean age is generally confined to latitudes spanning the ITCZ, with very weak variability in the southern extratropics. In contrast, for tracers subject to spatially uniform exponential loss the peak variability tends to be south of the ITCZ, and there is a smaller contrast between tropical and extratropical variability. These differences in variability occur because the distribution of transit times from northern midlatitudes is very broad and tracers with more rapid loss are more sensitive to changes in fast transit times than the mean age tracer. These simulations suggest that the seasonal-interannual variability in the southern extratropics of trace gases with predominantly NH midlatitude sources may differ depending on the gases' chemical lifetimes.

  19. Travel time variability and rational inattention

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Jiang, Gege

    2017-01-01

    This paper sets up a rational inattention model for the choice of departure time for a traveler facing random travel time. The traveler chooses how much information to acquire about the travel time outcome before choosing departure time. This reduces the cost of travel time variability compared...

  20. Portfolio Selection Based on Distance between Fuzzy Variables

    Directory of Open Access Journals (Sweden)

    Weiyi Qian

    2014-01-01

    Full Text Available This paper studies the portfolio selection problem in a fuzzy environment. We introduce a new simple method in which the distance between fuzzy variables is used to measure the divergence of the fuzzy investment return from a prior one. Firstly, two new mathematical models are proposed by expressing divergence as distance, investment return as expected value, and risk as variance and semivariance, respectively. Secondly, the crisp forms of the new models are also provided for different types of fuzzy variables. Finally, several numerical examples are given to illustrate the effectiveness of the proposed approach.

  1. Selecting minimum dataset soil variables using PLSR as a regressive multivariate method

    Science.gov (United States)

    Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.

    2017-04-01

    Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step for the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection to extract relevant information deriving from multivariate datasets. Principal component analysis (PCA) has been mainly used, however supervised multivariate methods and regressive techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least square regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean centered and variance scaled data of predictors (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance for projection (VIP
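
    The VIP criterion mentioned above can be computed from any fitted PLS regression. The sketch below, on synthetic data, shows one common way to do this in Python (not the authors' SAS/STAT procedure); variables with VIP > 1 are typically retained for the minimum dataset.

```python
# Hedged sketch: fit a PLS regression (predictors -> yield) and compute variable
# importance in projection (VIP) scores, a common criterion for retaining
# variables. Data here are synthetic stand-ins for scaled soil variables.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 12))                              # e.g. scaled soil variables
y = X[:, 0] + 0.5 * X[:, 4] + 0.1 * rng.normal(size=60)    # e.g. wheat yield

pls = PLSRegression(n_components=3, scale=True).fit(X, y)

def vip_scores(pls_model):
    t = pls_model.x_scores_          # (n, A) latent scores
    w = pls_model.x_weights_         # (p, A) loading weights
    q = pls_model.y_loadings_        # (1, A) y-loadings
    p, A = w.shape
    # Sum of squares of y explained by each component.
    ssy = np.array([(t[:, a] ** 2).sum() * (q[:, a] ** 2).sum() for a in range(A)])
    wnorm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (wnorm ** 2 @ ssy) / ssy.sum())

vip = vip_scores(pls)
print("variables with VIP > 1:", np.where(vip > 1.0)[0])
```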

  2. A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…
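
    The following sketch illustrates the general idea on synthetic data: each variable is rescaled by a variance-to-range-type factor before running K-means, so that wide-range but uninformative columns do not dominate the clustering. The exact weighting formula of the proposed procedure may differ; this is only a stand-in.

```python
# Rough sketch of weighting variables before k-means with a variance-to-range
# style factor. The exact weighting of Steinley & Brusco may differ; this only
# illustrates pre-scaling each column and then clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])
X[:, 3] *= 50                                    # a noisy, wide-range column

rng_span = X.max(axis=0) - X.min(axis=0)
weights = X.var(axis=0) / rng_span ** 2          # variance-to-(squared-)range ratio
Xw = (X - X.mean(axis=0)) * np.sqrt(weights)     # apply weights to centred data

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xw)
print("cluster sizes:", np.bincount(labels))
```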

  3. Punishment induced behavioural and neurophysiological variability reveals dopamine-dependent selection of kinematic movement parameters

    Science.gov (United States)

    Galea, Joseph M.; Ruge, Diane; Buijink, Arthur; Bestmann, Sven; Rothwell, John C.

    2013-01-01

    Action selection describes the high-level process which selects between competing movements. In animals, behavioural variability is critical for the motor exploration required to select the action which optimizes reward and minimizes cost/punishment, and is guided by dopamine (DA). The aim of this study was to test in humans whether low-level movement parameters are affected by punishment and reward in ways similar to high-level action selection. Moreover, we addressed the proposed dependence of behavioural and neurophysiological variability on DA, and whether this may underpin the exploration of kinematic parameters. Participants performed an out-and-back index finger movement and were instructed that monetary reward and punishment were based on its maximal acceleration (MA). In fact, the feedback was not contingent on the participant’s behaviour but pre-determined. Blocks highly-biased towards punishment were associated with increased MA variability relative to blocks with either reward or without feedback. This increase in behavioural variability was positively correlated with neurophysiological variability, as measured by changes in cortico-spinal excitability with transcranial magnetic stimulation over the primary motor cortex. Following the administration of a DA-antagonist, the variability associated with punishment diminished and the correlation between behavioural and neurophysiological variability no longer existed. Similar changes in variability were not observed when participants executed a pre-determined MA, nor did DA influence resting neurophysiological variability. Thus, under conditions of punishment, DA-dependent processes influence the selection of low-level movement parameters. We propose that the enhanced behavioural variability reflects the exploration of kinematic parameters for less punishing, or conversely more rewarding, outcomes. PMID:23447607

  4. Curve fitting and modeling with splines using statistical variable selection techniques

    Science.gov (United States)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  5. Norepinephrine genes predict response time variability and methylphenidate-induced changes in neuropsychological function in attention deficit hyperactivity disorder.

    Science.gov (United States)

    Kim, Bung-Nyun; Kim, Jae-Won; Cummins, Tarrant D R; Bellgrove, Mark A; Hawi, Ziarih; Hong, Soon-Beom; Yang, Young-Hui; Kim, Hyo-Jin; Shin, Min-Sup; Cho, Soo-Churl; Kim, Ji-Hoon; Son, Jung-Woo; Shin, Yun-Mi; Chung, Un-Sun; Han, Doug-Hyun

    2013-06-01

    Noradrenergic dysfunction may be associated with cognitive impairments in attention-deficit/hyperactivity disorder (ADHD), including increased response time variability, which has been proposed as a leading endophenotype for ADHD. The aim of this study was to examine the relationship between polymorphisms in the α-2A-adrenergic receptor (ADRA2A) and norepinephrine transporter (SLC6A2) genes and attentional performance in ADHD children before and after pharmacological treatment. One hundred one medication-naive ADHD children were included. All subjects were administered methylphenidate (MPH)-OROS for 12 weeks. The subjects underwent a computerized comprehensive attention test to measure the response time variability at baseline before MPH treatment and after 12 weeks. Additive regression analyses controlling for ADHD symptom severity, age, sex, IQ, and final dose of MPH examined the association between response time variability on the comprehensive attention test measures and allelic variations in single-nucleotide polymorphisms of the ADRA2A and SLC6A2 before and after MPH treatment. Increasing possession of an A allele at the G1287A polymorphism of SLC6A2 was significantly related to heightened response time variability at baseline in the sustained (P = 2.0 × 10) and auditory selective attention (P = 1.0 × 10) tasks. Response time variability at baseline increased additively with possession of the T allele at the DraI polymorphism of the ADRA2A gene in the auditory selective attention task (P = 2.0 × 10). After medication, increasing possession of a G allele at the MspI polymorphism of the ADRA2A gene was associated with increased MPH-related change in response time variability in the flanker task (P = 1.0 × 10). Our study suggested an association between norepinephrine gene variants and response time variability measured at baseline and after MPH treatment in children with ADHD. Our results add to a growing body of evidence, suggesting that response time

  6. The use of vector bootstrapping to improve variable selection precision in Lasso models

    NARCIS (Netherlands)

    Laurin, C.; Boomsma, D.I.; Lubke, G.H.

    2016-01-01

    The Lasso is a shrinkage regression method that is widely used for variable selection in statistical genetics. Commonly, K-fold cross-validation is used to fit a Lasso model. This is sometimes followed by using bootstrap confidence intervals to improve precision in the resulting variable selections.
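
    A minimal sketch of the workflow the abstract refers to is shown below: the penalty is chosen by K-fold cross-validation and an ordinary case-resampling bootstrap records how often each variable is selected (the paper's vector bootstrapping refinement is not reproduced).

```python
# Sketch of the common workflow: fit a Lasso with K-fold cross-validation, then
# bootstrap the data to see how often each variable is selected. Plain case
# resampling on synthetic data, not the paper's vector bootstrap.
import numpy as np
from sklearn.linear_model import LassoCV, Lasso

rng = np.random.default_rng(3)
n, p = 200, 20
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=n)

alpha = LassoCV(cv=10, random_state=0).fit(X, y).alpha_   # K-fold CV for the penalty

B = 200
freq = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, n)                           # bootstrap resample
    coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
    freq += (coef != 0)
print("selection frequency per variable:", freq / B)
```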

  7. PLS-based and regularization-based methods for the selection of relevant variables in non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Renata Bujak

    2016-07-01

    Full Text Available Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of variables which contribute to group classification is a crucial step, especially in metabolomics studies which are focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (RH and PH study). Orthogonal projections to latent structures-discriminant analysis (OPLS-DA) without and with multiple testing correction as well as the least absolute shrinkage and selection operator (LASSO) were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on VIP criteria using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on VIP criteria using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant in terms of Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on VIP criteria in terms of Pareto and UV scaling, respectively. Additionally, the concept and fundamentals of the least absolute shrinkage and selection operator (LASSO) with a bootstrap procedure evaluating the reproducibility of results were demonstrated. In the RH and PH study, the LASSO selected 14 and 4 variables with reproducibility between 99.3% and 100%. However, apart from the popularity of PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they do not control type I or type II error, but only arbitrarily establish a cut-off value for PLS-DA loadings

  8. Effects of spring temperatures on the strength of selection on timing of reproduction in a long-distance migratory bird

    NARCIS (Netherlands)

    Visser, Marcel E; Gienapp, Phillip; Husby, Arild; Morrisey, Michael; de la Hera, Iván; Pulido, Francisco; Both, Christiaan

    Climate change has differentially affected the timing of seasonal events for interacting trophic levels, and this has often led to increased selection on seasonal timing. Yet, the environmental variables driving this selection have rarely been identified, limiting our ability to predict future

  9. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    Science.gov (United States)

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  10. Time perception, attention, and memory: a selective review.

    Science.gov (United States)

    Block, Richard A; Gruber, Ronald P

    2014-06-01

    This article provides a selective review of time perception research, mainly focusing on the authors' research. Aspects of psychological time include simultaneity, successiveness, temporal order, and duration judgments. In contrast to findings at interstimulus intervals or durations less than 3.0-5.0 s, there is little evidence for an "across-senses" effect of perceptual modality (visual vs. auditory) at longer intervals or durations. In addition, the flow of time (events) is a pervasive perceptual illusion, and we review evidence on that. Some temporal information is encoded relatively automatically into memory: People can judge time-related attributes such as recency, frequency, temporal order, and duration of events. Duration judgments in prospective and retrospective paradigms reveal differences between them, as well as variables that moderate the processes involved. An attentional-gate model is needed to account for prospective judgments, and a contextual-change model is needed to account for retrospective judgments. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Ethnic variability in adiposity and cardiovascular risk: the variable disease selection hypothesis.

    Science.gov (United States)

    Wells, Jonathan C K

    2009-02-01

    Evidence increasingly suggests that ethnic differences in cardiovascular risk are partly mediated by adipose tissue biology, which refers to the regional distribution of adipose tissue and its differential metabolic activity. This paper proposes a novel evolutionary hypothesis for ethnic genetic variability in adipose tissue biology. Whereas medical interest focuses on the harmful effect of excess fat, the value of adipose tissue is greatest during chronic energy insufficiency. Following Neel's influential paper on the thrifty genotype, proposed to have been favoured by exposure to cycles of feast and famine, much effort has been devoted to searching for genetic markers of 'thrifty metabolism'. However, whether famine-induced starvation was the primary selective pressure on adipose tissue biology has been questioned, while the notion that fat primarily represents a buffer against starvation appears inconsistent with historical records of mortality during famines. This paper reviews evidence for the role played by adipose tissue in immune function and proposes that adipose tissue biology responds to selective pressures acting through infectious disease. Different diseases activate the immune system in different ways and induce different metabolic costs. It is hypothesized that exposure to different infectious disease burdens has favoured ethnic genetic variability in the anatomical location of, and metabolic profile of, adipose tissue depots.

  12. Cooperative Orthogonal Space-Time-Frequency Block Codes over a MIMO-OFDM Frequency Selective Channel

    Directory of Open Access Journals (Sweden)

    M. Rezaei

    2016-03-01

    Full Text Available In this paper, a cooperative algorithm to improve orthogonal space-time-frequency block codes (OSTFBC) in frequency-selective channels for 2*1, 2*2, 4*1 and 4*2 MIMO-OFDM systems is presented. The algorithm involves three nodes, a source node, a relay node and a destination node, and is implemented in two stages. During the first stage, the destination and relay antennas receive the symbols sent by the source antennas. The destination node and the relay node obtain the decision variables by applying a space-time-frequency decoding process to the received signals. During the second stage, the relay node transmits its decision variables to the destination node. Due to the increased diversity in the proposed algorithm, the decision variables at the destination node are increased, improving system performance. The bit error rate of the proposed algorithm at high SNR is estimated assuming BPSK modulation. The simulation results show that cooperative orthogonal space-time-frequency block coding improves system performance and reduces the BER in a frequency-selective channel.

  13. Random forest variable selection in spatial malaria transmission modelling in Mpumalanga Province, South Africa

    Directory of Open Access Journals (Sweden)

    Thandi Kapwata

    2016-11-01

    Full Text Available Malaria is an environmentally driven disease. In order to quantify the spatial variability of malaria transmission, it is imperative to understand the interactions between environmental variables and malaria epidemiology at a micro-geographic level using a novel statistical approach. The random forest (RF) statistical learning method, a relatively new variable-importance ranking method, measures the variable importance of potentially influential parameters through the percent increase of the mean squared error. As this value increases, so does the relative importance of the associated variable. The principal aim of this study was to create predictive malaria maps generated using the selected variables based on the RF algorithm in the Ehlanzeni District of Mpumalanga Province, South Africa. From the seven environmental variables used [temperature, lag temperature, rainfall, lag rainfall, humidity, altitude, and the normalized difference vegetation index (NDVI)], altitude was identified as the most influential predictor variable due to its high selection frequency. It was selected as the top predictor for 4 out of 12 months of the year, followed by NDVI, temperature and lag rainfall, which were each selected twice. The combination of climatic variables that produced the highest prediction accuracy was altitude, NDVI, and temperature. This suggests that these three variables have high predictive capabilities in relation to malaria transmission. Furthermore, it is anticipated that the predictive maps generated from predictions made by the RF algorithm could be used to monitor the progression of malaria and assist in intervention and prevention efforts with respect to malaria.
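
    In the spirit of the importance ranking described above, the sketch below fits a random forest to synthetic stand-ins for the environmental variables and ranks them by permutation importance (scikit-learn's analogue of the percent increase in mean squared error).

```python
# Minimal sketch of ranking predictors by random-forest importance. Uses
# scikit-learn's permutation importance on synthetic stand-ins for the
# environmental variables named in the abstract.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
cols = ["temperature", "lag_temperature", "rainfall", "lag_rainfall",
        "humidity", "altitude", "ndvi"]
X = pd.DataFrame(rng.normal(size=(300, len(cols))), columns=cols)
y = -1.2 * X["altitude"] + 0.6 * X["ndvi"] + 0.3 * X["temperature"] \
    + 0.2 * rng.normal(size=300)                      # synthetic malaria index

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
ranking = sorted(zip(cols, imp.importances_mean), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name:16s} {score:.3f}")
```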

  14. Uninformative variable elimination assisted by Gram-Schmidt Orthogonalization/successive projection algorithm for descriptor selection in QSAR

    DEFF Research Database (Denmark)

    Omidikia, Nematollah; Kompany-Zareh, Mohsen

    2013-01-01

    Employment of Uninformative Variable Elimination (UVE) as a robust variable selection method is reported in this study. Each regression coefficient represents the contribution of the corresponding variable in the established model, but in the presence of uninformative variables as well as collinearity the reliability of the regression coefficient's magnitude is suspicious. Successive Projection Algorithm (SPA) and Gram-Schmidt Orthogonalization (GSO) were implemented as pre-selection techniques for removing collinearity and redundancy among the variables in the model.

  15. Time-to-code converter with selection of time intervals on duration

    International Nuclear Information System (INIS)

    Atanasov, I.Kh.; Rusanov, I.R.; )

    2001-01-01

    Identification of elementary particles on the basis of time-of-flight is an important approach in preliminary selection procedures. The paper describes a time-to-code converter with preliminary selection of the measured time intervals by duration. It consists of a time-to-amplitude converter, an analog-to-digital converter, a unit for selecting time intervals by duration, a total-reset unit and a CAMAC command decoder. The converter measures time intervals with 100 ns accuracy within a 0-100 ns range. The output code capacity is 10. The selection time is 50 ns

  16. Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model

    Directory of Open Access Journals (Sweden)

    Ge-Jin Chu

    2014-01-01

    Full Text Available Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) penalties, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods.

  17. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in

  18. Prewhitening of hydroclimatic time series? Implications for inferred change and variability across time scales

    Science.gov (United States)

    Razavi, Saman; Vogel, Richard

    2018-02-01

    Prewhitening, the process of eliminating or reducing short-term stochastic persistence to enable detection of deterministic change, has been extensively applied to time series analysis of a range of geophysical variables. Despite the controversy around its utility, methodologies for prewhitening time series continue to be a critical feature of a variety of analyses including: trend detection of hydroclimatic variables and reconstruction of climate and/or hydrology through proxy records such as tree rings. With a focus on the latter, this paper presents a generalized approach to exploring the impact of a wide range of stochastic structures of short- and long-term persistence on the variability of hydroclimatic time series. Through this approach, we examine the impact of prewhitening on the inferred variability of time series across time scales. We document how a focus on prewhitened, residual time series can be misleading, as it can drastically distort (or remove) the structure of variability across time scales. Through examples with actual data, we show how such loss of information in prewhitened time series of tree rings (so-called "residual chronologies") can lead to the underestimation of extreme conditions in climate and hydrology, particularly droughts, reconstructed for centuries preceding the historical period.
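
    The point about distorted variability can be illustrated with a few lines of code: prewhitening a persistent AR(1) series with its estimated lag-1 autocorrelation noticeably shrinks the variance of its long-window means, i.e. exactly the low-frequency variability that matters for reconstructed droughts. The parameters below are arbitrary.

```python
# Small illustration: prewhitening an AR(1)-persistent series
# (r_t = x_t - phi * x_{t-1}) strips out much of its low-frequency variance, so
# multi-decadal swings look weaker in the residual series than in the original.
import numpy as np

rng = np.random.default_rng(5)
phi_true, n = 0.7, 2000
x = np.zeros(n)
for t in range(1, n):                       # simulate a persistent series
    x[t] = phi_true * x[t - 1] + rng.normal()

phi_hat = np.corrcoef(x[1:], x[:-1])[0, 1]  # estimated lag-1 autocorrelation
resid = x[1:] - phi_hat * x[:-1]            # prewhitened (residual) series

def long_window_sd(series, window=50):
    m = len(series) // window
    return np.std(series[: m * window].reshape(m, window).mean(axis=1))

print("SD of 50-step means, original:   ", round(long_window_sd(x), 3))
print("SD of 50-step means, prewhitened:", round(long_window_sd(resid), 3))
```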

  19. A QSAR Study of Environmental Estrogens Based on a Novel Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Aiqian Zhang

    2012-05-01

    Full Text Available A large number of descriptors were employed to characterize the molecular structure of 53 natural, synthetic, and environmental chemicals which are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones and may thus pose a serious threat to the health of humans and wildlife. In this work, a robust quantitative structure-activity relationship (QSAR) model with a novel variable selection method has been proposed for the effective estrogens. The variable selection method is based on variable interaction (VSMVI) with leave-multiple-out cross validation (LMOCV) to select the best subset. During variable selection, model construction and assessment, the Organization for Economic Co-operation and Development (OECD) principles for regulation of QSAR acceptability were fully considered, such as using an unambiguous multiple-linear regression (MLR) algorithm to build the model, using several validation methods to assess the performance of the model, defining the applicability domain and analyzing the outliers with the results of molecular docking. The performance of the QSAR model indicates that the VSMVI is an effective, feasible and practical tool for rapid screening of the best subset from a large set of molecular descriptors.

  20. Travel time variability and airport accessibility

    NARCIS (Netherlands)

    Koster, P.R.; Kroes, E.P.; Verhoef, E.T.

    2011-01-01

    We analyze the cost of access travel time variability for air travelers. Reliable access to airports is important since the cost of missing a flight is likely to be high. First, the determinants of the preferred arrival times at airports are analyzed. Second, the willingness to pay (WTP) for

  1. A model for AGN variability on multiple time-scales

    Science.gov (United States)

    Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

    2018-05-01

    We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.

  2. Social variables exert selective pressures in the evolution and form of primate mimetic musculature.

    Science.gov (United States)

    Burrows, Anne M; Li, Ly; Waller, Bridget M; Micheletta, Jerome

    2016-04-01

    Mammals use their faces in social interactions more so than any other vertebrates. Primates are an extreme among most mammals in their complex, direct, lifelong social interactions and their frequent use of facial displays is a means of proximate visual communication with conspecifics. The available repertoire of facial displays is primarily controlled by mimetic musculature, the muscles that move the face. The form of these muscles is, in turn, limited by and influenced by phylogenetic inertia but here we use examples, both morphological and physiological, to illustrate the influence that social variables may exert on the evolution and form of mimetic musculature among primates. Ecomorphology is concerned with the adaptive responses of morphology to various ecological variables such as diet, foliage density, predation pressures, and time of day activity. We present evidence that social variables also exert selective pressures on morphology, specifically using mimetic muscles among primates as an example. Social variables include group size, dominance 'style', and mating systems. We present two case studies to illustrate the potential influence of social behavior on adaptive morphology of mimetic musculature in primates: (1) gross morphology of the mimetic muscles around the external ear in closely related species of macaque (Macaca mulatta and Macaca nigra) characterized by varying dominance styles and (2) comparative physiology of the orbicularis oris muscle among select ape species. This muscle is used in both facial displays/expressions and in vocalizations/human speech. We present qualitative observations of myosin fiber-type distribution in this muscle of siamang (Symphalangus syndactylus), chimpanzee (Pan troglodytes), and human to demonstrate the potential influence of visual and auditory communication on muscle physiology. In sum, ecomorphologists should be aware of social selective pressures as well as ecological ones, and that observed morphology might

  3. Surgeon and type of anesthesia predict variability in surgical procedure times.

    Science.gov (United States)

    Strum, D P; Sampson, A R; May, J H; Vargas, L G

    2000-05-01

    Variability in surgical procedure times increases the cost of healthcare delivery by increasing both the underutilization and overutilization of expensive surgical resources. To reduce variability in surgical procedure times, we must identify and study its sources. Our data set consisted of all surgeries performed over a 7-yr period at a large teaching hospital, resulting in 46,322 surgical cases. To study factors associated with variability in surgical procedure times, data mining techniques were used to segment and focus the data so that the analyses would be both technically and intellectually feasible. The data were subdivided into 40 representative segments of manageable size and variability based on headers adopted from the common procedural terminology classification. Each data segment was then analyzed using a main-effects linear model to identify and quantify specific sources of variability in surgical procedure times. The single most important source of variability in surgical procedure times was surgeon effect. Type of anesthesia, age, gender, and American Society of Anesthesiologists risk class were additional sources of variability. Intrinsic case-specific variability, unexplained by any of the preceding factors, was found to be highest for shorter surgeries relative to longer procedures. Variability in procedure times among surgeons was a multiplicative function (proportionate to time) of surgical time and total procedure time, such that as procedure times increased, variability in surgeons' surgical time increased proportionately. Surgeon-specific variability should be considered when building scheduling heuristics for longer surgeries. Results concerning variability in surgical procedure times due to factors such as type of anesthesia, age, gender, and American Society of Anesthesiologists risk class may be extrapolated to scheduling in other institutions, although specifics on individual surgeons may not. This research identifies factors associated

  4. Discrete-time BAM neural networks with variable delays

    Science.gov (United States)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  5. Discrete-time BAM neural networks with variable delays

    International Nuclear Information System (INIS)

    Liu Xinge; Tang Meilan; Martin, Ralph; Liu Xinbi

    2007-01-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development

  6. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    Full Text Available We propose a two-step variable selection procedure for high dimensional quantile regressions, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to a model whose size has the same order as that of the true model, and that the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys the model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
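
    A hedged sketch of the two-step idea is given below, using scikit-learn's L1-penalized QuantileRegressor as a stand-in for the paper's estimators; the adaptive second step is implemented by rescaling the screened columns by the first-step coefficient magnitudes, and all penalty levels are arbitrary.

```python
# Hedged sketch of the two-step idea: (1) an L1-penalized quantile regression to
# screen variables, (2) an adaptive step that re-weights the penalty by the
# inverse of the first-step coefficients (implemented here by rescaling the
# columns). scikit-learn's QuantileRegressor applies an L1 penalty by default.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(6)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.standard_t(df=3, size=n)

# Step 1: LASSO-type screening at the median.
step1 = QuantileRegressor(quantile=0.5, alpha=0.02, solver="highs").fit(X, y)
keep = np.where(np.abs(step1.coef_) > 1e-8)[0]

# Step 2: adaptive LASSO on the reduced model via column rescaling.
w = np.abs(step1.coef_[keep])
step2 = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs").fit(X[:, keep] * w, y)
selected = keep[np.abs(step2.coef_) > 1e-8]
print("screened:", keep, "finally selected:", selected)
```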

  7. Walking speed-related changes in stride time variability: effects of decreased speed

    Directory of Open Access Journals (Sweden)

    Dubost Veronique

    2009-08-01

    Full Text Available Abstract. Background: Conflicting results have been reported regarding the relationship between stride time variability (STV) and walking speed. While some studies failed to establish any relationship, others reported either a linear or a non-linear relationship. We therefore sought to determine the extent to which a decrease in self-selected walking speed influenced STV among healthy young adults. Methods: The mean value, standard deviation and coefficient of variation of stride time, as well as the mean value of stride velocity, were recorded during steady-state walking using the GAITRite® system in 29 healthy young adults who walked consecutively at 88%, 79%, 71%, 64%, 58%, 53%, 46% and 39% of their preferred walking speed. Results: The decrease in stride velocity significantly increased the mean value, SD and CoV of stride time. Conclusion: The results support the assumption that gait variability increases as walking speed decreases and, thus, gait might be more unstable when healthy subjects walk more slowly than their preferred speed. Furthermore, these results highlight that a decrease in walking speed can be a potential confounder when evaluating STV.

  8. Protein construct storage: Bayesian variable selection and prediction with mixtures.

    Science.gov (United States)

    Clyde, M A; Parmigiani, G

    1998-07-01

    Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
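
    The sketch below illustrates accounting for model uncertainty in a much simpler way than the paper's mixture-prior approach: all subsets of a few synthetic factors are enumerated, posterior model probabilities are approximated by BIC weights, and predictions are averaged over models. It is only a stand-in for proper Bayesian variable selection.

```python
# Rough illustration of accounting for model uncertainty: enumerate all subsets
# of a few candidate factors, approximate posterior model probabilities with BIC
# weights, and average predictions over models. A stand-in for the proper
# Bayesian mixture-prior treatment described in the abstract.
import itertools
import numpy as np

rng = np.random.default_rng(7)
n, p = 40, 4
X = rng.normal(size=(n, p))                      # e.g. coded storage factors
y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 2] + 0.3 * rng.normal(size=n)

models, bics, preds = [], [], []
x_new = np.r_[1.0, 0.5, 0.0, -0.5, 1.0]          # hypothetical new condition (with intercept)
for k in range(p + 1):
    for subset in itertools.combinations(range(p), k):
        Z = np.c_[np.ones(n), X[:, subset]]
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = np.sum((y - Z @ beta) ** 2)
        bic = n * np.log(rss / n) + Z.shape[1] * np.log(n)
        models.append(subset)
        bics.append(bic)
        preds.append(np.r_[1.0, x_new[1:][list(subset)]] @ beta)

w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()                                      # approximate posterior model probabilities
print("top model:", models[int(np.argmax(w))], "weight:", round(w.max(), 2))
print("model-averaged prediction:", round(float(np.dot(w, preds)), 3))
```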

  9. A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism

    Science.gov (United States)

    Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo

    2015-03-01

    In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in permutation and diffusion stages respectively. Chaotic state variables produced with high computation complexity are not sufficiently used. (2) The key stream solely depends on the secret key, and hence the cryptosystem is vulnerable against known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and promote the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out and the results prove the superior security and high efficiency of the scheme.

  10. A Robust Supervised Variable Selection for Noisy High-Dimensional Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan; Schlenker, Anna

    2015-01-01

    Vol. 2015, Article 320385 (2015), pp. 1-10, ISSN 2314-6133. R&D Projects: GA ČR GA13-17187S. Institutional support: RVO:67985807. Keywords: dimensionality reduction * variable selection * robustness. Subject RIV: BA - General Mathematics. Impact factor: 2.134, year: 2015

  11. Variable selection in the explorative analysis of several data blocks in metabolomics

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Nørskov, Natalja; Yde, Christian Clement

    highly correlated data sets in one integrated approach. Due to the high number of variables in data sets from metabolomics (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different metabolomics data sets need to be related. Tools for handling this overflow, minimising false discovery rates by using both statistical and biological validation in an integrative approach, are needed. In this paper different strategies for variable selection were considered with respect to false discovery and the possibility for biological validation. The data set used in this study is metabolomics data from an animal intervention study. The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions with special attention to the metabolism of lignans using NMR and LC-MS based

  12. EFFECT OF CORE TRAINING ON SELECTED HEMATOLOGICAL VARIABLES AMONG BASKETBALL PLAYERS

    OpenAIRE

    K. Rejinadevi; Dr. C. Ramesh

    2017-01-01

    The purpose of the study was to find out the effect of core training on selected haematological variables among basketball players. For this purpose, forty male basketball players were selected at random as subjects from S.V.N College and Arul Anandar College, Madurai, Tamilnadu, and their age ranged from 18 to 25 years. The selected subjects were divided into two groups of twenty subjects each. Group I acted as the core training group and Group II acted as the control group. The experimenta...

  13. Pulse timing for cataclysmic variables

    International Nuclear Information System (INIS)

    Chester, T.J.

    1979-01-01

    It is shown that present pulse timing measurements of cataclysmic variables can be explained by models of accretion disks in these systems, and thus such measurements can constrain disk models. The model for DQ Her correctly predicts the amplitude variation of the continuum pulsation and can also perhaps explain the asymmetric amplitude of the pulsed lambda4686 emission line. Several other predictions can be made from the model. In particular, if pulse timing measurements that resolve emission lines both in wavelength and in binary phase can be made, the projected orbital radius of the white dwarf could be deduced

  14. DYNAMIC RESPONSE OF THICK PLATES ON TWO PARAMETER ELASTIC FOUNDATION UNDER TIME VARIABLE LOADING

    OpenAIRE

    Ozgan, Korhan; Daloglu, Ayse T.

    2014-01-01

    In this paper, the behavior of foundation plates with transverse shear deformation under time-variable loading is presented using the modified Vlasov foundation model. The finite element formulation of thick plates on an elastic foundation is derived using an 8-noded finite element based on Mindlin plate theory. A selective reduced integration technique is used to avoid the shear locking problem which arises when smaller plate thicknesses are considered for the evaluation of the stiffness matrices. After comparis...

  15. Countermovement jump height: gender and sport-specific differences in the force-time variables.

    Science.gov (United States)

    Laffaye, Guillaume; Wagner, Phillip P; Tombleson, Tom I L

    2014-04-01

    The goal of this study was to assess (a) the influence of the eccentric rate of force development, the concentric force, and selected time variables on vertical performance during the countermovement jump, (b) the existence of gender differences in these variables, and (c) sport-specific differences. The sample was composed of 189 males and 84 females, all elite athletes involved in college and professional sports (primarily football, basketball, baseball, and volleyball). The subjects performed a series of 6 countermovement jumps on a force plate (500 Hz). Average eccentric rate of force development (ECC-RFD), total time (TIME), eccentric time (ECC-T), the ratio between eccentric and total time (ECC-T:T) and average force (CON-F) were extracted from force-time curves, together with vertical jumping performance measured by the impulse-momentum method. Results show that CON-F is correlated with jump height (r = 0.57), that the force variables differ between the sexes, and that the relative temporal variables do not, showing a similar temporal structure. The best way to jump high is to increase CON-F and ECC-RFD, thus minimizing ECC-T. Principal component analysis (PCA) accounted for 76.8% of the JH variance and revealed that JH is predicted by a temporal and a force component. Furthermore, the PCA comparison made among athletes revealed sport-specific signatures: a temporal-prevailing profile for volleyball players, a weak-force profile with large ECC-T:T for basketball players, and explosive and powerful profiles for football and baseball players.

  16. STEPWISE SELECTION OF VARIABLES IN DEA USING CONTRIBUTION LOADS

    Directory of Open Access Journals (Sweden)

    Fernando Fernandez-Palacin

    Full Text Available In this paper, we propose a new methodology for variable selection in Data Envelopment Analysis (DEA). The methodology is based on an internal measure which evaluates the contribution of each variable to the calculation of the efficiency scores of the DMUs. In order to apply the proposed method, an algorithm, known as “ADEA”, was developed and implemented in R. Step by step, the algorithm maximizes the load of the variable (input or output) which contributes least to the calculation of the efficiency scores, redistributing the weights of the variables without altering the efficiency scores of the DMUs. Once the weights have been redistributed, if the lowest contribution does not reach a previously given critical value, the variable with minimum contribution is removed from the model and, as a result, the DEA is solved again. The algorithm stops when all variables reach a given contribution load to the DEA or when no more variables can be removed. In this way, and contrary to what is usual, the algorithm provides a clear stopping rule. In both cases, the efficiencies obtained from the DEA can be considered suitable and rightly interpreted in terms of the remaining variables and their contribution loads; moreover, the algorithm provides a sequence of alternative nested models - potential solutions - that can be evaluated according to external criteria. To illustrate the procedure, we applied the proposed methodology to obtain a research ranking of Spanish public universities. In this case, at each step of the algorithm, the critical value is obtained based on a simulation study.
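
    The ADEA contribution loads and weight redistribution are not reproduced here, but the building block such a procedure solves repeatedly is the standard DEA efficiency program. The sketch below computes input-oriented CCR efficiency scores with a linear program on made-up data.

```python
# Input-oriented CCR DEA envelopment program, solved as a linear program for
# each DMU. Inputs/outputs are made-up; this is only the efficiency-score step,
# not the ADEA weight-redistribution algorithm.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])  # inputs (DMU x m)
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])                            # outputs (DMU x s)
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(j0):
    # Variables: z = [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    A_ub, b_ub = [], []
    for i in range(m):    # sum_j lambda_j * x_ij <= theta * x_i,j0
        A_ub.append(np.r_[-X[j0, i], X[:, i]]); b_ub.append(0.0)
    for r in range(s):    # sum_j lambda_j * y_rj >= y_r,j0
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[j0, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

print([round(ccr_efficiency(j), 3) for j in range(n)])
```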

  17. Variable selection in PLSR and extensions to a multi-block setting for metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    When applying LC-MS or NMR spectroscopy in metabolomics studies, high-dimensional data are generated and effective tools for variable selection are needed in order to detect the important metabolites. Methods based on sparsity combined with PLSR have recently attracted attention in the field of genomics [1]. They quickly became well established in the field of statistics because a close relationship to the elastic net has been established. In sparse variable selection combined with PLSR, a soft thresholding is applied to each loading weight separately. In the field of chemometrics, Jack-knifing has been introduced for variable selection in PLSR [2]. Jack-knifing has been frequently applied in the field of spectroscopy and is implemented in software tools like The Unscrambler. In Jack-knifing, uncertainty estimates of the regression coefficients are obtained and a t-test is applied to these estimates

  18. Online Monitoring of Copper Damascene Electroplating Bath by Voltammetry: Selection of Variables for Multiblock and Hierarchical Chemometric Analysis of Voltammetric Data

    Directory of Open Access Journals (Sweden)

    Aleksander Jaworski

    2017-01-01

    Full Text Available The Real Time Analyzer (RTA), utilizing DC- and AC-voltammetric techniques, is an in situ, online monitoring system that provides a complete chemical analysis of different electrochemical deposition solutions. The RTA employs multivariate calibration when predicting concentration parameters from a multivariate data set. Although hierarchical and multiblock Principal Component Regression (PCR)- and Partial Least Squares (PLS)-based methods can handle data sets even when the number of variables significantly exceeds the number of samples, it can be advantageous to reduce the number of variables to improve the model predictions and the interpretation. This presentation focuses on the introduction of a multistep, rigorous method of data selection based on Least Squares Regression, Simple Modeling of Class Analogy modeling power, and, as a novel application in electroanalysis, Uninformative Variable Elimination by PLS and by PCR, Variable Importance in the Projection coupled with PLS, Interval PLS, Interval PCR, and Moving Window PLS. Selection criteria for the optimum decomposition technique for the specific data are also demonstrated. The chief goal of this paper is to introduce to the community of electroanalytical chemists numerous variable selection methods which are well established in spectroscopy and can be successfully applied to voltammetric data analysis.

  19. Improving the Classification Accuracy for Near-Infrared Spectroscopy of Chinese Salvia miltiorrhiza Using Local Variable Selection

    Directory of Open Access Journals (Sweden)

    Lianqing Zhu

    2018-01-01

    Full Text Available In order to improve the classification accuracy of Chinese Salvia miltiorrhiza using near-infrared spectroscopy, a novel local variable selection strategy is proposed. Combining the strengths of the local algorithm and interval partial least squares, the spectral data are first divided into several pairs of classes in the sample direction and equidistant subintervals in the variable direction. Then, a local classification model is built, and the most appropriate spectral region is selected for each pair of classes based on a new evaluation criterion that considers both the classification error rate and the predictive ability under a leave-one-out cross-validation scheme. Finally, each observation is assigned to a class according to the statistical analysis of the classification results of the local classification model built on the selected variables. The performance of the proposed method was demonstrated on near-infrared spectra of cultivated or wild Salvia miltiorrhiza collected from 8 geographical origins in 5 provinces of China. For comparison, soft independent modelling of class analogy and partial least squares discriminant analysis were employed as reference classification models. Experimental results showed that the classification performance of the model with local variable selection was clearly better than that without variable selection.

  20. Travel time variability and airport accessibility

    OpenAIRE

    Koster, P.R.; Kroes, E.P.; Verhoef, E.T.

    2010-01-01

    This discussion paper resulted in a publication in Transportation Research Part B: Methodological (2011). Vol. 45(10), pages 1545-1559. This paper analyses the cost of access travel time variability for air travelers. Reliable access to airports is important since it is likely that the cost of missing a flight is high. First, the determinants of the preferred arrival times at airports are analyzed, including trip purpose, type of airport, flight characteristics, travel experience, type of che...

  1. The Effects of Variability and Risk in Selection Utility Analysis: An Empirical Comparison.

    Science.gov (United States)

    Rich, Joseph R.; Boudreau, John W.

    1987-01-01

    Investigated utility estimate variability for the selection utility of using the Programmer Aptitude Test to select computer programmers. Comparison of Monte Carlo results to other risk assessment approaches (sensitivity analysis, break-even analysis, algebraic derivation of the distribution) suggests that distribution information provided by Monte…

  2. Mahalanobis distance and variable selection to optimize dose response

    International Nuclear Information System (INIS)

    Moore, D.H. II; Bennett, D.E.; Wyrobek, A.J.; Kranzler, D.

    1979-01-01

    A battery of statistical techniques is combined to improve detection of low-level dose response. First, Mahalanobis distances are used to classify objects as normal or abnormal. Then the proportion classified abnormal is regressed on dose. Finally, a subset of regressor variables is selected which maximizes the slope of the dose-response line. Use of the techniques is illustrated by application to mouse sperm damaged by low doses of x-rays
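
    A rough sketch of this pipeline on synthetic data is given below: objects are scored by their Mahalanobis distance from a reference group, the proportion classified abnormal is regressed on dose, and a brute-force search over variable subsets picks the one giving the steepest slope. The distance cutoff and effect sizes are arbitrary assumptions.

```python
# Sketch of the pipeline described above, with synthetic data: score objects by
# Mahalanobis distance from a "normal" reference group, call distant objects
# abnormal, regress the proportion abnormal on dose, and search variable
# subsets for the steepest dose-response slope.
import itertools
import numpy as np

rng = np.random.default_rng(8)
p, doses = 4, np.array([0.0, 0.5, 1.0, 2.0])
normal = rng.normal(size=(200, p))                 # reference (unexposed) objects
cutoff = 3.0                                       # Mahalanobis distance threshold

def prop_abnormal(sample, cols):
    ref = normal[:, cols]
    mu, cov_inv = ref.mean(axis=0), np.linalg.inv(np.cov(ref, rowvar=False))
    d = [np.sqrt((s[cols] - mu) @ cov_inv @ (s[cols] - mu)) for s in sample]
    return np.mean(np.array(d) > cutoff)

# Dose groups: higher dose shifts the first two variables slightly.
groups = [rng.normal(size=(100, p)) + np.r_[0.3 * d, 0.3 * d, 0.0, 0.0] for d in doses]

best = max(
    (np.polyfit(doses, [prop_abnormal(g, list(cols)) for g in groups], 1)[0], cols)
    for k in (2, 3, 4) for cols in itertools.combinations(range(p), k)
)
print("steepest dose-response slope:", round(best[0], 3), "using variables", best[1])
```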

  3. r2VIM: A new variable selection method for random forests in genome-wide association studies.

    Science.gov (United States)

    Szymczak, Silke; Holzinger, Emily; Dasgupta, Abhijit; Malley, James D; Molloy, Anne M; Mills, James L; Brody, Lawrence C; Stambolian, Dwight; Bailey-Wilson, Joan E

    2016-01-01

    Machine learning methods and in particular random forests (RFs) are a promising alternative to standard single SNP analyses in genome-wide association studies (GWAS). RFs provide variable importance measures (VIMs) to rank SNPs according to their predictive power. However, in contrast to the established genome-wide significance threshold, no clear criteria exist to determine how many SNPs should be selected for downstream analyses. We propose a new variable selection approach, recurrent relative variable importance measure (r2VIM). Importance values are calculated relative to an observed minimal importance score for several runs of RF and only SNPs with large relative VIMs in all of the runs are selected as important. Evaluations on simulated GWAS data show that the new method controls the number of false-positives under the null hypothesis. Under a simple alternative hypothesis with several independent main effects it is only slightly less powerful than logistic regression. In an experimental GWAS data set, the same strong signal is identified while the approach selects none of the SNPs in an underpowered GWAS. The novel variable selection method r2VIM is a promising extension to standard RF for objectively selecting relevant SNPs in GWAS while controlling the number of false-positive results.

  4. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS

    Science.gov (United States)

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-10-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by FT-NIR spectroscopy was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was applied to calibrate the identification model using the wavelength variables selected by CARS and SCARS. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, out of the 1557 original wavelength variables. Compared with full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Moreover, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results demonstrate that a PLS-DA model constructed on wavelength variables selected by a proper variable selection method can identify the solid state fermentation degree more accurately.
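
    CARS itself combines Monte Carlo sampling, an exponentially decreasing retention ratio and adaptive reweighted sampling; the heavily simplified sketch below (assuming scikit-learn, synthetic spectra, and PLS regression standing in for PLS-DA) keeps only the core loop of pruning wavelengths by absolute PLS coefficients under an exponentially decaying retention schedule and retaining the subset with the best cross-validated score.

```python
# Heavily simplified CARS-like loop: no Monte Carlo sampling or adaptive
# reweighting, only coefficient-based pruning under an exponentially decaying
# retention schedule, keeping the subset with the best cross-validated score.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 300))                          # 60 spectra x 300 wavelengths
y = (X[:, 10] - X[:, 200] + rng.normal(scale=0.5, size=60) > 0).astype(float)

kept = np.arange(X.shape[1])
best_subset, best_score = kept, -np.inf
for it in range(1, 30):
    pls = PLSRegression(n_components=2).fit(X[:, kept], y)
    coef = np.abs(np.asarray(pls.coef_)).ravel()        # one coefficient per kept variable
    n_keep = max(2, int(X.shape[1] * np.exp(-0.15 * it)))
    kept = kept[np.argsort(coef)[::-1][:n_keep]]        # keep the largest coefficients
    score = cross_val_score(PLSRegression(n_components=2), X[:, kept], y, cv=5).mean()
    if score > best_score:
        best_score, best_subset = score, kept.copy()

print(f"{len(best_subset)} wavelengths retained, CV R^2 = {best_score:.2f}")
```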

  5. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  6. Variability in reaction time performance of younger and older adults.

    Science.gov (United States)

    Hultsch, David F; MacDonald, Stuart W S; Dixon, Roger A

    2002-03-01

    Age differences in three basic types of variability were examined: variability between persons (diversity), variability within persons across tasks (dispersion), and variability within persons across time (inconsistency). Measures of variability were based on latency performance from four measures of reaction time (RT) performed by a total of 99 younger adults (ages 17-36 years) and 763 older adults (ages 54-94 years). Results indicated that all three types of variability were greater in older compared with younger participants even when group differences in speed were statistically controlled. Quantile-quantile plots showed age and task differences in the shape of the inconsistency distributions. Measures of within-person variability (dispersion and inconsistency) were positively correlated. Individual differences in RT inconsistency correlated negatively with level of performance on measures of perceptual speed, working memory, episodic memory, and crystallized abilities. Partial set correlation analyses indicated that inconsistency predicted cognitive performance independent of level of performance. The results indicate that variability of performance is an important indicator of cognitive functioning and aging.

  7. Isoenzymatic variability in tropical maize populations under reciprocal recurrent selection

    Directory of Open Access Journals (Sweden)

    Pinto Luciana Rossini

    2003-01-01

    Full Text Available Maize (Zea mays L.) is one of the crops in which the genetic variability has been extensively studied at isoenzymatic loci. The genetic variability of the maize populations BR-105 and BR-106, and of the synthetics IG-3 and IG-4, obtained after one cycle of high-intensity reciprocal recurrent selection (RRS), was investigated at seven isoenzymatic loci. A total of twenty alleles were identified, and most of the private alleles were found in the BR-106 population. One cycle of reciprocal recurrent selection (RRS) caused reductions of 12% in the number of alleles in both populations. Changes in allele frequencies were also observed between populations and synthetics, mainly for the Est 2 locus. Populations presented similar values for the number of alleles per locus, percentage of polymorphic loci, and observed and expected heterozygosities. A decrease in the genetic variation values was observed for the synthetics as a consequence of genetic drift effects and reduction of the effective population sizes. The distribution of the genetic diversity within and between populations revealed that most of the diversity was maintained within them, i.e. BR-105 x BR-106 (GST = 3.5%) and IG-3 x IG-4 (GST = 4.0%). The genetic distances between populations and synthetics increased approximately 21%. An increase in the genetic divergence between the populations occurred without limiting new selection procedures.

  8. Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection

    KAUST Repository

    Chen, Lisha; Huang, Jianhua Z.

    2012-01-01

    and hence improves predictive accuracy. We propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty. We apply a group-lasso type penalty that treats each row of the matrix of the regression coefficients as a group

  9. The Use of Variable Q1 Isolation Windows Improves Selectivity in LC-SWATH-MS Acquisition.

    Science.gov (United States)

    Zhang, Ying; Bilbao, Aivett; Bruderer, Tobias; Luban, Jeremy; Strambio-De-Castillia, Caterina; Lisacek, Frédérique; Hopfgartner, Gérard; Varesio, Emmanuel

    2015-10-02

    As tryptic peptides and metabolites are not equally distributed along the mass range, the probability of cross fragment ion interference is higher in certain windows when fixed Q1 SWATH windows are applied. We evaluated the benefits of utilizing variable Q1 SWATH windows with regard to selectivity improvement. Variable windows based on equalizing the distribution of either the precursor ion population (PIP) or the total ion current (TIC) within each window were generated by an in-house software tool, swathTUNER. These two variable Q1 SWATH window strategies outperformed, with respect to quantification and identification, the basic approach using a fixed window width (FIX) for proteomic profiling of human monocyte-derived dendritic cells (MDDCs). Thus, 13.8 and 8.4% additional peptide precursors, which resulted in 13.1 and 10.0% more proteins, were confidently identified by SWATH using the PIP and TIC strategies, respectively, in the MDDC proteomic sample. On the basis of the spectral library purity score, some improvement warranted by variable Q1 windows was also observed, albeit to a lesser extent, in the metabolomic profiling of human urine. We show that the novel concept of "scheduled SWATH" proposed here, which incorporates (i) variable isolation windows and (ii) precursor retention time segmentation, further improves both peptide and metabolite identifications.
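
    The PIP strategy can be illustrated with a few lines of NumPy (synthetic precursor masses; the m/z range and window count are placeholders, not swathTUNER defaults): window edges are simply quantiles of the observed precursor m/z distribution, so each window holds roughly the same number of precursors.

```python
# Quantile-based "equal precursor population" windows; synthetic m/z values.
import numpy as np

rng = np.random.default_rng(4)
precursor_mz = rng.normal(loc=700.0, scale=150.0, size=20000)     # synthetic precursors
precursor_mz = precursor_mz[(precursor_mz > 400) & (precursor_mz < 1200)]

n_windows = 32
edges = np.quantile(precursor_mz, np.linspace(0, 1, n_windows + 1))
widths = np.diff(edges)

# Dense m/z regions get narrow windows (better selectivity); sparse regions get
# wide ones, so the per-window precursor count stays roughly constant.
print(f"narrowest window: {widths.min():.1f} m/z, widest window: {widths.max():.1f} m/z")
```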

  10. Can time be a discrete dynamical variable

    International Nuclear Information System (INIS)

    Lee, T.D.

    1983-01-01

    The possibility that time can be regarded as a discrete dynamical variable is examined through all phases of mechanics: from classical mechanics to nonrelativistic quantum mechanics, and to relativistic quantum field theories. (orig.)

  11. Penalized regression procedures for variable selection in the potential outcomes framework.

    Science.gov (United States)

    Ghosh, Debashis; Zhu, Yeying; Coffman, Donna L

    2015-05-10

    A recent topic of much interest in causal inference is model selection. In this article, we describe a framework in which to consider penalized regression approaches to variable selection for causal effects. The framework leads to a simple 'impute, then select' class of procedures that is agnostic to the type of imputation algorithm as well as penalized regression used. It also clarifies how model selection involves a multivariate regression model for causal inference problems and that these methods can be applied for identifying subgroups in which treatment effects are homogeneous. Analogies and links with the literature on machine learning methods, missing data, and imputation are drawn. A difference least absolute shrinkage and selection operator algorithm is defined, along with its multiple imputation analogs. The procedures are illustrated using a well-known right-heart catheterization dataset. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Multivariate fault isolation of batch processes via variable selection in partial least squares discriminant analysis.

    Science.gov (United States)

    Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan

    2017-09-01

    In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aiming at the identification of the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address such a problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Unlike traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Data re-arranging techniques leading to proper variable selections in high energy physics

    Science.gov (United States)

    Kůs, Václav; Bouř, Petr

    2017-12-01

    We introduce a new data-based approach to homogeneity testing and variable selection carried out in high energy physics experiments, where one of the basic tasks is to test the homogeneity of weighted samples, mainly the Monte Carlo simulations (weighted) and real data measurements (unweighted). This technique is called 'data re-arranging' and it enables variable selection performed by means of classical statistical homogeneity tests such as the Kolmogorov-Smirnov, Anderson-Darling, or Pearson's chi-square divergence test. P-values of our variants of the homogeneity tests are investigated and empirical verification on 46-dimensional high energy particle physics data sets is accomplished under the newly proposed (equiprobable) quantile binning. In particular, the procedure of homogeneity testing is applied to re-arranged Monte Carlo samples and real DATA sets measured at the Tevatron particle accelerator at Fermilab in the DØ experiment, originating from top-antitop quark pair production in two decay channels (electron, muon) with 2, 3, or 4+ jets detected. Finally, the variable selections in the electron and muon channels induced by the re-arranging procedure for homogeneity testing are provided for the Tevatron top-antitop quark data sets.
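
    A hedged sketch of the underlying idea, assuming NumPy/SciPy and simulated samples: for each candidate variable, a weighted Monte Carlo sample is compared with unweighted data using equiprobable quantile bins and a Pearson chi-square statistic, and variables are ranked by the resulting p-values. The bin count, weights, and the simple chi-square treatment of the weighted counts are illustrative only.

```python
# Illustrative homogeneity-test ranking of variables between a weighted MC
# sample and unweighted data, with equiprobable quantile binning.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_bins, n_vars = 10, 6
mc = rng.normal(size=(5000, n_vars))              # Monte Carlo (weighted)
w = rng.uniform(0.5, 1.5, size=5000)              # MC event weights
data = rng.normal(size=(3000, n_vars))
data[:, 0] += 0.3                                 # one genuinely discrepant variable

def chi2_homogeneity(x_mc, w_mc, x_data, n_bins):
    # Equiprobable bins defined from the pooled sample
    edges = np.quantile(np.concatenate([x_mc, x_data]),
                        np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    mc_counts, _ = np.histogram(x_mc, bins=edges, weights=w_mc)
    data_counts, _ = np.histogram(x_data, bins=edges)
    expected = mc_counts / mc_counts.sum() * data_counts.sum()
    chi2 = np.sum((data_counts - expected) ** 2 / expected)
    return chi2, stats.chi2.sf(chi2, df=n_bins - 1)

for j in range(n_vars):
    chi2, p = chi2_homogeneity(mc[:, j], w, data[:, j], n_bins)
    print(f"variable {j}: chi2 = {chi2:6.1f}, p = {p:.3g}")
```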

  14. Increased timing variability in schizophrenia and bipolar disorder.

    Directory of Open Access Journals (Sweden)

    Amanda R Bolbecker

    Full Text Available Theoretical and empirical evidence suggests that impaired time perception and the neural circuitry underlying internal timing mechanisms may contribute to severe psychiatric disorders, including psychotic and mood disorders. The degree to which alterations in temporal perception reflect deficits that exist across psychosis-related phenotypes and the extent to which mood symptoms contribute to these deficits is currently unknown. In addition, compared to schizophrenia, where timing deficits have been more extensively investigated, sub-second timing has been studied relatively infrequently in bipolar disorder. The present study compared sub-second duration estimates of schizophrenia (SZ), schizoaffective disorder (SA), non-psychotic bipolar disorder (BDNP), bipolar disorder with psychotic features (BDP), and healthy non-psychiatric controls (HC) on a well-established time perception task using sub-second durations. Participants included 66 SZ, 37 BDNP, 34 BDP, 31 SA, and 73 HC who participated in a temporal bisection task that required temporal judgements about auditory durations ranging from 300 to 600 milliseconds. Timing variability was significantly higher in the SZ, BDP, and BDNP groups compared to healthy controls. The bisection point did not differ across groups. These findings suggest that both psychotic and mood symptoms may be associated with disruptions in internal timing mechanisms. Yet unexpected findings emerged. Specifically, the BDNP group had significantly increased variability compared to controls, but the SA group did not. In addition, these deficits appeared to exist independent of current symptom status. The absence of between-group differences in bisection point suggests that the increased variability in the SZ and bipolar disorder groups is due to alterations in perceptual timing in the sub-second range, possibly mediated by the cerebellum, rather than to cognitive deficits.

  15. Time-scales of stellar rotational variability and starspot diagnostics

    Science.gov (United States)

    Arkhypov, Oleksiy V.; Khodachenko, Maxim L.; Lammer, Helmut; Güdel, Manuel; Lüftinger, Teresa; Johnstone, Colin P.

    2018-01-01

    The difference in stability of starspot distribution on the global and hemispherical scales is studied in the rotational spot variability of 1998 main-sequence stars observed by the Kepler mission. It is found that the largest patterns are much more stable than smaller ones for cool, slow rotators, whereas the difference is less pronounced for hotter stars and/or faster rotators. This distinction is interpreted in terms of two mechanisms: (1) the diffusive decay of long-living spots in activity complexes of stars with saturated magnetic dynamos, and (2) the spot emergence, which is modulated by gigantic turbulent flows in the convection zones of stars with weaker magnetism. This opens a way for the investigation of stellar deep convection, which is as yet inaccessible to asteroseismology. Moreover, subdiffusion in stellar photospheres was revealed from observations for the first time. A diagnostic diagram was proposed that allows differentiation and selection of stars for more detailed studies of these phenomena.

  16. Space and time evolution of two nonlinearly coupled variables

    International Nuclear Information System (INIS)

    Obayashi, H.; Totsuji, H.; Wilhelmsson, H.

    1976-12-01

    The system of two coupled linear differential equations is studied assuming that the coupling terms are proportional to the product of the dependent variables, which represent e.g. intensities or populations. It is furthermore assumed that these variables experience different linear dissipation or growth. The derivations account for space as well as time dependence of the variables. It is found that certain particular solutions can be obtained for this system, whereas a full solution in space and time as an initial value problem is outside the scope of the present paper. The system has a nonlinear equilibrium solution for which the nonlinear coupling terms balance the terms of linear dissipation. The case of space and time evolution of a small perturbation of the nonlinear equilibrium state, given the initial one-dimensional spatial distribution of the perturbation, is also considered in some detail. (auth.)
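
    The abstract does not reproduce the governing equations; a generic form consistent with the description (different linear growth or dissipation rates plus coupling terms proportional to the product of the variables, with both space and time dependence) might be written as

    \[
    \frac{\partial u}{\partial t} + V_u \frac{\partial u}{\partial x} = \gamma_u\, u + \lambda_u\, u v,
    \qquad
    \frac{\partial v}{\partial t} + V_v \frac{\partial v}{\partial x} = \gamma_v\, v + \lambda_v\, u v,
    \]

    where the \(\gamma\)'s are the (different) linear growth or dissipation rates and the \(\lambda\)'s the coupling coefficients; the nonlinear equilibrium mentioned above then corresponds to \(\gamma_u + \lambda_u v = 0\) and \(\gamma_v + \lambda_v u = 0\), i.e. \(v^* = -\gamma_u/\lambda_u\) and \(u^* = -\gamma_v/\lambda_v\).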

  17. Development of a time-variable nuclear pulser for half life measurements

    International Nuclear Information System (INIS)

    Zahn, Guilherme S.; Domienikan, Claudio; Carvalhaes, Roberto P. M.; Genezini, Frederico A.

    2013-01-01

    In this work a time-variable pulser system with an exponentially-decaying pulse frequency is presented, which was developed using the low-cost, open-source Arduino microcontroller platform. In this system, the microcontroller produces a TTL signal at the selected rate and a pulse shaper board adjusts it to be fed into an amplifier as a conventional pulser signal; both the decay constant and the initial pulse rate can be adjusted using user-friendly control software, and the pulse amplitude can be adjusted using a potentiometer on the pulse shaper board. The pulser was tested using several combinations of initial pulse rate and decay constant, and the results show that the system is stable and reliable, and is suitable for use in half-life measurements.

  18. Development of a time-variable nuclear pulser for half life measurements

    Energy Technology Data Exchange (ETDEWEB)

    Zahn, Guilherme S.; Domienikan, Claudio; Carvalhaes, Roberto P. M.; Genezini, Frederico A. [Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP. P.O. Box 11049, Sao Paulo, 05422-970 (Brazil)

    2013-05-06

    In this work a time-variable pulser system with an exponentially-decaying pulse frequency is presented, which was developed using the low-cost, open-source Arduino microcontroller platform. In this system, the microcontroller produces a TTL signal at the selected rate and a pulse shaper board adjusts it to be fed into an amplifier as a conventional pulser signal; both the decay constant and the initial pulse rate can be adjusted using user-friendly control software, and the pulse amplitude can be adjusted using a potentiometer on the pulse shaper board. The pulser was tested using several combinations of initial pulse rate and decay constant, and the results show that the system is stable and reliable, and is suitable for use in half-life measurements.

  19. Seleção de variáveis em QSAR Variable selection in QSAR

    Directory of Open Access Journals (Sweden)

    Márcia Miguel Castro Ferreira

    2002-05-01

    Full Text Available The process of building mathematical models in quantitative structure-activity relationship (QSAR) studies is generally limited by the size of the dataset used to select variables from. For huge datasets, the task of selecting a given number of variables that produces the best linear model can be enormous, if not infeasible. In this case, some methods can be used to separate good parameter combinations from the bad ones. In this paper three methodologies are analyzed: systematic search, genetic algorithm and chemometric methods. These methods are presented and discussed through practical examples.

  20. Valuing travel time variability: Characteristics of the travel time distribution on an urban road

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Fukuda, Daisuke

    2012-01-01

    This paper provides a detailed empirical investigation of the distribution of travel times on an urban road for valuation of travel time variability. Our investigation is premised on the use of a theoretical model with a number of desirable properties. The definition of the value of travel time variability depends on certain properties of the distribution of random travel times that require empirical verification. Applying a range of nonparametric statistical techniques to data giving minute-by-minute travel times for a congested urban road over a period of five months, we show that the standardized travel time is roughly independent of the time of day, as required by the theory. Except for the extreme right tail, a stable distribution seems to fit the data well. The travel time distributions on consecutive links seem to share a common stability parameter such that the travel time distribution

  1. Is Reaction Time Variability in ADHD Mainly at Low Frequencies?

    Science.gov (United States)

    Karalunas, Sarah L.; Huang-Pollock, Cynthia L.; Nigg, Joel T.

    2013-01-01

    Background: Intraindividual variability in reaction times (RT variability) has garnered increasing interest as an indicator of cognitive and neurobiological dysfunction in children with attention deficit hyperactivity disorder (ADHD). Recent theory and research has emphasized specific low-frequency patterns of RT variability. However, whether…

  2. Long time scale hard X-ray variability in Seyfert 1 galaxies

    Science.gov (United States)

    Markowitz, Alex Gary

    This dissertation examines the relationship between long-term X-ray variability characteristics, black hole mass, and luminosity of Seyfert 1 Active Galactic Nuclei. High dynamic range power spectral density functions (PSDs) have been constructed for six Seyfert 1 galaxies. These PSDs show "breaks" or characteristic time scales, typically on the order of a few days. There is resemblance to PSDs of lower-mass Galactic X-ray binaries (XRBs), with the ratios of putative black hole masses and variability time scales approximately the same (10^6-7) between the two classes of objects. The data are consistent with a linear correlation between Seyfert PSD break time scale and black hole mass estimate; the relation extrapolates reasonably well over 6-7 orders of magnitude to XRBs. All of this strengthens the case for a physical similarity between Seyfert galaxies and XRBs. The first six years of RXTE monitoring of Seyfert 1s have been systematically analyzed to probe hard X-ray variability on multiple time scales in a total of 19 Seyfert 1s, in an expansion of the survey of Markowitz & Edelson (2001). Correlations between variability amplitude, luminosity, and black hole mass are explored; the data support the model of PSD movement with black hole mass suggested by the PSD survey. All of the continuum variability results are consistent with relatively more massive black holes hosting larger X-ray emission regions, resulting in 'slower' observed variability. Nearly all sources in the sample exhibit stronger variability towards softer energies, consistent with softening as they brighten. Direct time-resolved spectral fitting has been performed on continuous RXTE monitoring of seven Seyfert 1s to study long-term spectral variability and Fe Kalpha variability characteristics. The Fe Kalpha line displays a wide range of behavior but varies less strongly than the broadband continuum. Overall, however, there is no strong evidence for correlated variability between the line and

  3. Multi-omics facilitated variable selection in Cox-regression model for cancer prognosis prediction.

    Science.gov (United States)

    Liu, Cong; Wang, Xujun; Genchev, Georgi Z; Lu, Hui

    2017-07-15

    New developments in high-throughput genomic technologies have enabled the measurement of diverse types of omics biomarkers in a cost-efficient and clinically-feasible manner. Developing computational methods and tools for analysis and translation of such genomic data into clinically-relevant information is an ongoing and active area of investigation. For example, several studies have utilized an unsupervised learning framework to cluster patients by integrating omics data. Despite such recent advances, predicting cancer prognosis using integrated omics biomarkers remains a challenge. There is also a shortage of computational tools for predicting cancer prognosis by using supervised learning methods. The current standard approach is to fit a Cox regression model by concatenating the different types of omics data in a linear manner, while a penalty can be added for feature selection. A more powerful approach, however, would be to incorporate data by considering relationships among omics data types. Here we developed two methods, SKI-Cox and wLASSO-Cox, to incorporate the association among different types of omics data. Both methods fit the Cox proportional hazards model and predict a risk score based on mRNA expression profiles. SKI-Cox borrows the information generated by these additional types of omics data to guide variable selection, while wLASSO-Cox incorporates this information as a penalty factor during model fitting. We show that SKI-Cox and wLASSO-Cox models select more true variables than a LASSO-Cox model in simulation studies. We assess the performance of SKI-Cox and wLASSO-Cox using TCGA glioblastoma multiforme and lung adenocarcinoma data. In each case, mRNA expression, methylation, and copy number variation data are integrated to predict the overall survival time of cancer patients. Our methods achieve better performance in predicting patients' survival in glioblastoma and lung adenocarcinoma. Copyright © 2017. Published by Elsevier
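
    The SKI-Cox and wLASSO-Cox implementations themselves are not shown here; as a point of reference, the sketch below illustrates the standard baseline the paper contrasts against, namely concatenating omics layers and fitting an L1-penalized Cox model, assuming the lifelines package and synthetic data with hypothetical column names.

```python
# Baseline sketch only: concatenated omics layers + LASSO-type penalized Cox
# model (lifelines); data, column names and penalty strength are placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 200
mrna = pd.DataFrame(rng.normal(size=(n, 5)), columns=[f"mrna_{i}" for i in range(5)])
meth = pd.DataFrame(rng.normal(size=(n, 5)), columns=[f"meth_{i}" for i in range(5)])
df = pd.concat([mrna, meth], axis=1)

risk = 0.8 * df["mrna_0"] - 0.5 * df["meth_1"]          # simulated true risk
df["time"] = rng.exponential(scale=np.exp(-risk))        # survival times
df["event"] = rng.integers(0, 2, size=n)                 # censoring indicator

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)           # LASSO-type penalty
cph.fit(df, duration_col="time", event_col="event")
risk_score = cph.predict_partial_hazard(df)              # per-patient risk score
print(cph.params_.abs().sort_values(ascending=False).head())
```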

  4. Selection of controlled variables in bioprocesses. Application to a SHARON-Anammox process for autotrophic nitrogen removal

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Valverde Perez, Borja; Sin, Gürkan

    Selecting the right controlled variables in a bioprocess is challenging since the objectives of the process (yields, product or substrate concentration) are difficult to relate to a given actuator. We apply here process control tools that can be used to assist in the selection of controlled variables to the case of the SHARON-Anammox process for autotrophic nitrogen removal.

  5. Verification of models for ballistic movement time and endpoint variability.

    Science.gov (United States)

    Lin, Ray F; Drury, Colin G

    2013-01-01

    A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured data of movement time and endpoint variability were then used to verify the models. This study was successful, with Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) predicting more than 90.7% of the data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable errors and 88.3% of aiming-variable errors. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting end accuracy and movement time of ballistic movements that are desirable in rapid aiming tasks, such as keying in numbers on a smart phone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.

  6. Comparison of Sparse and Jack-knife partial least squares regression methods for variable selection

    DEFF Research Database (Denmark)

    Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald

    2013-01-01

    The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PL...
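
    A rough sketch of the Jack-knife PLSR idea, assuming scikit-learn and simulated data: the PLS regression coefficients are re-estimated over leave-one-out segments, a jack-knife standard error is computed for each variable, and variables with a large coefficient-to-error ratio are retained. The component number and cutoff are illustrative.

```python
# Illustrative jack-knife variable selection for PLS regression.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(7)
X = rng.normal(size=(40, 25))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.5, size=40)

coefs = []
for train, _ in LeaveOneOut().split(X):
    pls = PLSRegression(n_components=3).fit(X[train], y[train])
    coefs.append(np.asarray(pls.coef_).ravel())
coefs = np.array(coefs)                         # (n_segments, n_variables)

n = coefs.shape[0]
mean_b = coefs.mean(axis=0)
se_b = np.sqrt((n - 1) / n * np.sum((coefs - mean_b) ** 2, axis=0))  # jack-knife SE
t = np.abs(mean_b) / se_b
selected = np.where(t > 2.0)[0]                 # ~95% criterion, illustrative
print("selected variables:", selected)
```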

  7. X-ray spectra and time variability of active galactic nuclei

    International Nuclear Information System (INIS)

    Mushotzky, R.F.

    1984-02-01

    The X-ray spectra of broad line active galactic nuclei (AGN) of all types (Seyfert I's, NELGs, broad-line radio galaxies) are well fit by a power law in the 0.5 to 100 keV band of mean energy slope alpha = 0.68 ± 0.15. There is, as yet, no strong evidence for time variability of this slope in a given object. The constraints that this places on simple models of the central energy source are discussed. BL Lac objects have quite different X-ray spectral properties and show pronounced X-ray spectral variability. On time scales longer than 12 hours, most radio-quiet AGN do not show strong (delta I/I > 0.5) variability. The probability of variability of these AGN seems to be inversely related to their luminosity. However, characteristic timescales for variability have not been measured for many objects. This general lack of variability may imply that most AGN are well below the Eddington limit. Radio-bright AGN tend to be more variable than radio-quiet AGN on long (tau ≈ 6 month) timescales.

  8. A stochastic analysis approach on the cost-time profile for selecting the best future state MA

    Directory of Open Access Journals (Sweden)

    Seyedhosseini, Seyed Mohammad

    2015-05-01

    Full Text Available In the literature on value stream mapping (VSM), the only basis for choosing the best future state map (FSM) among the proposed alternatives is the time factor. As a result, the FSM with the least total production lead time (TPLT) is selected as the best option. In this paper, the cost factor is considered in the FSM selection process in addition to the time factor. Thus, for each of the proposed FSMs, the cost-time profile (CTP) is used. Two factors that are of particular importance for the customer and the manufacturer, the TPLT and the direct cost of the product, are reviewed and analysed by calculating the sub-area of the CTP curve, called the cost-time investment (CTI). In addition, variability in the generated data is studied for each of the CTPs in order to choose the best FSM more precisely and accurately. Based on a proposed step-by-step stochastic analysis method, and by using non-parametric kernel estimation methods for estimating the probability density function of the CTIs, the best FSM is chosen based not only on the minimum expected CTI but also on the minimum expected variability in the CTIs among the proposed alternatives. By implementing this method during the process of choosing the best FSM, manufacturing organisations will consider both the cost factor and the variability in the generated data, in addition to the time factor. Accordingly, the decision-making process proceeds more easily and logically than with traditional methods. Finally, to describe the effectiveness and applicability of the proposed method, it is applied to a case study of an industrial parts manufacturing company in Iran.
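
    The KDE-based comparison can be sketched in a few lines, assuming SciPy and simulated CTI samples standing in for values derived from actual cost-time profiles: each alternative's CTI density is estimated with a Gaussian kernel, and the expected CTI and its variance are read off the estimated density.

```python
# KDE-based comparison of simulated CTI distributions for two candidate FSMs.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(8)
cti_samples = {                                   # simulated CTI values per alternative
    "FSM-1": rng.normal(loc=100.0, scale=15.0, size=500),
    "FSM-2": rng.normal(loc=95.0, scale=30.0, size=500),
}

grid = np.linspace(0.0, 250.0, 2001)
dx = grid[1] - grid[0]
for name, sample in cti_samples.items():
    pdf = gaussian_kde(sample)(grid)              # non-parametric density of the CTI
    pdf /= pdf.sum() * dx                         # renormalize on the grid
    mean = np.sum(grid * pdf) * dx
    var = np.sum((grid - mean) ** 2 * pdf) * dx
    print(f"{name}: expected CTI = {mean:6.1f}, variance = {var:7.1f}")
# FSM-2 has the lower expected CTI but much higher variability; the proposed
# method weighs both before declaring a "best" future state map.
```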

  9. First-Passage-Time Distribution for Variable-Diffusion Processes

    Science.gov (United States)

    Barney, Liberty; Gunaratne, Gemunu H.

    2017-05-01

    First-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. The first-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically, often dismissed as due to insufficient or incorrect data or circumvented by conversion to tick time, and to the best of our knowledge has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al., Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to those of empirical first-passage-time distributions.
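
    A Monte Carlo sketch of the comparison, using NumPy with synthetic parameters (barrier level, step count and the U-shaped intra-day volatility profile are illustrative, not the fitted model of Hua et al.): first-passage times to a fixed barrier are simulated for a standard Wiener process and for a process whose diffusion coefficient varies over the trading day.

```python
# Monte Carlo first-passage times: Wiener process vs. a crude variable-diffusion
# stand-in with U-shaped intra-day volatility. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(9)
n_paths, n_steps, barrier = 10000, 390, 0.8          # 390 one-"minute" steps per day
dt = 1.0 / n_steps

def first_passage_times(sigma_of_t):
    t_grid = (np.arange(n_steps) + 1) * dt
    sig = sigma_of_t(t_grid)                          # per-step volatility
    steps = rng.normal(size=(n_paths, n_steps)) * sig * np.sqrt(dt)
    paths = np.cumsum(steps, axis=1)
    hit = paths >= barrier
    has_hit = hit.any(axis=1)
    first_idx = hit.argmax(axis=1)                    # index of first crossing
    return t_grid[first_idx[has_hit]]                 # fraction of the day at first hit

fpt_wiener = first_passage_times(lambda t: np.ones_like(t))
fpt_vardiff = first_passage_times(lambda t: 1.0 + 1.5 * (2.0 * np.abs(t - 0.5)) ** 2)

for name, fpt in [("Wiener", fpt_wiener), ("variable diffusion", fpt_vardiff)]:
    print(f"{name}: {fpt.size} of {n_paths} paths hit, median FPT = {np.median(fpt):.2f}")
```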

  10. Not accounting for interindividual variability can mask habitat selection patterns: a case study on black bears.

    Science.gov (United States)

    Lesmerises, Rémi; St-Laurent, Martin-Hugues

    2017-11-01

    Habitat selection studies conducted at the population scale commonly aim to describe general patterns that could improve our understanding of the limiting factors in species-habitat relationships. Researchers often consider interindividual variation in selection patterns to control for its effects and avoid pseudoreplication by using mixed-effect models that include individuals as random factors. Here, we highlight common pitfalls and possible misinterpretations of this strategy by describing habitat selection of 21 black bears Ursus americanus. We used Bayesian mixed-effect models and compared results obtained when using random intercept (i.e., population level) versus calculating individual coefficients for each independent variable (i.e., individual level). We then related interindividual variability to individual characteristics (i.e., age, sex, reproductive status, body condition) in a multivariate analysis. The assumption of comparable behavior among individuals was verified only in 40% of the cases in our seasonal best models. Indeed, we found strong and opposite responses among sampled bears and individual coefficients were linked to individual characteristics. For some covariates, contrasted responses canceled each other out at the population level. In other cases, interindividual variability was concealed by the composition of our sample, with the majority of the bears (e.g., old individuals and bears in good physical condition) driving the population response (e.g., selection of young forest cuts). Our results stress the need to consider interindividual variability to avoid misinterpretation and uninformative results, especially for a flexible and opportunistic species. This study helps to identify some ecological drivers of interindividual variability in bear habitat selection patterns.

  11. Competency-Based, Time-Variable Education in the Health Professions: Crossroads.

    Science.gov (United States)

    Lucey, Catherine R; Thibault, George E; Ten Cate, Olle

    2018-03-01

    Health care systems around the world are transforming to align with the needs of 21st-century patients and populations. Transformation must also occur in the educational systems that prepare the health professionals who deliver care, advance discovery, and educate the next generation of physicians in these evolving systems. Competency-based, time-variable education, a comprehensive educational strategy guided by the roles and responsibilities that health professionals must assume to meet the needs of contemporary patients and communities, has the potential to catalyze optimization of educational and health care delivery systems. By designing educational and assessment programs that require learners to meet specific competencies before transitioning between the stages of formal education and into practice, this framework assures the public that every physician is capable of providing high-quality care. By engaging learners as partners in assessment, competency-based, time-variable education prepares graduates for careers as lifelong learners. While the medical education community has embraced the notion of competencies as a guiding framework for educational institutions, the structure and conduct of formal educational programs remain more aligned with a time-based, competency-variable paradigm. The authors outline the rationale behind this recommended shift to a competency-based, time-variable education system. They then introduce the other articles included in this supplement to Academic Medicine, which summarize the history of, theories behind, examples demonstrating, and challenges associated with competency-based, time-variable education in the health professions.

  12. A modification of the successive projections algorithm for spectral variable selection in the presence of unknown interferents.

    Science.gov (United States)

    Soares, Sófacles Figueredo Carreiro; Galvão, Roberto Kawakami Harrop; Araújo, Mário César Ugulino; da Silva, Edvan Cirino; Pereira, Claudete Fernandes; de Andrade, Stéfani Iury Evangelista; Leite, Flaviano Carvalho

    2011-03-09

    This work proposes a modification to the successive projections algorithm (SPA) aimed at selecting spectral variables for multiple linear regression (MLR) in the presence of unknown interferents not included in the calibration data set. The modified algorithm favours the selection of variables in which the effect of the interferent is less pronounced. The proposed procedure can be regarded as an adaptive modelling technique, because the spectral features of the samples to be analyzed are considered in the variable selection process. The advantages of this new approach are demonstrated in two analytical problems, namely (1) ultraviolet-visible spectrometric determination of tartrazine, allura red and sunset yellow in aqueous solutions under the interference of erythrosine, and (2) near-infrared spectrometric determination of ethanol in gasoline under the interference of toluene. In these case studies, the performance of conventional MLR-SPA models is substantially degraded by the presence of the interferent. This problem is circumvented by applying the proposed Adaptive MLR-SPA approach, which results in prediction errors smaller than those obtained by three other multivariate calibration techniques, namely stepwise regression, full-spectrum partial-least-squares (PLS) and PLS with variables selected by a genetic algorithm. An inspection of the variable selection results reveals that the Adaptive approach successfully avoids spectral regions in which the interference is more intense. Copyright © 2011 Elsevier B.V. All rights reserved.
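
    For orientation, the projection step at the core of the (unmodified) SPA can be sketched with NumPy as below; the adaptive, interferent-aware weighting that the paper adds is not reproduced here, and the data are synthetic.

```python
# Bare-bones successive projections algorithm: starting from one variable,
# repeatedly add the variable whose column has the largest norm after
# projection onto the subspace orthogonal to the last selected column.
import numpy as np

def spa(X, n_select, start=0):
    selected = [start]
    P = X.copy()
    for _ in range(n_select - 1):
        ref = P[:, selected[-1]]
        # Remove from every column its component along the last selected column
        proj = P - np.outer(ref, ref @ P) / (ref @ ref)
        proj[:, selected] = 0.0                    # never re-select a column
        selected.append(int(np.argmax(np.linalg.norm(proj, axis=0))))
        P = proj
    return selected

rng = np.random.default_rng(10)
X = rng.normal(size=(30, 120))                     # 30 spectra x 120 variables
print("SPA-selected variable indices:", spa(X, n_select=6))
```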

  13. Predicting travel time variability for cost-benefit analysis

    NARCIS (Netherlands)

    Peer, S.; Koopmans, C.; Verhoef, E.T.

    2010-01-01

    Unreliable travel times cause substantial costs to travelers. Nevertheless, they are not taken into account in many cost-benefit analyses (CBA), or only in very rough ways. This paper aims at providing simple rules on how variability can be predicted, based on travel time data from Dutch highways.

  14. [Application of characteristic NIR variables selection in portable detection of soluble solids content of apple by near infrared spectroscopy].

    Science.gov (United States)

    Fan, Shu-Xiang; Huang, Wen-Qian; Li, Jiang-Bo; Guo, Zhi-Ming; Zhaq, Chun-Jiang

    2014-10-01

    In order to detect the soluble solids content (SSC) of apple conveniently and rapidly, a ring fiber probe and a portable spectrometer were applied to obtain the spectra of apples. Different wavelength variable selection methods, including uninformative variable elimination (UVE), competitive adaptive reweighted sampling (CARS) and genetic algorithm (GA), were proposed to select effective wavelength variables of the NIR spectra for the SSC in apple based on PLS. The back interval LS-SVM (BiLS-SVM) and GA were used to select effective wavelength variables based on LS-SVM. Selected wavelength variables and the full wavelength range were set as input variables of the PLS model and the LS-SVM model, respectively. The results indicated that the PLS model built using GA-CARS on 50 characteristic variables selected from the full spectrum of 1512 wavelengths achieved the optimal performance. The correlation coefficient (Rp) and root mean square error of prediction (RMSEP) for the prediction set were 0.962 and 0.403 °Brix, respectively, for SSC. The proposed GA-CARS method could effectively simplify the portable detection model of SSC in apple based on near infrared spectroscopy and enhance the predictive precision. The study can provide a reference for the development of a portable apple soluble solids content spectrometer.

  15. Predictor Variables for Marathon Race Time in Recreational Female Runners

    OpenAIRE

    Schmid, Wiebke; Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

    Purpose We intended to determine predictor variables of anthropometry and training for marathon race time in recreational female runners in order to predict marathon race time for future novice female runners. Methods Anthropometric characteristics such as body mass, body height, body mass index, circumferences of limbs, thicknesses of skin-folds and body fat as well as training variables such as volume and speed in running training were related to marathon race time using bi- and multi-varia...

  16. Dissociable effects of practice variability on learning motor and timing skills.

    Science.gov (United States)

    Caramiaux, Baptiste; Bevilacqua, Frédéric; Wanderley, Marcelo M; Palmer, Caroline

    2018-01-01

    Motor skill acquisition inherently depends on the way one practices the motor task. The amount of motor task variability during practice has been shown to foster transfer of the learned skill to other similar motor tasks. In addition, variability in a learning schedule, in which a task and its variations are interweaved during practice, has been shown to help the transfer of learning in motor skill acquisition. However, there is little evidence on how motor task variations and variability schedules during practice act on the acquisition of complex motor skills such as music performance, in which a performer learns both the right movements (motor skill) and the right time to perform them (timing skill). This study investigated the impact of rate (tempo) variability and the schedule of tempo change during practice on timing and motor skill acquisition. Complete novices, with no musical training, practiced a simple musical sequence on a piano keyboard at different rates. Each novice was assigned to one of four learning conditions designed to manipulate the amount of tempo variability across trials (large or small tempo set) and the schedule of tempo change (randomized or non-randomized order) during practice. At test, the novices performed the same musical sequence at a familiar tempo and at novel tempi (testing tempo transfer), as well as two novel (but related) sequences at a familiar tempo (testing spatial transfer). We found that practice conditions had little effect on learning and transfer performance of timing skill. Interestingly, practice conditions influenced motor skill learning (reduction of movement variability): lower temporal variability during practice facilitated transfer to new tempi and new sequences; non-randomized learning schedule improved transfer to new tempi and new sequences. Tempo (rate) and the sequence difficulty (spatial manipulation) affected performance variability in both timing and movement. These findings suggest that there is a

  17. Essential Oil Variability and Biological Activities of Tetraclinis articulata (Vahl) Mast. Wood According to the Extraction Time.

    Science.gov (United States)

    Djouahri, Abderrahmane; Saka, Boualem; Boudarene, Lynda; Baaliouamer, Aoumeur

    2016-12-01

    In the present work, the hydrodistillation (HD) and microwave-assisted hydrodistillation (MAHD) kinetics of the essential oil (EO) extracted from Tetraclinis articulata (Vahl) Mast. wood were studied in order to assess the impact of extraction time and technique on chemical composition and biological activities. Gas chromatography (GC) and GC/mass spectrometry analyses showed significant differences between the extracted EOs, where each family class or component presents a specific kinetic profile according to extraction time and technique, especially the major components: camphene, linalool, cedrol, carvacrol and α-acorenol. Furthermore, our findings showed a high variability for both antioxidant and anti-inflammatory activities, where each activity has a specific behaviour according to extraction time and technique. The highlighted variability reflects the high impact of extraction time and technique on chemical composition and biological activities, leading to the conclusion that the EOs to be investigated should be selected carefully according to extraction time and technique, in order to isolate the bioactive components or to obtain the best quality of EO in terms of biological activities and preventive effects in food. © 2016 Wiley-VHCA AG, Zurich, Switzerland.

  18. Predictor variables for marathon race time in recreational female runners.

    Science.gov (United States)

    Schmid, Wiebke; Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-06-01

    We intended to determine predictor variables of anthropometry and training for marathon race time in recreational female runners in order to predict marathon race time for future novice female runners. Anthropometric characteristics such as body mass, body height, body mass index, circumferences of limbs, thicknesses of skin-folds and body fat, as well as training variables such as volume and speed in running training, were related to marathon race time using bi- and multi-variate analysis in 29 female runners. The marathoners completed the marathon distance within 251 (26) min, running at a speed of 10.2 (1.1) km/h. Body mass (r = 0.37), body mass index (r = 0.46), the circumferences of thigh (r = 0.51) and calf (r = 0.41), the skin-fold thicknesses of front thigh (r = 0.38) and of medial calf (r = 0.40), the sum of eight skin-folds (r = 0.44) and body fat percentage (r = 0.41) were related to marathon race time. For the training variables, maximal distance run per week (r = -0.38), number of running training sessions per week (r = -0.46) and the speed of the training sessions (r = -0.60) were related to marathon race time. In the multi-variate analysis, the circumference of calf (P = 0.02) and the speed of the training sessions (P = 0.0014) were related to marathon race time. Marathon race time might be partially (r² = 0.50) predicted by the following equation: Race time (min) = 184.4 + 5.0 x (calf circumference, cm) - 11.9 x (running speed during training, km/h) for recreational female marathoners. Variables of both anthropometry and training were related to marathon race time in recreational female marathoners and cannot be reduced to one single predictor variable. For practical applications, a low circumference of calf and a high running speed in training are associated with a fast marathon race time in recreational female runners.
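
    The reported regression (r² = 0.50) is easy to turn into a small calculator; the runner's values in the example below are made up.

```python
# Worked example of the reported regression for recreational female runners:
# predicted marathon time (min) from calf circumference (cm) and training speed (km/h).
def predicted_marathon_time(calf_circumference_cm, training_speed_kmh):
    return 184.4 + 5.0 * calf_circumference_cm - 11.9 * training_speed_kmh

# e.g. a hypothetical runner with a 36 cm calf who trains at 10 km/h
print(predicted_marathon_time(36.0, 10.0), "minutes")   # 184.4 + 180.0 - 119.0 = 245.4
```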

  19. Public transport travel time and its variability

    OpenAIRE

    Mazloumi Shomali, Ehsan

    2017-01-01

    Executive Summary Public transport agencies around the world are constantly trying to improve the performance of their service, and to provide passengers with a more reliable service. Two major measures to evaluate the performance of a transit system include travel time and travel time variability. Information on these two measures provides operators with a capacity to identify the problematic locations in a transport system and improve operating plans. Likewise, users can benefit through...

  20. Complexity Variability Assessment of Nonlinear Time-Varying Cardiovascular Control

    Science.gov (United States)

    Valenza, Gaetano; Citi, Luca; Garcia, Ronald G.; Taylor, Jessica Noggle; Toschi, Nicola; Barbieri, Riccardo

    2017-02-01

    The application of complex systems theory to physiology and medicine has provided meaningful information about the nonlinear aspects underlying the dynamics of a wide range of biological processes and their disease-related aberrations. However, no studies have investigated whether meaningful information can be extracted by quantifying second-order moments of time-varying cardiovascular complexity. To this extent, we introduce a novel mathematical framework termed complexity variability, in which the variance of instantaneous Lyapunov spectra estimated over time serves as a reference quantifier. We apply the proposed methodology to four exemplary studies involving disorders which stem from cardiology, neurology and psychiatry: Congestive Heart Failure (CHF), Major Depression Disorder (MDD), Parkinson’s Disease (PD), and Post-Traumatic Stress Disorder (PTSD) patients with insomnia under a yoga training regime. We show that complexity assessments derived from simple time-averaging are not able to discern pathology-related changes in autonomic control, and we demonstrate that between-group differences in measures of complexity variability are consistent across pathologies. Pathological states such as CHF, MDD, and PD are associated with an increased complexity variability when compared to healthy controls, whereas wellbeing derived from yoga in PTSD is associated with lower time-variance of complexity.

  1. Effects of environmental variables on invasive amphibian activity: Using model selection on quantiles for counts

    Science.gov (United States)

    Muller, Benjamin J.; Cade, Brian S.; Schwarzkoph, Lin

    2018-01-01

    Many different factors influence animal activity. Often, the value of an environmental variable may influence significantly the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution, over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime, in which environmental variables affect activity.
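
    A hedged sketch of a quantile regression for counts in this spirit, assuming statsmodels and simulated trap counts: the discrete counts are jittered with uniform noise so conditional quantiles become estimable, and quantile regressions are fit at several quantiles. The covariates and the simple linear specification are placeholders for the authors' full model-selection analysis.

```python
# Jittered quantile regression for counts (illustrative, not the authors' model).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 300
df = pd.DataFrame({
    "rainfall": rng.exponential(scale=5.0, size=n),
    "wind": rng.uniform(0.0, 10.0, size=n),
})
lam = np.exp(0.5 + 0.08 * df["rainfall"] - 0.05 * df["wind"])
df["captures"] = rng.poisson(lam)                     # trap counts with many ties
df["z"] = df["captures"] + rng.uniform(size=n)        # jitter so quantiles are estimable

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("z ~ rainfall + wind", data=df).fit(q=q)
    print(f"tau = {q}: rainfall = {fit.params['rainfall']:+.3f}, "
          f"wind = {fit.params['wind']:+.3f}")
```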

  2. Quantification of glutathione transverse relaxation time T2 using echo time extension with variable refocusing selectivity and symmetry in the human brain at 7 Tesla

    Science.gov (United States)

    Swanberg, Kelley M.; Prinsen, Hetty; Coman, Daniel; de Graaf, Robin A.; Juchem, Christoph

    2018-05-01

    Glutathione (GSH) is an endogenous antioxidant implicated in numerous biological processes, including those associated with multiple sclerosis, aging, and cancer. Spectral editing techniques have greatly facilitated the acquisition of glutathione signal in living humans via proton magnetic resonance spectroscopy, but signal quantification at 7 Tesla is still hampered by uncertainty about the glutathione transverse decay rate T2 relative to those of commonly employed quantitative references like N-acetyl aspartate (NAA), total creatine, or water. While the T2 of uncoupled singlets can be derived in a straightforward manner from exponential signal decay as a function of echo time, similar estimation of signal decay in GSH is complicated by a spin system that involves both weak and strong J-couplings as well as resonances that overlap those of several other metabolites and macromolecules. Here, we extend a previously published method for quantifying the T2 of GABA, a weakly coupled system, to quantify T2 of the strongly coupled spin system glutathione in the human brain at 7 Tesla. Using full density matrix simulation of glutathione signal behavior, we selected an array of eight optimized echo times between 72 and 322 ms for glutathione signal acquisition by J-difference editing (JDE). We varied the selectivity and symmetry parameters of the inversion pulses used for echo time extension to further optimize the intensity, simplicity, and distinctiveness of glutathione signals at chosen echo times. Pairs of selective adiabatic inversion pulses replaced nonselective pulses at three extended echo times, and symmetry of the time intervals between the two extension pulses was adjusted at one extended echo time to compensate for J-modulation, thereby resulting in appreciable signal-to-noise ratio and quantifiable signal shapes at all measured points. Glutathione signal across all echo times fit smooth monoexponential curves over ten scans of occipital cortex voxels in nine
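
    The final monoexponential fit is straightforward; the sketch below assumes SciPy, synthetic signal amplitudes, and echo times that merely span the reported 72-322 ms range.

```python
# Monoexponential T2 fit over eight echo times (synthetic amplitudes).
import numpy as np
from scipy.optimize import curve_fit

te_ms = np.array([72, 107, 142, 177, 212, 247, 287, 322], dtype=float)

def monoexp(te, s0, t2):
    return s0 * np.exp(-te / t2)

rng = np.random.default_rng(12)
signal = monoexp(te_ms, 1.0, 120.0) * (1.0 + 0.03 * rng.normal(size=te_ms.size))

popt, pcov = curve_fit(monoexp, te_ms, signal, p0=(1.0, 100.0))
t2, t2_err = popt[1], np.sqrt(np.diag(pcov))[1]
print(f"fitted T2 = {t2:.0f} +/- {t2_err:.0f} ms")
```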

  3. Improving breast cancer classification with mammography, supported on an appropriate variable selection analysis

    Science.gov (United States)

    Pérez, Noel; Guevara, Miguel A.; Silva, Augusto

    2013-02-01

    This work addresses the issue of variable selection within the context of breast cancer classification with mammography. A comprehensive repository of feature vectors was used, including a hybrid subset gathering image-based and clinical features. The aim was to gather experimental evidence on variable selection in terms of cardinality and type, and to find a classification scheme that provides the best performance in terms of Area Under the Receiver Operating Characteristic Curve (AUC) scores using the ranked feature subsets. We evaluated and classified a total of 300 subsets of features formed by the application of Chi-Square Discretization, Information-Gain, One-Rule and RELIEF methods in association with Feed-Forward Backpropagation Neural Network (FFBP), Support Vector Machine (SVM) and Decision Tree J48 (DTJ48) machine learning algorithms (MLA) for a comparative performance evaluation based on AUC scores. A variable selection analysis was performed for Single-View Ranking and Multi-View Ranking groups of features. Feature subsets representing microcalcifications (MCs), masses, and both MCs and masses achieved AUC scores of 0.91, 0.954 and 0.934, respectively. Experimental evidence demonstrated that classification performance was improved by combining image-based and clinical features. The most important clinical and image-based features were StromaDistortion and Circularity, respectively. Other features, less important but worth using due to their consistency, were Contrast, Perimeter, Microcalcification, Correlation and Elongation.
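
    A generic sketch of the ranking-then-classify workflow, assuming scikit-learn and synthetic data standing in for the image-based plus clinical features: features are ranked by mutual information (an information-gain-style criterion), a top-k subset is kept, and an SVM is scored by cross-validated AUC. The ranking method, k and classifier settings are illustrative.

```python
# Illustrative feature ranking + SVM evaluation by cross-validated AUC.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(13)
X = rng.normal(size=(300, 40))                       # 40 mixed features (synthetic)
y = (X[:, 0] + 0.8 * X[:, 5] + rng.normal(scale=1.0, size=300) > 0).astype(int)

scores = mutual_info_classif(X, y, random_state=0)   # information-gain-style ranking
top_k = np.argsort(scores)[::-1][:10]

clf = make_pipeline(StandardScaler(), SVC())
auc = cross_val_score(clf, X[:, top_k], y, cv=5, scoring="roc_auc").mean()
print(f"top-10 features {sorted(top_k.tolist())}, cross-validated AUC = {auc:.3f}")
```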

  4. Short time-scale optical variability properties of the largest AGN sample observed with Kepler/K2

    Science.gov (United States)

    Aranzana, E.; Körding, E.; Uttley, P.; Scaringi, S.; Bloemen, S.

    2018-05-01

    We present the first short time-scale (~hours to days) optical variability study of a large sample of active galactic nuclei (AGNs) observed with the Kepler/K2 mission. The sample contains 252 AGN observed over four campaigns with ~30 min cadence, selected from the Million Quasar Catalogue with R magnitude < 19. We performed time series analysis to determine their variability properties by means of the power spectral densities (PSDs) and applied Monte Carlo techniques to find the best model parameters that fit the observed power spectra. A power-law model is sufficient to describe all the PSDs of our sample. A variety of power-law slopes were found, indicating that there is not a universal slope for all AGNs. We find that the rest-frame amplitude variability in the frequency range of 6 × 10^-6 to 10^-4 Hz varies from 1 to 10 per cent, with an average of 1.7 per cent. We explore correlations between the variability amplitude and key parameters of the AGN, finding a significant correlation of rest-frame short-term variability amplitude with redshift. We attribute this effect to the known 'bluer when brighter' variability of quasars combined with the fixed bandpass of Kepler data. This study also enables us to distinguish between Seyferts and blazars and confirm AGN candidates. For our study, we have compared results obtained from light curves extracted using different aperture sizes and with and without detrending. We find that limited detrending of the optimal photometric precision light curve is the best approach, although some systematic effects still remain present.

  5. Sparse supervised principal component analysis (SSPCA) for dimension reduction and variable selection

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara; Ghodsi, Ali; Clemmensen, Line H.

    2017-01-01

Principal component analysis (PCA) is one of the main unsupervised pre-processing methods for dimension reduction. When the training labels are available, it is worth using a supervised PCA strategy. In cases where both dimension reduction and variable selection are required, sparse PCA (SPCA...

  6. Describing temporal variability of the mean Estonian precipitation series in climate time scale

    Science.gov (United States)

    Post, P.; Kärner, O.

    2009-04-01

Applicability of the random walk type models to represent the temporal variability of various atmospheric temperature series has been successfully demonstrated recently (e.g. Kärner, 2002). The main problem in temperature modeling is connected to the scale break in the generally self-similar air temperature anomaly series (Kärner, 2005). The break separates short-range strong non-stationarity from a nearly stationary longer range variability region. This is an indication of the fact that several geophysical time series show a short-range non-stationary behaviour and a stationary behaviour in the longer range (Davis et al., 1996). In order to model series like that, the choice of time step appears to be crucial. To characterize the long-range variability we can neglect the short-range non-stationary fluctuations, provided that we are able to model properly the long-range tendencies. The structure function (Monin and Yaglom, 1975) was used to determine an approximate segregation line between the short and the long scale in terms of modeling. The longer scale can be called the climate one, because such models are applicable on scales over some decades. In order to get rid of the short-range fluctuations in daily series, the variability can be examined using a sufficiently long time step. In the present paper, we show that the same philosophy is useful to find a model to represent the climate-scale temporal variability of the Estonian daily mean precipitation amount series over 45 years (1961-2005). Temporal variability of the obtained daily time series is examined by means of an autoregressive and integrated moving average (ARIMA) family model of the type (0,1,1). This model is applicable for simulating daily precipitation if an appropriate time step is selected that enables us to neglect the short-range non-stationary fluctuations. A considerably longer time step than one day (30 days) is used in the current paper to model the precipitation time series variability. Each ARIMA (0,1,1) ...
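
    A minimal sketch of the modelling strategy above, assuming a pandas Series `daily_precip` of daily precipitation totals: aggregate to a roughly 30-day time step and fit an ARIMA(0,1,1) model. The synthetic data are placeholders, not the Estonian series.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(0)
      idx = pd.date_range("1961-01-01", "2005-12-31", freq="D")
      daily_precip = pd.Series(rng.gamma(shape=0.6, scale=3.0, size=idx.size), index=idx)  # toy data

      monthly = daily_precip.resample("30D").mean()    # long time step suppresses short-range noise
      fit = ARIMA(monthly, order=(0, 1, 1)).fit()
      print(fit.params)                                # MA(1) coefficient and innovation variance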

  7. Evaluating Varied Label Designs for Use with Medical Devices: Optimized Labels Outperform Existing Labels in the Correct Selection of Devices and Time to Select.

    Directory of Open Access Journals (Sweden)

    Laura Bix

Effective standardization of medical device labels requires objective study of varied designs. Insufficient empirical evidence exists regarding how practitioners utilize and view labeling. The objective was to measure the effect of graphic elements (boxing information, grouping information, symbol use and color-coding) to optimize a label for comparison with those typical of commercial medical devices. Participants viewed 54 trials on a computer screen. Trials were comprised of two labels that were identical with regard to graphics, but differed in one aspect of information (e.g., one had latex, the other did not). Participants were instructed to select the label along a given criterion (e.g., latex-containing) as quickly as possible. Dependent variables were binary (correct selection) and continuous (time to correct selection). Eighty-nine healthcare professionals were recruited at Association of Surgical Technologists (AST) conferences, and using a targeted e-mail of AST members. Symbol presence, color coding and grouping critical pieces of information all significantly improved selection rates and sped time to correct selection (α = 0.05). Conversely, when critical information was graphically boxed, probability of correct selection and time to selection were impaired (α = 0.05). Subsequently, responses from trials containing optimal treatments (color coded, critical information grouped, with symbols) were compared to two labels created based on a review of those commercially available. Optimal labels yielded a significant positive benefit regarding the probability of correct choice (P < 0.0001; LSM, UCL, LCL: 97.3%, 98.4%, 95.5%), as compared to the two labels we created based on commercial designs (92.0%; 94.7%, 87.9% and 89.8%; 93.0%, 85.3%) and time to selection. Our study provides data regarding design factors, namely color coding, symbol use and grouping of critical information, that can be used to significantly enhance the performance of medical device labels.

  8. Optically selected GRB afterglows, a real time analysis system at the CFHT

    International Nuclear Information System (INIS)

    Malacrino, F.; Atteia, J.-L.; Klotz, A.; Boer, M.; Kavelaars, J.J.; Cuillandre, J.-C.

    2005-01-01

We attempt to detect optical GRB afterglows on images taken by the Canada-France-Hawaii Telescope for the Very Wide survey, a component of the Legacy Survey. To do so, a Real Time Analysis System called Optically Selected GRB Afterglows has been installed on a dedicated computer in Hawaii. This pipeline automatically and quickly analyzes MegaCam images and extracts from them a list of variable objects, which is displayed on a web page for validation by a member of the collaboration. The Very Wide survey covers 1200 square degrees down to i' = 23.5. This paper briefly explains the RTAS process.

  9. Regional regression models of percentile flows for the contiguous United States: Expert versus data-driven independent variable selection

    Directory of Open Access Journals (Sweden)

    Geoffrey Fouad

    2018-06-01

New hydrological insights for the region: A set of three variables selected based on an expert assessment of factors that influence percentile flows performed similarly to larger sets of variables selected using a data-driven method. Expert assessment variables included mean annual precipitation, potential evapotranspiration, and baseflow index. Larger sets of up to 37 variables contributed little, if any, additional predictive information. Variables used to describe the distribution of basin data (e.g. standard deviation) were not useful, and average values were sufficient to characterize physical and climatic basin conditions. Effectiveness of the expert assessment variables may be due to the high degree of multicollinearity (i.e. cross-correlation) among additional variables. A tool is provided in the Supplementary material to predict percentile flows based on the three expert assessment variables. Future work should develop new variables with a strong understanding of the processes related to percentile flows.

  10. Current Debates on Variability in Child Welfare Decision-Making: A Selected Literature Review

    Directory of Open Access Journals (Sweden)

    Emily Keddell

    2014-11-01

This article considers selected drivers of decision variability in child welfare decision-making and explores current debates in relation to these drivers. Covering the related influences of national orientation, risk and responsibility, inequality and poverty, evidence-based practice, constructions of abuse and its causes, domestic violence and cognitive processes, it discusses the literature with regard to how each of these influences decision variability. It situates these debates in relation to the ethical issue of variability and the equity issues that variability raises. I propose that, despite the ecological complexity that drives decision variability, improving internal (within-country) decision consistency is still a valid goal. It may be that the use of annotated case examples, kind learning systems, and continued commitments to the social justice issues of inequality and individualisation can contribute to this goal.

  11. Variations in Carabidae assemblages across the farmland habitats in relation to selected environmental variables including soil properties

    Directory of Open Access Journals (Sweden)

    Beáta Baranová

    2018-03-01

The variations in ground beetle (Coleoptera: Carabidae) assemblages across three types of farmland habitats (arable land, meadows and woody vegetation) were studied in relation to vegetation cover structure, intensity of agrotechnical interventions and selected soil properties. Material was collected by pitfall trapping in 2010 and 2011 at twelve sites of the agricultural landscape in the Prešov town and its near vicinity, Eastern Slovakia. A total of 14,763 ground beetle individuals were trapped. The material comprised 92 Carabidae species, with the following six species dominating: Poecilus cupreus, Pterostichus melanarius, Pseudoophonus rufipes, Brachinus crepitans, Anchomenus dorsalis and Poecilus versicolor. The studied habitats differed significantly in the number of trapped individuals and activity abundance, as well as in the representation of carabids according to their habitat preferences and ability to fly. However, no significant differences were observed in diversity, evenness or dominance. The environmental variables most significantly affecting species variability in the Carabidae assemblages were soil moisture and herb layer at 0-20 cm. Other important variables retained by forward selection were intensity of agrotechnical interventions, humus content and shrub vegetation. The remaining selected soil properties appear to be of only secondary importance for the adult carabids. Environmental variables had the strongest effect on habitat specialists, whereas ground beetles without special requirements for habitat quality seem to be only slightly affected by the studied environmental variables.

  12. A moving mesh method with variable relaxation time

    OpenAIRE

    Soheili, Ali Reza; Stockie, John M.

    2006-01-01

We propose a moving mesh adaptive approach for solving time-dependent partial differential equations. The motion of spatial grid points is governed by a moving mesh PDE (MMPDE) in which a mesh relaxation time τ is employed as a regularization parameter. Previously reported results on MMPDEs have invariably employed a constant value of the parameter τ. We extend this standard approach by incorporating a variable relaxation time that is calculated adaptively alongside the solution in order...

  13. Timing variability in children with early-treated congenital hypothyroidism

    NARCIS (Netherlands)

    Kooistra, L.; Snijders, T.A.B.; Schellekens, J.M.H.; Kalverboer, A.F.; Geuze, R.H.

    This study reports on central and peripheral determinants of timing variability in self-paced tapping by children with early-treated congenital hypothyroidism (CH). A theoretical model of the timing of repetitive movements developed by Wing and Kristofferson was applied to estimate the central

  14. Variable selection based near infrared spectroscopy quantitative and qualitative analysis on wheat wet gluten

    Science.gov (United States)

    Lü, Chengxu; Jiang, Xunpeng; Zhou, Xingfan; Zhang, Yinqiao; Zhang, Naiqian; Wei, Chongfeng; Mao, Wenhua

    2017-10-01

Wet gluten is a useful quality indicator for wheat, and short-wave near infrared spectroscopy (NIRS) is a high-performance technique with the advantages of being economical, rapid and nondestructive. To study the feasibility of short-wave NIRS for analyzing wet gluten directly from wheat seed, 54 representative wheat seed samples were collected and scanned by spectrometer. Eight spectral pretreatment methods and a genetic algorithm (GA) variable selection method were used to optimize the analysis. Both quantitative and qualitative models of wet gluten were built by partial least squares regression and discriminant analysis. For quantitative analysis, normalization is the optimal pretreatment method; 17 wet-gluten-sensitive variables were selected by GA, and the GA model performs better than the all-variable model, with R²V = 0.88 and RMSEV = 1.47. For qualitative analysis, automatic weighted least squares baseline is the optimal pretreatment method, and all-variable models perform better than GA models. The correct classification rates for the three classes of 30% wet gluten content are 95.45%, 84.52% and 90.00%, respectively. The short-wave NIRS technique shows potential for both quantitative and qualitative analysis of wet gluten in wheat seed.
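
    A toy sketch of GA-based wavelength selection wrapped around PLS regression, in the spirit of the approach described above but not the authors' implementation; the spectra, response, population size and GA operators are all placeholder choices.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(54, 120))                   # placeholder spectra (54 samples x 120 wavelengths)
      y = X[:, 10] + 0.5 * X[:, 55] + rng.normal(scale=0.1, size=54)   # toy wet-gluten values

      def fitness(mask):
          if mask.sum() < 2:
              return -np.inf
          pls = PLSRegression(n_components=min(5, int(mask.sum())))
          return cross_val_score(pls, X[:, mask], y, cv=5,
                                 scoring="neg_root_mean_squared_error").mean()

      pop = rng.random((30, X.shape[1])) < 0.2         # initial population of wavelength masks
      for generation in range(20):
          scores = np.array([fitness(m) for m in pop])
          parents = pop[np.argsort(scores)[-10:]]      # keep the ten best masks
          children = []
          while len(children) < len(pop):
              a, b = parents[rng.integers(10, size=2)]
              cut = rng.integers(1, X.shape[1])
              child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
              child ^= rng.random(X.shape[1]) < 0.01            # bit-flip mutation
              children.append(child)
          pop = np.array(children)

      best = pop[np.argmax([fitness(m) for m in pop])]
      print("selected wavelength indices:", np.flatnonzero(best))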

  15. The Selection, Use, and Reporting of Control Variables in International Business Research

    DEFF Research Database (Denmark)

    Nielsen, Bo Bernhard; Raswant, Arpit

    2018-01-01

This study explores the selection, use, and reporting of control variables in studies published in the leading international business (IB) research journals. We review a sample of 246 empirical studies published in the top five IB journals over the period 2012-2015 with particular emphasis ... on the selection, use, and reporting of controls. Approximately 83% of studies included only half of what we consider the Minimum Standard of Practice with regard to controls, whereas only 38% of the studies met the 75% threshold. We provide recommendations on how to effectively identify, use and report controls...

  16. Bayesian inference for the genetic control of water deficit tolerance in spring wheat by stochastic search variable selection.

    Science.gov (United States)

    Safari, Parviz; Danyali, Syyedeh Fatemeh; Rahimi, Mehdi

    2018-06-02

Drought is the main abiotic stress seriously influencing wheat production. Information about the inheritance of drought tolerance is necessary to determine the most appropriate strategy for developing tolerant cultivars and populations. In this study, generation means analysis to identify the genetic effects controlling grain yield inheritance under water-deficit and normal conditions was treated as a model selection problem in a Bayesian framework. Stochastic search variable selection (SSVS) was applied to identify the most important genetic effects and the best-fitting models, using different generations obtained from two crosses under two water regimes in two growing seasons. SSVS evaluates the effect of each variable on the dependent variable via posterior variable inclusion probabilities, and the model with the highest posterior probability is selected as the best model. In this study, grain yield was controlled by main effects (additive and non-additive) and epistatic effects. The results demonstrate that breeding methods such as recurrent selection and the subsequent pedigree method, as well as hybrid production, can be useful to improve grain yield.
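
    A hedged sketch of stochastic search variable selection with a spike-and-slab prior, using PyMC purely for illustration; the design matrix of genetic-effect contrasts, the prior scales and the sampler settings are assumptions, not the authors' model.

      import numpy as np
      import pymc as pm

      rng = np.random.default_rng(0)
      n, p = 120, 6
      X = rng.normal(size=(n, p))                      # placeholder generation-mean contrasts
      y = X @ np.array([1.0, 0.0, 0.5, 0.0, 0.0, 0.8]) + rng.normal(scale=0.5, size=n)

      with pm.Model():
          gamma = pm.Bernoulli("gamma", p=0.5, shape=p)         # inclusion indicators
          sd = pm.math.switch(gamma, 1.0, 0.01)                 # slab vs. spike scale
          beta = pm.Normal("beta", mu=0.0, sigma=sd, shape=p)
          sigma = pm.HalfNormal("sigma", 1.0)
          pm.Normal("y_obs", mu=pm.math.dot(X, beta), sigma=sigma, observed=y)
          idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

      pip = idata.posterior["gamma"].mean(dim=("chain", "draw")).values
      print("posterior inclusion probabilities:", np.round(pip, 2))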

  17. ACTIVE LEARNING TO OVERCOME SAMPLE SELECTION BIAS: APPLICATION TO PHOTOMETRIC VARIABLE STAR CLASSIFICATION

    Energy Technology Data Exchange (ETDEWEB)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Berian James, J. [Astronomy Department, University of California, Berkeley, CA 94720-7450 (United States); Brink, Henrik [Dark Cosmology Centre, Juliane Maries Vej 30, 2100 Copenhagen O (Denmark); Long, James P.; Rice, John, E-mail: jwrichar@stat.berkeley.edu [Statistics Department, University of California, Berkeley, CA 94720-7450 (United States)

    2012-01-10

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL-where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up-is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.
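
    A small sketch of the uncertainty-based active-learning query loop in spirit: train a classifier, query the most uncertain pool objects for manual labels, and retrain. The random-forest classifier, feature arrays and `oracle` labeling function are placeholders, not the authors' pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(50, 8))               # small labeled training set (placeholder features)
      y_train = rng.integers(0, 3, size=50)
      X_pool = rng.normal(size=(500, 8))               # unlabeled pool awaiting follow-up

      def oracle(xs):                                  # stands in for manual labeling of light curves
          return rng.integers(0, 3, size=len(xs))

      for iteration in range(5):
          clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
          proba = clf.predict_proba(X_pool)
          uncertainty = 1.0 - proba.max(axis=1)        # least-confident sampling
          query = np.argsort(uncertainty)[-10:]        # ten most uncertain pool objects
          X_train = np.vstack([X_train, X_pool[query]])
          y_train = np.concatenate([y_train, oracle(X_pool[query])])
          X_pool = np.delete(X_pool, query, axis=0)
          print(f"iteration {iteration}: mean classifier confidence {proba.max(axis=1).mean():.2f}")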

  18. ACTIVE LEARNING TO OVERCOME SAMPLE SELECTION BIAS: APPLICATION TO PHOTOMETRIC VARIABLE STAR CLASSIFICATION

    International Nuclear Information System (INIS)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Berian James, J.; Brink, Henrik; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  19. Active Learning to Overcome Sample Selection Bias: Application to Photometric Variable Star Classification

    Science.gov (United States)

    Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  20. Joint Bayesian variable and graph selection for regression models with network-structured predictors

    Science.gov (United States)

    Peterson, C. B.; Stingo, F. C.; Vannucci, M.

    2015-01-01

    In this work, we develop a Bayesian approach to perform selection of predictors that are linked within a network. We achieve this by combining a sparse regression model relating the predictors to a response variable with a graphical model describing conditional dependencies among the predictors. The proposed method is well-suited for genomic applications since it allows the identification of pathways of functionally related genes or proteins which impact an outcome of interest. In contrast to previous approaches for network-guided variable selection, we infer the network among predictors using a Gaussian graphical model and do not assume that network information is available a priori. We demonstrate that our method outperforms existing methods in identifying network-structured predictors in simulation settings, and illustrate our proposed model with an application to inference of proteins relevant to glioblastoma survival. PMID:26514925

  1. Exploratory Spectroscopy of Magnetic Cataclysmic Variables Candidates and Other Variable Objects

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, A. S.; Palhares, M. S. [IP and D, Universidade do Vale do Paraíba, 12244-000, São José dos Campos, SP (Brazil); Rodrigues, C. V.; Cieslinski, D.; Jablonski, F. J. [Divisão de Astrofísica, Instituto Nacional de Pesquisas Espaciais, 12227-010, São José dos Campos, SP (Brazil); Silva, K. M. G. [Gemini Observatory, Casilla 603, La Serena (Chile); Almeida, L. A. [Instituto de Astronomia, Geofísica e Ciências Atmosféricas, Universidade de São Paulo, 05508-900, São Paulo, SP (Brazil); Rodríguez-Ardila, A., E-mail: alexandre@univap.br [Laboratório Nacional de Astrofísica LNA/MCTI, 37504-364, Itajubá MG (Brazil)

    2017-04-01

    The increasing number of synoptic surveys made by small robotic telescopes, such as the photometric Catalina Real-Time Transient Survey (CRTS), provides a unique opportunity to discover variable sources and improves the statistical samples of such classes of objects. Our goal is the discovery of magnetic Cataclysmic Variables (mCVs). These are rare objects that probe interesting accretion scenarios controlled by the white-dwarf magnetic field. In particular, improved statistics of mCVs would help to address open questions on their formation and evolution. We performed an optical spectroscopy survey to search for signatures of magnetic accretion in 45 variable objects selected mostly from the CRTS. In this sample, we found 32 CVs, 22 being mCV candidates, 13 of which were previously unreported as such. If the proposed classifications are confirmed, it would represent an increase of 4% in the number of known polars and 12% in the number of known IPs. A fraction of our initial sample was classified as extragalactic sources or other types of variable stars by the inspection of the identification spectra. Despite the inherent complexity in identifying a source as an mCV, variability-based selection, followed by spectroscopic snapshot observations, has proved to be an efficient strategy for their discoveries, being a relatively inexpensive approach in terms of telescope time.

  2. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    Science.gov (United States)

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
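
    A minimal sketch of the IPTW estimator discussed above for the mean outcome under treatment (not C-TMLE), with a simulated data-generating process and a logistic propensity model as placeholder choices.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 2000
      X = rng.normal(size=(n, 4))                      # covariates
      A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # treatment depends on the first covariate
      Y = 2.0 * A + X[:, 0] + rng.normal(size=n)       # outcome under a simple toy model

      ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]      # propensity score
      weights = A / ps                                                # IPTW weights for treated units
      mean_under_treatment = np.sum(weights * Y) / np.sum(weights)    # Hajek-style estimate
      print(f"estimated mean outcome under treatment: {mean_under_treatment:.2f}")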

  3. Time dependent analysis of assay comparability: a novel approach to understand intra- and inter-site variability over time

    Science.gov (United States)

    Winiwarter, Susanne; Middleton, Brian; Jones, Barry; Courtney, Paul; Lindmark, Bo; Page, Ken M.; Clark, Alan; Landqvist, Claire

    2015-09-01

    We demonstrate here a novel use of statistical tools to study intra- and inter-site assay variability of five early drug metabolism and pharmacokinetics in vitro assays over time. Firstly, a tool for process control is presented. It shows the overall assay variability but allows also the following of changes due to assay adjustments and can additionally highlight other, potentially unexpected variations. Secondly, we define the minimum discriminatory difference/ratio to support projects to understand how experimental values measured at different sites at a given time can be compared. Such discriminatory values are calculated for 3 month periods and followed over time for each assay. Again assay modifications, especially assay harmonization efforts, can be noted. Both the process control tool and the variability estimates are based on the results of control compounds tested every time an assay is run. Variability estimates for a limited set of project compounds were computed as well and found to be comparable. This analysis reinforces the need to consider assay variability in decision making, compound ranking and in silico modeling.

  4. Selective visual scaling of time-scale processes facilitates broadband learning of isometric force frequency tracking.

    Science.gov (United States)

    King, Adam C; Newell, Karl M

    2015-10-01

The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks, to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed gain condition or with selective enhancement, in the visual feedback display, of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with the error lowest in the intermediate scaling condition followed by the high scaling and fixed gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the timescales in the learning and transfer of isometric force output frequency structures consistent with 1/f process models of the time scales of motor output variability.

  5. Age-related change in renal corticomedullary differentiation: evaluation with noncontrast-enhanced steady-state free precession (SSFP) MRI with spatially selective inversion pulse using variable inversion time.

    Science.gov (United States)

    Noda, Yasufumi; Kanki, Akihiko; Yamamoto, Akira; Higashi, Hiroki; Tanimoto, Daigo; Sato, Tomohiro; Higaki, Atsushi; Tamada, Tsutomu; Ito, Katsuyoshi

    2014-07-01

The aim was to evaluate age-related change in renal corticomedullary differentiation and renal cortical thickness by means of noncontrast-enhanced steady-state free precession (SSFP) magnetic resonance imaging (MRI) with a spatially selective inversion recovery (IR) pulse. The Institutional Review Board of our hospital approved this retrospective study and patient informed consent was waived. This study included 48 patients without renal diseases who underwent noncontrast-enhanced SSFP MRI with spatially selective IR pulse using variable inversion times (TIs) (700-1500 msec). The signal intensities of the renal cortex and medulla were measured to calculate the renal corticomedullary contrast ratio. Additionally, renal cortical thickness was measured. The renal corticomedullary junction was clearly depicted in all patients. The mean cortical thickness was 3.9 ± 0.83 mm. The mean corticomedullary contrast ratio was 4.7 ± 1.4. There was a negative correlation between the optimal TI for the best visualization of renal corticomedullary differentiation and age (r = -0.378; P = 0.001). However, there was no significant correlation between renal corticomedullary contrast ratio and age (r = 0.187; P = 0.20). Similarly, no significant correlation was observed between renal cortical thickness and age (r = 0.054; P = 0.712). In the normal kidney, noncontrast-enhanced SSFP MRI with spatially selective IR pulse can be used to assess renal corticomedullary differentiation and cortical thickness without the influence of aging, although optimal TI values for the best visualization of the renal corticomedullary junction were shortened with aging. © 2013 Wiley Periodicals, Inc.

  6. Predictor variables for a half marathon race time in recreational male runners.

    Science.gov (United States)

    Rüst, Christoph Alexander; Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Lepers, Romuald; Rosemann, Thomas

    2011-01-01

The aim of this study was to investigate predictor variables of anthropometry, training, and previous experience in order to predict half marathon race times for future novice recreational male half marathoners. Eighty-four male finishers in the 'Half Marathon Basel' completed the race distance within (mean and standard deviation, SD) 103.9 (16.5) min, running at a speed of 12.7 (1.9) km/h. After multivariate analysis of the anthropometric characteristics, body mass index (r = 0.56), suprailiac (r = 0.36) and medial calf skinfold (r = 0.53) were related to race time. Among the variables of training and previous experience, running speed during training sessions (r = -0.54) was associated with race time. After multivariate analysis of both the significant anthropometric and training variables, body mass index (P = 0.0150) and running speed during training (P = 0.0045) were related to race time. Race time in a half marathon might be partially predicted by the following equation (r² = 0.44): race time (min) = 72.91 + 3.045 × (body mass index, kg/m²) - 3.884 × (running speed during training, km/h) for recreational male runners. To conclude, variables of both anthropometry and training were related to half marathon race time in recreational male half marathoners and cannot be reduced to one single predictor variable.
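
    The published regression equation (r² = 0.44) translated directly into a small helper function; only the function and variable names are my own.

      def predicted_half_marathon_time(bmi_kg_m2: float, training_speed_kmh: float) -> float:
          """Predicted race time in minutes for recreational male runners (r^2 = 0.44)."""
          return 72.91 + 3.045 * bmi_kg_m2 - 3.884 * training_speed_kmh

      # Example: a runner with a BMI of 23 kg/m^2 who trains at 11 km/h
      print(f"{predicted_half_marathon_time(23.0, 11.0):.1f} min")   # about 100 min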

  7. Selection of variables for neural network analysis. Comparisons of several methods with high energy physics data

    International Nuclear Information System (INIS)

    Proriol, J.

    1994-01-01

    Five different methods are compared for selecting the most important variables with a view to classifying high energy physics events with neural networks. The different methods are: the F-test, Principal Component Analysis (PCA), a decision tree method: CART, weight evaluation, and Optimal Cell Damage (OCD). The neural networks use the variables selected with the different methods. We compare the percentages of events properly classified by each neural network. The learning set and the test set are the same for all the neural networks. (author)

  8. Online Synthesis for Operation Execution Time Variability on Digital Microfluidic Biochips

    DEFF Research Database (Denmark)

    Alistar, Mirela; Pop, Paul

    2014-01-01

... have assumed that each biochemical operation in an application is characterized by a worst-case execution time (wcet). However, during the execution of the application, due to variability and randomness in biochemical reactions, operations may finish earlier than their wcets. In this paper we propose an online synthesis strategy that re-synthesizes the application at runtime when operations experience variability in their execution time, thus obtaining shorter application execution times. The proposed strategy has been evaluated using several benchmarks.

  9. Quantifying Selection with Pool-Seq Time Series Data.

    Science.gov (United States)

    Taus, Thomas; Futschik, Andreas; Schlötterer, Christian

    2017-11-01

Allele frequency time series data constitute a powerful resource for unraveling mechanisms of adaptation, because the temporal dimension captures important information about evolutionary forces. In particular, Evolve and Resequence (E&R), the whole-genome sequencing of replicated experimentally evolving populations, is becoming increasingly popular. Based on computer simulations, several studies have proposed experimental parameters to optimize the identification of the selection targets. No such recommendations are available for the underlying parameters, selection strength and dominance. Here, we introduce a highly accurate method to estimate selection parameters from replicated time series data, which is fast enough to be applied on a genome scale. Using this new method, we evaluate how experimental parameters can be optimized to obtain the most reliable estimates for selection parameters. We show that the effective population size (Ne) and the number of replicates have the largest impact. Because the number of time points and sequencing coverage had only a minor effect, we suggest that time series analysis is feasible without a major increase in sequencing costs. We anticipate that time series analysis will become routine in E&R studies. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
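
    A deliberately crude illustration of estimating a selection coefficient by regressing logit allele frequencies on generation number; the authors' method additionally models drift, replication, Pool-Seq noise and dominance, so this is only a back-of-the-envelope sketch with toy frequencies.

      import numpy as np

      generations = np.array([0, 10, 20, 30, 40, 50])
      freqs = np.array([0.10, 0.14, 0.20, 0.27, 0.36, 0.46])   # toy replicate-averaged frequencies

      logit = np.log(freqs / (1 - freqs))
      slope, intercept = np.polyfit(generations, logit, 1)     # logit frequency rises ~linearly under selection
      s_hat = np.exp(slope) - 1.0                              # per-generation selection coefficient
      print(f"estimated s ~ {s_hat:.3f}")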

  10. Developing a spatial-statistical model and map of historical malaria prevalence in Botswana using a staged variable selection procedure

    Directory of Open Access Journals (Sweden)

    Mabaso Musawenkosi LH

    2007-09-01

Background: Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results: Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally, a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion: We have...
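
    A compact sketch of the bootstrap selection-frequency idea described above: repeat automated forward selection on bootstrap resamples and rank candidate predictors by how often they are chosen. The data, the logistic model and the number of resamples are placeholders.

      import numpy as np
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n, p = 300, 14
      X = rng.normal(size=(n, p))                      # candidate environmental predictors
      logit = 1.5 * X[:, 0] - 1.0 * X[:, 3] + 0.8 * X[:, 7]
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # toy malaria presence/absence data

      counts = np.zeros(p)
      for _ in range(50):                              # 1000 resamples in the paper; 50 here for brevity
          idx = rng.integers(n, size=n)                # bootstrap resample
          sfs = SequentialFeatureSelector(LogisticRegression(max_iter=500),
                                          n_features_to_select=3, direction="forward", cv=3)
          sfs.fit(X[idx], y[idx])
          counts += sfs.get_support()

      for i in np.argsort(counts)[::-1]:
          print(f"predictor {i:2d}: selected in {int(counts[i])} of 50 resamples")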

  11. Quantum Stephani exact cosmological solutions and the selection of time variable

    International Nuclear Information System (INIS)

    Pedram, P; Jalalzadeh, S; Gousheh, S S

    2007-01-01

We study a perfect fluid Stephani quantum cosmological model. In the present work, Schutz's variational formalism, which recovers the notion of time, is applied. This gives rise to a Wheeler-DeWitt equation for the scale factor. We use the eigenfunctions to construct wave packets for each case. We study the time-dependent behavior of the expectation value of the scale factor, using the many-worlds and de Broglie-Bohm interpretations of quantum mechanics.

  12. Continuous performance task in ADHD: Is reaction time variability a key measure?

    Science.gov (United States)

    Levy, Florence; Pipingas, Andrew; Harris, Elizabeth V; Farrow, Maree; Silberstein, Richard B

    2018-01-01

    To compare the use of the Continuous Performance Task (CPT) reaction time variability (intraindividual variability or standard deviation of reaction time), as a measure of vigilance in attention-deficit hyperactivity disorder (ADHD), and stimulant medication response, utilizing a simple CPT X-task vs an A-X-task. Comparative analyses of two separate X-task vs A-X-task data sets, and subgroup analyses of performance on and off medication were conducted. The CPT X-task reaction time variability had a direct relationship to ADHD clinician severity ratings, unlike the CPT A-X-task. Variability in X-task performance was reduced by medication compared with the children's unmedicated performance, but this effect did not reach significance. When the coefficient of variation was applied, severity measures and medication response were significant for the X-task, but not for the A-X-task. The CPT-X-task is a useful clinical screening test for ADHD and medication response. In particular, reaction time variability is related to default mode interference. The A-X-task is less useful in this regard.

  13. Demographic Variables and Selective, Sustained Attention and Planning through Cognitive Tasks among Healthy Adults

    Directory of Open Access Journals (Sweden)

    Afsaneh Zarghi

    2011-04-01

Introduction: Cognitive tasks are considered to be applicable and appropriate for assessing cognitive domains. The purpose of our study is to determine whether relationships exist between the variables of age, sex and education and selective and sustained attention and planning abilities, by means of computerized cognitive tasks among healthy adults. Methods: A cross-sectional study was implemented over 6 months, from June to November 2010, on 84 healthy adults (42 male and 42 female). All participants performed computerized CPT, STROOP and TOL tests after giving consent and being trained. Results: The obtained data indicate that there is a significant correlation between the variables of age, sex and education (p < 0.05). Discussion: The above-mentioned tests can be used to assess selective and sustained attention and planning.

  14. NUMBER OF SUCCESSIVE CYCLES NECESSARY TO ACHIEVE STABILITY OF SELECTED GROUND REACTION FORCE VARIABLES DURING CONTINUOUS JUMPING

    Directory of Open Access Journals (Sweden)

James M.W. Brownjohn

    2009-12-01

Because of inherent variability in all human cyclical movements, such as walking, running and jumping, data collected across a single cycle might be atypical and potentially unable to represent an individual's generalized performance. The study described here was designed to determine the number of successive cycles of continuous, repetitive countermovement jumping which a test subject should perform in a single experimental session to achieve stability of the mean of the corresponding continuously measured ground reaction force (GRF) variables. Seven vertical GRF variables (period of jumping cycle, duration of contact phase, peak force amplitude and its timing, average rate of force development, average rate of force relaxation, and impulse) were extracted on a cycle-by-cycle basis from vertical jumping force time histories generated by twelve participants who were jumping in response to regular electronic metronome beats in the range 2-2.8 Hz. Stability of the selected GRF variables across successive jumping cycles was examined for three jumping rates (2, 2.4 and 2.8 Hz) using two statistical methods: intra-class correlation (ICC) analysis and the segmental averaging technique (SAT). Results of the ICC analysis indicated that an average of four successive cycles (mean 4.5 ± 2.7 for 2 Hz; 3.9 ± 2.6 for 2.4 Hz; 3.3 ± 2.7 for 2.8 Hz) were necessary to achieve maximum ICC values. Except for jumping period, maximum ICC values ranged from 0.592 to 0.991 and all were significantly (p < 0.05) different from zero. Results of the SAT revealed that an average of ten successive cycles (mean 10.5 ± 3.5 for 2 Hz; 9.2 ± 3.8 for 2.4 Hz; 9.0 ± 3.9 for 2.8 Hz) were necessary to achieve stability of the selected parameters using criteria previously reported in the literature. Using 10 reference trials, the SAT required standard deviation criterion values of 0.49, 0.41 and 0.55 for the 2 Hz, 2.4 Hz and 2.8 Hz jumping rates, respectively, in order to approximate...

  15. The cost of travel time variability: three measures with properties

    DEFF Research Database (Denmark)

    Engelson, Leonid; Fosgerau, Mogens

    2016-01-01

    This paper explores the relationships between three types of measures of the cost of travel time variability: measures based on scheduling preferences and implicit departure time choice, Bernoulli type measures based on a univariate function of travel time, and mean-dispersion measures. We...

  16. Time-dependence in relativistic collisionless shocks: theory of the variable

    Energy Technology Data Exchange (ETDEWEB)

    Spitkovsky, A

    2004-02-05

    We describe results from time-dependent numerical modeling of the collisionless reverse shock terminating the pulsar wind in the Crab Nebula. We treat the upstream relativistic wind as composed of ions and electron-positron plasma embedded in a toroidal magnetic field, flowing radially outward from the pulsar in a sector around the rotational equator. The relativistic cyclotron instability of the ion gyrational orbit downstream of the leading shock in the electron-positron pairs launches outward propagating magnetosonic waves. Because of the fresh supply of ions crossing the shock, this time-dependent process achieves a limit-cycle, in which the waves are launched with periodicity on the order of the ion Larmor time. Compressions in the magnetic field and pair density associated with these waves, as well as their propagation speed, semi-quantitatively reproduce the behavior of the wisp and ring features described in recent observations obtained using the Hubble Space Telescope and the Chandra X-Ray Observatory. By selecting the parameters of the ion orbits to fit the spatial separation of the wisps, we predict the period of time variability of the wisps that is consistent with the data. When coupled with a mechanism for non-thermal acceleration of the pairs, the compressions in the magnetic field and plasma density associated with the optical wisp structure naturally account for the location of X-ray features in the Crab. We also discuss the origin of the high energy ions and their acceleration in the equatorial current sheet of the pulsar wind.

  17. Effects of musical tempo on physiological, affective, and perceptual variables and performance of self-selected walking pace.

    Science.gov (United States)

    Almeida, Flávia Angélica Martins; Nunes, Renan Felipe Hartmann; Ferreira, Sandro Dos Santos; Krinski, Kleverton; Elsangedy, Hassan Mohamed; Buzzachera, Cosme Franklin; Alves, Ragami Chaves; Gregorio da Silva, Sergio

    2015-06-01

    [Purpose] This study investigated the effects of musical tempo on physiological, affective, and perceptual responses as well as the performance of self-selected walking pace. [Subjects] The study included 28 adult women between 29 and 51 years old. [Methods] The subjects were divided into three groups: no musical stimulation group (control), and 90 and 140 beats per minute musical tempo groups. Each subject underwent three experimental sessions: involved familiarization with the equipment, an incremental test to exhaustion, and a 30-min walk on a treadmill at a self-selected pace, respectively. During the self-selected walking session, physiological, perceptual, and affective variables were evaluated, and walking performance was evaluated at the end. [Results] There were no significant differences in physiological variables or affective response among groups. However, there were significant differences in perceptual response and walking performance among groups. [Conclusion] Fast music (140 beats per minute) promotes a higher rating of perceived exertion and greater performance in self-selected walking pace without significantly altering physiological variables or affective response.

  18. Quadratic time dependent Hamiltonians and separation of variables

    International Nuclear Information System (INIS)

    Anzaldo-Meneses, A.

    2017-01-01

    Time dependent quantum problems defined by quadratic Hamiltonians are solved using canonical transformations. The Green’s function is obtained and a comparison with the classical Hamilton–Jacobi method leads to important geometrical insights like exterior differential systems, Monge cones and time dependent Gaussian metrics. The Wei–Norman approach is applied using unitary transformations defined in terms of generators of the associated Lie groups, here the semi-direct product of the Heisenberg group and the symplectic group. A new explicit relation for the unitary transformations is given in terms of a finite product of elementary transformations. The sequential application of adequate sets of unitary transformations leads naturally to a new separation of variables method for time dependent Hamiltonians, which is shown to be related to the Inönü–Wigner contraction of Lie groups. The new method allows also a better understanding of interacting particles or coupled modes and opens an alternative way to analyze topological phases in driven systems. - Highlights: • Exact unitary transformation reducing time dependent quadratic quantum Hamiltonian to zero. • New separation of variables method and simultaneous uncoupling of modes. • Explicit examples of transformations for one to four dimensional problems. • New general evolution equation for quadratic form in the action, respectively Green’s function.

  19. Time-of-day effects in implicit racial in-group preferences are likely selection effects, not circadian rhythms

    Directory of Open Access Journals (Sweden)

    Timothy P. Schofield

    2016-04-01

Time-of-day effects in human psychological functioning have been known of since the 1800s. However, outside of research specifically focused on the quantification of circadian rhythms, their study has largely been neglected. Moves toward online data collection now mean that psychological investigations take place around the clock, which affords researchers the ability to easily study time-of-day effects. Recent analyses have shown, for instance, that implicit attitudes have time-of-day effects. The plausibility that these effects indicate circadian rhythms rather than selection effects is considered in the current study. There was little evidence that the time-of-day effects in implicit attitudes shifted appropriately with factors known to influence the time of circadian rhythms. Moreover, even variables that cannot logically show circadian rhythms demonstrated stronger time-of-day effects than did implicit attitudes. Taken together, these results suggest that time-of-day effects in implicit attitudes are more likely to represent processes of selection rather than circadian rhythms, but do not rule out the latter possibility.

  20. Time-of-day effects in implicit racial in-group preferences are likely selection effects, not circadian rhythms.

    Science.gov (United States)

    Schofield, Timothy P

    2016-01-01

    Time-of-day effects in human psychological functioning have been known of since the 1800s. However, outside of research specifically focused on the quantification of circadian rhythms, their study has largely been neglected. Moves toward online data collection now mean that psychological investigations take place around the clock, which affords researchers the ability to easily study time-of-day effects. Recent analyses have shown, for instance, that implicit attitudes have time-of-day effects. The plausibility that these effects indicate circadian rhythms rather than selection effects is considered in the current study. There was little evidence that the time-of-day effects in implicit attitudes shifted appropriately with factors known to influence the time of circadian rhythms. Moreover, even variables that cannot logically show circadian rhythms demonstrated stronger time-of-day effects than did implicit attitudes. Taken together, these results suggest that time-of-day effects in implicit attitudes are more likely to represent processes of selection rather than circadian rhythms, but do not rule out the latter possibility.

  1. Variable Selection in Heterogeneous Datasets: A Truncated-rank Sparse Linear Mixed Model with Applications to Genome-wide Association Studies.

    Science.gov (United States)

    Wang, Haohan; Aragam, Bryon; Xing, Eric P

    2018-04-26

A fundamental and important challenge in modern datasets of ever increasing dimensionality is variable selection, which has taken on renewed interest recently due to the growth of biological and medical datasets with complex, non-i.i.d. structures. Naïvely applying classical variable selection methods such as the Lasso to such datasets may lead to a large number of false discoveries. Motivated by genome-wide association studies in genetics, we study the problem of variable selection for datasets arising from multiple subpopulations, when this underlying population structure is unknown to the researcher. We propose a unified framework for sparse variable selection that adaptively corrects for population structure via a low-rank linear mixed model. Most importantly, the proposed method does not require prior knowledge of sample structure in the data and adaptively selects a covariance structure of the correct complexity. Through extensive experiments, we illustrate the effectiveness of this framework over existing methods. Further, we test our method on three different genomic datasets from plants, mice, and humans, and discuss the knowledge we discover with our method. Copyright © 2018. Published by Elsevier Inc.

  2. Effects of implementing time-variable postgraduate training programmes on the organization of teaching hospital departments.

    Science.gov (United States)

    van Rossum, Tiuri R; Scheele, Fedde; Sluiter, Henk E; Paternotte, Emma; Heyligers, Ide C

    2018-01-31

    As competency-based education has gained currency in postgraduate medical education, it is acknowledged that trainees, having individual learning curves, acquire the desired competencies at different paces. To accommodate their different learning needs, time-variable curricula have been introduced making training no longer time-bound. This paradigm has many consequences and will, predictably, impact the organization of teaching hospitals. The purpose of this study was to determine the effects of time-variable postgraduate education on the organization of teaching hospital departments. We undertook exploratory case studies into the effects of time-variable training on teaching departments' organization. We held semi-structured interviews with clinical teachers and managers from various hospital departments. The analysis yielded six effects: (1) time-variable training requires flexible and individual planning, (2) learners must be active and engaged, (3) accelerated learning sometimes comes at the expense of clinical expertise, (4) fast-track training for gifted learners jeopardizes the continuity of care, (5) time-variable training demands more of supervisors, and hence, they need protected time for supervision, and (6) hospital boards should support time-variable training. Implementing time-variable education affects various levels within healthcare organizations, including stakeholders not directly involved in medical education. These effects must be considered when implementing time-variable curricula.

  3. Chaotic Dynamical State Variables Selection Procedure Based Image Encryption Scheme

    Directory of Open Access Journals (Sweden)

    Zia Bashir

    2017-12-01

Nowadays, in the modern digital era, the use of computer technologies such as smartphones, tablets and the Internet, together with the enormous quantity of confidential information being converted into digital form, has resulted in heightened security concerns. This, in turn, has led to rapid developments in cryptography, due to the imminent need for system security. Low-dimensional chaotic systems have low complexity and a small key space, yet they achieve high encryption speed. An image encryption scheme is proposed that, without compromising security, uses reasonable resources. We introduce a chaotic dynamical state variables selection procedure (CDSVSP) to use all state variables of a hyper-chaotic four-dimensional dynamical system. As a result, fewer iterations of the dynamical system are required, and resources are saved, thus making the algorithm fast and suitable for practical use. The simulation results of security and other miscellaneous tests demonstrate that the suggested algorithm excels in robustness, security and high-speed encryption.

  4. Cataclysmic variables from a ROSAT/2MASS selection - I. Four new intermediate polars

    NARCIS (Netherlands)

    Gänsicke, B.T.; Marsh, T.R.; Edge, A.; Rodríguez-Gil, P.; Steeghs, D.; Araujo-Betancor, S.; Harlaftis, E.; Giannakis, O.; Pyrzas, S.; Morales-Rueda, L.; Aungwerojwit, A.

    2005-01-01

    We report the first results from a new search for cataclysmic variables (CVs) using a combined X-ray (ROSAT)/infrared (2MASS) target selection that discriminates against background active galactic nuclei. Identification spectra were obtained at the Isaac Newton Telescope for a total of 174 targets,

  5. A Pareto-Based Adaptive Variable Neighborhood Search for Biobjective Hybrid Flow Shop Scheduling Problem with Sequence-Dependent Setup Time

    Directory of Open Access Journals (Sweden)

    Huixin Tian

    2016-01-01

    Full Text Available Unlike most research focused on the single-objective hybrid flow shop scheduling (HFS) problem, this paper investigates a biobjective HFS problem with sequence-dependent setup times. The two objectives are the minimization of total weighted tardiness and the total setup time. To efficiently solve this problem, a Pareto-based adaptive biobjective variable neighborhood search (PABOVNS) is developed. In the proposed PABOVNS, a solution is denoted as a sequence of all jobs and a decoding procedure is presented to obtain the corresponding complete schedule. In addition, the proposed PABOVNS has three major features that can guarantee a good balance of exploration and exploitation. First, an adaptive selection strategy of neighborhoods is proposed to automatically select the most promising neighborhood instead of the sequential selection strategy of the canonical VNS. Second, a two-phase multiobjective local search based on neighborhood search and path relinking is designed for each selected neighborhood. Third, an external archive with diversity maintenance is adopted to store the nondominated solutions and at the same time provide initial solutions for the local search. Computational results based on randomly generated instances show that the PABOVNS is efficient and even superior to some other powerful multiobjective algorithms in the literature.
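
    For readers unfamiliar with the building blocks named above, the Python sketch below shows a generic Pareto-dominance test, a non-dominated external archive update, and a score-based adaptive neighborhood selector; it is a simplified illustration, not the PABOVNS implementation, and all names are hypothetical.

```python
import random

def dominates(a, b):
    """a, b are (tardiness, setup_time) tuples; smaller is better on both."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep only mutually non-dominated objective vectors."""
    if any(dominates(obj, candidate) for obj in archive):
        return archive
    return [obj for obj in archive if not dominates(candidate, obj)] + [candidate]

class AdaptiveNeighborhoodSelector:
    """Roulette-wheel selection of neighborhoods weighted by recent success."""
    def __init__(self, n_neighborhoods):
        self.scores = [1.0] * n_neighborhoods
    def pick(self):
        total = sum(self.scores)
        r, acc = random.uniform(0, total), 0.0
        for k, s in enumerate(self.scores):
            acc += s
            if r <= acc:
                return k
        return len(self.scores) - 1
    def reward(self, k, improved):
        # Decay old scores and boost neighborhoods that produced non-dominated solutions.
        self.scores[k] = 0.9 * self.scores[k] + (1.0 if improved else 0.1)
```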

  6. Feature Selection Criteria for Real Time EKF-SLAM Algorithm

    Directory of Open Access Journals (Sweden)

    Fernando Auat Cheein

    2010-02-01

    Full Text Available This paper presents a selection procedure for environment features for the correction stage of a SLAM (Simultaneous Localization and Mapping) algorithm based on an Extended Kalman Filter (EKF). This approach decreases the computational time of the correction stage, which allows for real- and constant-time implementations of the SLAM. The selection procedure consists in choosing the features to which the SLAM system state covariance is most sensitive. The entire system is implemented on a mobile robot equipped with a laser range sensor. The features extracted from the environment correspond to lines and corners. Experimental results of the real-time SLAM algorithm and an analysis of the processing time consumed by the SLAM with the proposed feature selection procedure are shown. A comparison between the proposed feature selection approach and the classical sequential EKF-SLAM, along with an entropy-based feature selection approach, is also performed.
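
    A generic way to realise such a sensitivity-based selection (shown only as a hedged sketch, not the paper's exact criterion) is to rank candidate features by the reduction in the trace of the EKF state covariance that a single update with that feature would yield, and then run the correction stage with the top-ranked features only:

```python
import numpy as np

def expected_trace_reduction(P, H, R):
    """P: state covariance; H: measurement Jacobian of one feature; R: measurement noise."""
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    P_updated = (np.eye(P.shape[0]) - K @ H) @ P
    return np.trace(P) - np.trace(P_updated)     # how much this feature would shrink P

def select_features(P, jacobians, R, k):
    """Return the indices of the k features the covariance is most sensitive to."""
    scores = np.array([expected_trace_reduction(P, H, R) for H in jacobians])
    return np.argsort(scores)[::-1][:k].tolist()
```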

  7. Chaos synchronization in time-delayed systems with parameter mismatches and variable delay times

    International Nuclear Information System (INIS)

    Shahverdiev, E.M.; Nuriev, R.A.; Hashimov, R.H.; Shore, K.A.

    2004-06-01

    We investigate synchronization between two unidirectionally linearly coupled chaotic nonidentical time-delayed systems and show that parameter mismatches are of crucial importance to achieve synchronization. We establish that, independent of the relation between the delay time in the coupled systems and the coupling delay time, only retarded synchronization with the coupling delay time is obtained. We show that with parameter mismatch or without it neither complete nor anticipating synchronization occurs. We derive existence and stability conditions for the retarded synchronization manifold. We demonstrate our approach using examples of the Ikeda and Mackey-Glass models. Also, for the first time, we investigate chaos synchronization in time-delayed systems with variable delay time and find both existence and sufficient stability conditions for the retarded synchronization manifold with the coupling-delay lag time. (author)
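
    A minimal numerical sketch of the setting described above is given below: two unidirectionally coupled Mackey-Glass systems integrated with a simple Euler scheme, with the mean deviation between the response y(t) and the lag-shifted drive x(t - tau_c) used as a retarded-synchronization error. The coupling form, parameter values and step sizes are illustrative assumptions, not those of the paper.

```python
import numpy as np

def mg_rhs(x, x_delayed, a=0.2, b=0.1, n=10):
    # Mackey-Glass right-hand side with delayed feedback term
    return -b * x + a * x_delayed / (1.0 + x_delayed ** n)

dt, steps = 0.1, 60000
tau, tau_c, eps = 17.0, 5.0, 0.5          # feedback delay, coupling delay, coupling strength
d, dc = int(tau / dt), int(tau_c / dt)

x = np.full(steps, 0.9)                    # drive (constant history before start)
y = np.full(steps, 1.1)                    # response
for t in range(max(d, dc), steps - 1):
    x[t + 1] = x[t] + dt * mg_rhs(x[t], x[t - d])
    # response driven diffusively by the delayed drive signal x(t - tau_c)
    y[t + 1] = y[t] + dt * (mg_rhs(y[t], y[t - d]) + eps * (x[t - dc] - y[t]))

err = np.mean(np.abs(y[30000 + dc:] - x[30000:-dc]))
print("mean |y(t) - x(t - tau_c)| over the second half:", err)
```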

  8. The Steppengrille (Gryllus spec./assimilis): selective filters and signal mismatch on two time scales.

    Directory of Open Access Journals (Sweden)

    Matti Michael Rothbart

    Full Text Available In Europe, several species of crickets are available commercially as pet food. Here we investigated the calling song and phonotactic selectivity for sound patterns on the short and long time scales for one such cricket, Gryllus spec., available as "Gryllus assimilis", the Steppengrille, originally from Ecuador. The calling song consisted of short chirps (2-3 pulses, carrier frequency: 5.0 kHz) emitted with a pulse period of 30.2 ms and a chirp rate of 0.43 per second. Females exhibited high selectivity on both time scales. The preference for pulse period peaked at 33 ms, which was higher than the pulse period produced by males. Two consecutive pulses per chirp at the correct pulse period were already sufficient for positive phonotaxis. The preference for the chirp pattern was limited by selectivity for small chirp duty cycles and for chirp periods between 200 ms and 500 ms. The long chirp period of the songs of males was unattractive to females. On both time scales a mismatch between the song signal of the males and the preference of females was observed. The variability of song parameters as quantified by the coefficient of variation was below 50% for all temporal measures. Hence, there was not a strong indication for directional selection on song parameters by females, which could account for the observed mismatch. The divergence of the chirp period and female preference may originate from a founder effect, when the Steppengrille was cultured. Alternatively, the mismatch was a result of selection pressures exerted by commercial breeders for low singing activity, to satisfy customers with softly singing crickets. In the latter case the prominent divergence between male song and female preference was the result of domestication and may serve as an example of rapid evolution of song traits in acoustic communication systems.

  9. Simulating variable-density flows with time-consistent integration of Navier-Stokes equations

    Science.gov (United States)

    Lu, Xiaoyi; Pantano, Carlos

    2017-11-01

    In this talk, we present several features of a high-order semi-implicit variable-density low-Mach Navier-Stokes solver. A new formulation to solve pressure Poisson-like equation of variable-density flows is highlighted. With this formulation of the numerical method, we are able to solve all variables with a uniform order of accuracy in time (consistent with the time integrator being used). The solver is primarily designed to perform direct numerical simulations for turbulent premixed flames. Therefore, we also address other important elements, such as energy-stable boundary conditions, synthetic turbulence generation, and flame anchoring method. Numerical examples include classical non-reacting constant/variable-density flows, as well as turbulent premixed flames.
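
    For reference, a variable-coefficient pressure Poisson equation of the generic form that low-Mach, variable-density projection solvers typically discretize is shown below; this is a textbook form, not necessarily the exact formulation used in this work.

```latex
\nabla \cdot \left( \frac{1}{\rho}\, \nabla p \right)
  = \frac{1}{\Delta t}\, \nabla \cdot \mathbf{u}^{*},
\qquad
\mathbf{u}^{n+1} = \mathbf{u}^{*} - \frac{\Delta t}{\rho}\, \nabla p
```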

  10. Modelling accuracy and variability of motor timing in treated and untreated Parkinson’s disease and healthy controls

    Directory of Open Access Journals (Sweden)

    Catherine Rhian Gwyn Jones

    2011-12-01

    Full Text Available Parkinson’s disease (PD) is characterised by difficulty with the timing of movements. Data collected using the synchronization-continuation paradigm, an established motor timing paradigm, have produced varying results, but with most studies finding impairment. Some of this inconsistency comes from variation in the medication state tested, in the inter-stimulus intervals (ISI) selected, and in changeable focus on either the synchronization (tapping in time with a tone) or continuation (maintaining the rhythm in the absence of the tone) phase. We sought to re-visit the paradigm by testing across four groups of participants: healthy controls, medication-naïve de novo PD patients, and treated PD patients both ‘on’ and ‘off’ dopaminergic medication. Four finger tapping intervals (ISI) were used: 250 ms, 500 ms, 1000 ms and 2000 ms. Categorical predictors (group, ISI, and phase) were used to predict accuracy and variability using a linear mixed model. Accuracy was defined as the relative error of a tap, and variability as the deviation of the participant’s tap from the group-predicted relative error. Our primary finding is that the treated PD group (PD patients ‘on’ and ‘off’ dopaminergic therapy) showed a significantly different pattern of accuracy compared to the de novo group and the healthy controls at the 250 ms interval. At this interval, the treated PD patients performed ‘ahead’ of the beat whilst the other groups performed ‘behind’ the beat. We speculate that this ‘hastening’ relates to the clinical phenomenon of motor festination. Across all groups, variability was smallest for both phases at the 500 ms interval, suggesting an innate preference for finger tapping within this range. Tapping variability for the two phases became increasingly divergent at the longer intervals, with worse performance in the continuation phase. The data suggest that patients with PD can be best discriminated from healthy controls on measures of

  11. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    International Nuclear Information System (INIS)

    Louit, D.M.; Pascual, R.; Jardine, A.K.S.

    2009-01-01

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to an erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, a lack of a systematic approach and sometimes an inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for model selection to represent the failure process for a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed to analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework is directed towards the discrimination between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach'), when there may be the presence of system ageing or reliability growth. An illustrative example based on failure data from a fleet of backhoes is included.
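
    One standard trend test that this kind of data-testing stage can rely on is the Laplace test; the short sketch below (an illustration only, not necessarily one of the specific tests reviewed in the paper) computes the time-truncated Laplace statistic, which is approximately standard normal when the failure process is trend-free. A strongly positive value suggests deterioration, a strongly negative value suggests reliability growth.

```python
import math

def laplace_test(failure_times, observation_end):
    """failure_times: cumulative failure times; observation_end: end of the observation window."""
    n = len(failure_times)
    u = (sum(failure_times) / n - observation_end / 2.0) / (
        observation_end * math.sqrt(1.0 / (12.0 * n)))
    return u  # approximately N(0, 1) under the null hypothesis of no trend

times = [120, 340, 610, 790, 880, 930, 960]   # hypothetical failure record (hours)
print(laplace_test(times, 1000.0))
```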

  12. Bayesian variable selection for post-analytic interrogation of susceptibility loci.

    Science.gov (United States)

    Chen, Siying; Nunez, Sara; Reilly, Muredach P; Foulkes, Andrea S

    2017-06-01

    Understanding the complex interplay among protein coding genes and regulatory elements requires rigorous interrogation with analytic tools designed for discerning the relative contributions of overlapping genomic regions. To this aim, we offer a novel application of Bayesian variable selection (BVS) for classifying genomic class level associations using existing large meta-analysis summary level resources. This approach is applied using the expectation maximization variable selection (EMVS) algorithm to typed and imputed SNPs across 502 protein coding genes (PCGs) and 220 long intergenic non-coding RNAs (lncRNAs) that overlap 45 known loci for coronary artery disease (CAD) using publicly available Global Lipids Genetics Consortium (GLGC) (Teslovich et al., 2010; Willer et al., 2013) meta-analysis summary statistics for low-density lipoprotein cholesterol (LDL-C). The analysis reveals 33 PCGs and three lncRNAs across 11 loci with >50% posterior probabilities for inclusion in an additive model of association. The findings are consistent with previous reports, while providing some new insight into the architecture of LDL-cholesterol to be investigated further. As genomic taxonomies continue to evolve, additional classes, such as enhancer elements and splicing regions, can easily be layered into the proposed analysis framework. Moreover, application of this approach to alternative publicly available meta-analysis resources, or more generally as a post-analytic strategy to further interrogate regions that are identified through single point analysis, is straightforward. All coding examples are implemented in R version 3.2.1 and provided as supplemental material. © 2016, The International Biometric Society.

  13. Evidence for a time-invariant phase variable in human ankle control.

    Directory of Open Access Journals (Sweden)

    Robert D Gregg

    Full Text Available Human locomotion is a rhythmic task in which patterns of muscle activity are modulated by state-dependent feedback to accommodate perturbations. Two popular theories have been proposed for the underlying embodiment of phase in the human pattern generator: a time-dependent internal representation or a time-invariant feedback representation (i.e., reflex mechanisms). In either case, the neuromuscular system must update or represent the phase of locomotor patterns based on the system state, which can include measurements of hundreds of variables. However, a much simpler representation of phase has emerged in recent designs for legged robots, which control joint patterns as functions of a single monotonic mechanical variable, termed a phase variable. We propose that human joint patterns may similarly depend on a physical phase variable, specifically the heel-to-toe movement of the Center of Pressure under the foot. We found that when the ankle is unexpectedly rotated to a position it would have encountered later in the step, the Center of Pressure also shifts forward to the corresponding later position, and the remaining portion of the gait pattern ensues. This phase shift suggests that the progression of the stance ankle is controlled by a biomechanical phase variable, motivating future investigations of phase variables in human locomotor control.

  14. Multivariate modeling of complications with data driven variable selection: Guarding against overfitting and effects of data set size

    International Nuclear Information System (INIS)

    Schaaf, Arjen van der; Xu Chengjian; Luijk, Peter van; Veld, Aart A. van’t; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    Purpose: Multivariate modeling of complications after radiotherapy is frequently used in conjunction with data-driven variable selection. This study quantifies the risk of overfitting in a data-driven modeling method using bootstrapping for data with typical clinical characteristics, and estimates the minimum amount of data needed to obtain models with relatively high predictive power. Materials and methods: To facilitate repeated modeling and cross-validation with independent datasets for the assessment of true predictive power, a method was developed to generate simulated data with statistical properties similar to real clinical data sets. Characteristics of three clinical data sets from radiotherapy treatment of head and neck cancer patients were used to simulate data with set sizes between 50 and 1000 patients. A logistic regression method using bootstrapping and forward variable selection was used for complication modeling, resulting for each simulated data set in a selected number of variables and an estimated predictive power. The true optimal number of variables and true predictive power were calculated using cross-validation with very large independent data sets. Results: For all simulated data set sizes the number of variables selected by the bootstrapping method was on average close to the true optimal number of variables, but showed considerable spread. Bootstrapping is more accurate in selecting the optimal number of variables than the AIC and BIC alternatives, but this did not translate into a significant difference of the true predictive power. The true predictive power asymptotically converged toward a maximum predictive power for large data sets, and the estimated predictive power converged toward the true predictive power. More than half of the potential predictive power is gained after approximately 200 samples. Our simulations demonstrated severe overfitting (a predictive power lower than that of predicting 50% probability) in a number of small
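
    The following Python sketch illustrates the general bootstrapping-plus-forward-selection idea in a minimal form (AIC-guided forward selection repeated over bootstrap resamples, summarised as selection frequencies); it is an assumption-laden simplification, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def forward_select(X, y, max_vars=5):
    """AIC-guided forward selection for a logistic complication model."""
    selected, remaining, best_aic = [], list(range(X.shape[1])), np.inf
    while remaining and len(selected) < max_vars:
        scores = []
        for j in remaining:
            cols = selected + [j]
            m = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
            ll = -log_loss(y, m.predict_proba(X[:, cols]), normalize=False)
            scores.append((2 * (len(cols) + 1) - 2 * ll, j))   # AIC = 2k - 2 lnL
        aic, j = min(scores)
        if aic >= best_aic:
            break                     # no further improvement
        best_aic = aic
        selected.append(j)
        remaining.remove(j)
    return selected

def bootstrap_selection_frequency(X, y, n_boot=200, seed=1):
    """How often each variable is picked across bootstrap resamples."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:
            continue                  # skip degenerate resamples
        for j in forward_select(X[idx], y[idx]):
            counts[j] += 1
    return counts / n_boot
```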

  15. Taking time seriously. A theory of socioemotional selectivity.

    Science.gov (United States)

    Carstensen, L L; Isaacowitz, D M; Charles, S T

    1999-03-01

    Socioemotional selectivity theory claims that the perception of time plays a fundamental role in the selection and pursuit of social goals. According to the theory, social motives fall into 1 of 2 general categories--those related to the acquisition of knowledge and those related to the regulation of emotion. When time is perceived as open-ended, knowledge-related goals are prioritized. In contrast, when time is perceived as limited, emotional goals assume primacy. The inextricable association between time left in life and chronological age ensures age-related differences in social goals. Nonetheless, the authors show that the perception of time is malleable, and social goals change in both younger and older people when time constraints are imposed. The authors argue that time perception is integral to human motivation and suggest potential implications for multiple subdisciplines and research interests in social, developmental, cultural, cognitive, and clinical psychology.

  16. Ultrahigh-dimensional variable selection method for whole-genome gene-gene interaction analysis

    Directory of Open Access Journals (Sweden)

    Ueki Masao

    2012-05-01

    Full Text Available Abstract Background Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify genetic components that confer susceptibility to human complex diseases. Individual hypothesis testing for SNP-SNP pairs, as in a common genome-wide association study (GWAS), however involves difficulty in setting an overall p-value due to the complicated correlation structure, namely, the multiple testing problem that causes unacceptable false negative results. A number of SNP-SNP pairs far larger than the sample size, the so-called large p small n problem, precludes simultaneous analysis using multiple regression. A method that overcomes the above issues is thus needed. Results We adopt an up-to-date method for ultrahigh-dimensional variable selection termed sure independence screening (SIS) for appropriate handling of the enormous number of SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy coding methods and a subsequent variable selection procedure in the SIS method, suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case-Control Consortium) data. Conclusions Based on the machine-learning principle, the proposed method gives a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
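
    A hedged, greatly simplified sketch of the screening idea is shown below: each SNP-SNP product term is scored by a cheap marginal association with the phenotype, the top-ranked pairs are kept, and a joint logistic model is fitted to them. The real EPISIS software uses dedicated dummy codings and GPU kernels; everything here is illustrative.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

def screen_pairs(G, y, keep=50):
    """G: (n, p) genotype matrix coded 0/1/2; y: binary phenotype."""
    yc = y - y.mean()
    scores = []
    for i, j in combinations(range(G.shape[1]), 2):
        z = G[:, i] * G[:, j]
        z = z - z.mean()
        denom = np.sqrt((z ** 2).sum() * (yc ** 2).sum()) + 1e-12
        scores.append((abs(z @ yc) / denom, (i, j)))   # |correlation| as a cheap screen
    scores.sort(reverse=True)
    return [pair for _, pair in scores[:keep]]

def fit_top_pairs(G, y, pairs):
    """Joint logistic regression on the retained interaction terms."""
    Z = np.column_stack([G[:, i] * G[:, j] for i, j in pairs])
    return LogisticRegression(max_iter=2000).fit(Z, y)
```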

  17. Stochastic weather inputs for improved urban water demand forecasting: application of nonlinear input variable selection and machine learning methods

    Science.gov (United States)

    Quilty, J.; Adamowski, J. F.

    2015-12-01

    Urban water supply systems are often stressed during seasonal outdoor water use, as climate-related water demands are variable in nature, making it difficult to optimize the operation of the water supply system. Urban water demand (UWD) forecasts failing to include meteorological conditions as inputs to the forecast model may produce poor forecasts, as they cannot account for the increase/decrease in demand related to meteorological conditions. Meteorological records stochastically simulated into the future can be used as inputs to data-driven UWD forecasts, generally resulting in improved forecast accuracy. This study aims to produce data-driven UWD forecasts for two different Canadian water utilities (Montreal and Victoria) using machine learning methods, by first selecting historical UWD and meteorological records derived from a stochastic weather generator using nonlinear input variable selection. The nonlinear input variable selection methods considered in this work are derived from the concept of conditional mutual information, a nonlinear dependency measure based on (multivariate) probability density functions that accounts for relevance, conditional relevance, and redundancy within a potential set of input variables. The results of our study indicate that stochastic weather inputs can improve UWD forecast accuracy for the two sites considered in this work. Nonlinear input variable selection is suggested as a means to identify which meteorological conditions should be utilized in the forecast.
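
    The study's criterion is a full conditional mutual information measure; the sketch below only approximates that idea with a greedy mRMR-style loop built on scikit-learn's k-nearest-neighbour mutual information estimator (relevance minus average redundancy), and is included purely as an illustration.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def greedy_mi_selection(X, y, n_select=5):
    """Greedy input selection balancing relevance against redundancy."""
    relevance = mutual_info_regression(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_select:
        best_j, best_score = None, -np.inf
        for j in remaining:
            if selected:
                # average MI between candidate j and the inputs already selected
                redundancy = mutual_info_regression(X[:, selected], X[:, j],
                                                    random_state=0).mean()
            else:
                redundancy = 0.0
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```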

  18. Determination of main fruits in adulterated nectars by ATR-FTIR spectroscopy combined with multivariate calibration and variable selection methods.

    Science.gov (United States)

    Miaw, Carolina Sheng Whei; Assis, Camila; Silva, Alessandro Rangel Carolino Sales; Cunha, Maria Luísa; Sena, Marcelo Martins; de Souza, Scheilla Vitorino Carvalho

    2018-07-15

    Grape, orange, peach and passion fruit nectars were formulated and adulterated by dilution with syrup, apple and cashew juices at 10 levels for each adulterant. Attenuated total reflectance Fourier transform mid infrared (ATR-FTIR) spectra were obtained. Partial least squares (PLS) multivariate calibration models allied to different variable selection methods, such as interval partial least squares (iPLS), ordered predictors selection (OPS) and genetic algorithm (GA), were used to quantify the main fruits. PLS improved by iPLS-OPS variable selection showed the highest predictive capacity to quantify the main fruit contents. The selected variables in the final models varied from 72 to 100; the root mean square errors of prediction were estimated from 0.5 to 2.6%; the correlation coefficients of prediction ranged from 0.948 to 0.990; and, the mean relative errors of prediction varied from 3.0 to 6.7%. All of the developed models were validated. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Predictor variables for a half marathon race time in recreational male runners

    Directory of Open Access Journals (Sweden)

    Rüst CA

    2011-08-01

    Full Text Available Christoph Alexander Rüst (1), Beat Knechtle (1,2), Patrizia Knechtle (2), Ursula Barandun (1), Romuald Lepers (3), Thomas Rosemann (1). (1) Institute of General Practice and Health Services Research, University of Zurich, Zurich, Switzerland; (2) Gesundheitszentrum St Gallen, St Gallen, Switzerland; (3) INSERM U887, University of Burgundy, Faculty of Sport Sciences, Dijon, France. Abstract: The aim of this study was to investigate predictor variables of anthropometry, training, and previous experience in order to predict a half marathon race time for future novice recreational male half marathoners. Eighty-four male finishers in the ‘Half Marathon Basel’ completed the race distance within (mean and standard deviation, SD) 103.9 (16.5) min, running at a speed of 12.7 (1.9) km/h. After multivariate analysis of the anthropometric characteristics, body mass index (r = 0.56), suprailiacal (r = 0.36) and medial calf skinfold (r = 0.53) were related to race time. For the variables of training and previous experience, speed in running during the training sessions (r = –0.54) was associated with race time. After multivariate analysis of both the significant anthropometric and training variables, body mass index (P = 0.0150) and speed in running during training (P = 0.0045) were related to race time. Race time in a half marathon might be partially predicted by the following equation (r2 = 0.44): Race time (min) = 72.91 + 3.045 × (body mass index, kg/m2) – 3.884 × (speed in running during training, km/h) for recreational male runners. To conclude, variables of both anthropometry and training were related to half marathon race time in recreational male half marathoners and cannot be reduced to one single predictor variable. Keywords: anthropometry, body fat, skin-folds, training, endurance
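
    Expressed as code, the reported regression equation (coefficients taken verbatim from the abstract) is simply:

```python
def predicted_half_marathon_time(bmi_kg_m2, training_speed_kmh):
    """Predicted race time in minutes for recreational male runners (r2 = 0.44)."""
    return 72.91 + 3.045 * bmi_kg_m2 - 3.884 * training_speed_kmh

# e.g. BMI of 23.0 kg/m2 and a training speed of 11.5 km/h -> roughly 98 min
print(predicted_half_marathon_time(23.0, 11.5))
```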

  20. Awareness of the Faculty Members at Al-Balqa' Applied University to the Concept of Time Management and Its Relation to Some Variables

    Science.gov (United States)

    Sabha, Raed Adel; Al-Assaf, Jamal Abdel-Fattah

    2012-01-01

    The study aims to investigate the extent of time management awareness among the faculty members of Al-Balqa' Applied University, and its relation to some variables. The study was conducted on 150 teachers who were selected randomly. To achieve the study goals, an appropriate instrument was built based on the educational literature and…

  1. [Modelling the effect of local climatic variability on dengue transmission in Medellin (Colombia) by means of time series analysis].

    Science.gov (United States)

    Rúa-Uribe, Guillermo L; Suárez-Acosta, Carolina; Chauca, José; Ventosilla, Palmira; Almanza, Rita

    2013-09-01

    Dengue fever is a vector-borne disease with a major impact on public health, and its transmission is influenced by entomological, sociocultural and economic factors. Additionally, climate variability plays an important role in the transmission dynamics. A broad scientific consensus indicates that the strong association between climatic variables and disease could be used to develop models to explain the incidence of the disease. The objective was to develop a model that provides a better understanding of dengue transmission dynamics in Medellin and predicts increases in the incidence of the disease. The incidence of dengue fever was used as the dependent variable, and weekly climatic factors (maximum, mean and minimum temperature, relative humidity and precipitation) as independent variables. Expert Modeler was used to develop a model to better explain the behavior of the disease. Climatic variables with a significant association to the dependent variable were selected through ARIMA models. The model explains 34% of the observed variability. Precipitation was the climatic variable showing a statistically significant association with the incidence of dengue fever, but with a 20-week delay. In Medellin, the transmission of dengue fever was influenced by climate variability, especially precipitation. The strong association between dengue fever and precipitation allowed the construction of a model to help understand dengue transmission dynamics. This information will be useful to develop appropriate and timely strategies for dengue control.
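
    A hedged sketch of this type of model is shown below: a seasonal ARIMA-type model for weekly case counts with precipitation lagged by 20 weeks as an exogenous regressor, written with statsmodels' SARIMAX. The model order and column names are illustrative assumptions, not the specification identified in the study.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_dengue_model(df: pd.DataFrame):
    """df: weekly data with columns 'cases' and 'precipitation'."""
    exog = df["precipitation"].shift(20).bfill()   # rainfall delayed by 20 weeks
    model = SARIMAX(df["cases"], exog=exog,
                    order=(1, 0, 1),               # illustrative ARMA order
                    seasonal_order=(1, 0, 0, 52))  # yearly seasonality on weekly data
    return model.fit(disp=False)
```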

  2. Between-centre variability versus variability over time in DXA whole body measurements evaluated using a whole body phantom

    Energy Technology Data Exchange (ETDEWEB)

    Louis, Olivia [Department of Radiology, AZ-VUB, Vrije Universiteit Brussel, Laarbeeklaan 101, 1090 Brussel (Belgium)]. E-mail: olivia.louis@az.vub.ac.be; Verlinde, Siska [Belgian Study Group for Pediatric Endocrinology (Belgium); Thomas, Muriel [Belgian Study Group for Pediatric Endocrinology (Belgium); De Schepper, Jean [Department of Pediatrics, AZ-VUB, Vrije Universiteit Brussel, Laarbeeklaan 101, 1090 Brussel (Belgium)

    2006-06-15

    This study aimed to compare the variability of whole body measurements, using dual energy X-ray absorptiometry (DXA), among geographically distinct centres versus that over time in a given centre. A Hologic-designed 28 kg modular whole body phantom was used, including high density polyethylene, gray polyvinylchloride and aluminium. It was scanned on seven Hologic QDR 4500 DXA devices, located in seven centres and was also repeatedly (n = 18) scanned in the reference centre, over a time span of 5 months. The mean between-centre coefficient of variation (CV) ranged from 2.0 (lean mass) to 5.6% (fat mass) while the mean within-centre CV ranged from 0.3 (total mass) to 4.7% (total area). Between-centre variability compared well with within-centre variability for total area, bone mineral content and bone mineral density, but was significantly higher for fat (p < 0.001), lean (p < 0.005) and total mass (p < 0.001). Our results suggest that, even when using the same device, the between-centre variability remains a matter of concern, particularly where body composition is concerned.

  3. Stability of Delayed Hopfield Neural Networks with Variable-Time Impulses

    Directory of Open Access Journals (Sweden)

    Yangjun Pei

    2014-01-01

    Full Text Available In this paper the globally exponential stability criteria of delayed Hopfield neural networks with variable-time impulses are established. The proposed criteria can also be applied in Hopfield neural networks with fixed-time impulses. A numerical example is presented to illustrate the effectiveness of our theoretical results.

  4. Real-time flavor tagging selection in ATLAS

    CERN Document Server

    Madaffari, D

    2016-01-01

    In high-energy physics experiments, the online selection is crucial to reject the overwhelming majority of uninteresting collisions. In particular, the ATLAS experiment includes b-jet selections in its trigger, in order to select final states with significant heavy-flavor content. Dedicated selections are developed to identify fully hadronic final states containing b-jets in a timely manner while maintaining affordable trigger rates. ATLAS successfully operated b-jet trigger selections during both the 2011 and 2012 Large Hadron Collider data-taking campaigns. Work is ongoing to improve the performance of the online tagging algorithms to be deployed in Run 2 in 2015. An overview of the Run 1 ATLAS b-jet trigger strategy, along with future prospects, is presented in this paper. Data-driven techniques to extract the online b-tagging performance, a key ingredient for all analyses relying on such triggers, are also discussed and preliminary results presented.

  5. Calibration Variable Selection and Natural Zero Determination for Semispan and Canard Balances

    Science.gov (United States)

    Ulbrich, Norbert M.

    2013-01-01

    Independent calibration variables for the characterization of semispan and canard wind tunnel balances are discussed. It is shown that the variable selection for a semispan balance is determined by the location of the resultant normal and axial forces that act on the balance. These two forces are the first and second calibration variable. The pitching moment becomes the third calibration variable after the normal and axial forces are shifted to the pitch axis of the balance. Two geometric distances, i.e., the rolling and yawing moment arms, are the fourth and fifth calibration variable. They are traditionally substituted by corresponding moments to simplify the use of calibration data during a wind tunnel test. A canard balance is related to a semispan balance. It also only measures loads on one half of a lifting surface. However, the axial force and yawing moment are of no interest to users of a canard balance. Therefore, its calibration variable set is reduced to the normal force, pitching moment, and rolling moment. The combined load diagrams of the rolling and yawing moment for a semispan balance are discussed. They may be used to illustrate connections between the wind tunnel model geometry, the test section size, and the calibration load schedule. Then, methods are reviewed that may be used to obtain the natural zeros of a semispan or canard balance. In addition, characteristics of three semispan balance calibration rigs are discussed. Finally, basic requirements for a full characterization of a semispan balance are reviewed.

  6. Multi-Objective Flexible Flow Shop Scheduling Problem Considering Variable Processing Time due to Renewable Energy

    Directory of Open Access Journals (Sweden)

    Xiuli Wu

    2018-03-01

    Full Text Available Renewable energy is an alternative to non-renewable energy for reducing the carbon footprint of manufacturing systems. Determining how to build an energy-efficient scheduling solution when production is driven by both renewable and non-renewable energy is therefore of great importance. In this paper, a multi-objective flexible flow shop scheduling problem that considers variable processing time due to renewable energy (MFFSP-VPTRE) is studied. First, the optimization model of the MFFSP-VPTRE is formulated, considering the periodicity of renewable energy and the limitations of energy storage capacity. Then, a hybrid non-dominated sorting genetic algorithm with variable local search (HNSGA-II) is proposed to solve the MFFSP-VPTRE. An operation- and machine-based encoding method is employed and a low-carbon scheduling algorithm is presented. Besides crossover and mutation, a variable local search is used to improve the offspring’s Pareto set. The offspring and the parents are combined, and those that dominate more solutions are selected to continue evolving. Finally, two groups of experiments are carried out. The results show that the low-carbon scheduling algorithm can effectively reduce the carbon footprint under the premise of makespan optimization, and that the HNSGA-II outperforms the traditional NSGA-II and can solve the MFFSP-VPTRE effectively and efficiently.

  7. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
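
    For reference, the model class under study can be written in its usual form (notation illustrative): a known link function g, a parametric linear part, and additive nonparametric components estimated here by polynomial splines.

```latex
g\!\left( \mathbb{E}\left[ Y \mid \mathbf{X}, \mathbf{Z} \right] \right)
  = \mathbf{X}^{\top} \boldsymbol{\beta} + \sum_{j=1}^{d} f_j\!\left( Z_j \right)
```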

  8. Selection for altruism through random drift in variable size populations

    Directory of Open Access Journals (Sweden)

    Houchmandzadeh Bahram

    2012-05-01

    Full Text Available Abstract Background Altruistic behavior is defined as helping others at a cost to oneself and a lowered fitness. The lower fitness implies that altruists should be selected against, which is in contradiction with their widespread presence in nature. Present models of selection for altruism (kin or multilevel) show that altruistic behaviors can have ‘hidden’ advantages if the ‘common good’ produced by altruists is restricted to some related or unrelated groups. These models are mostly deterministic, or assume a frequency-dependent fitness. Results Evolutionary dynamics is a competition between deterministic selection pressure and stochastic events due to random sampling from one generation to the next. We show here that an altruistic allele extending the carrying capacity of the habitat can win by increasing the random drift of “selfish” alleles. In other terms, the fixation probability of altruistic genes can be higher than that of selfish ones, even though altruists have a lower fitness. Moreover, when populations are geographically structured, the altruists’ advantage can be highly amplified and the fixation probability of selfish genes can tend toward zero. The above results are obtained by both numerical and analytical calculations. Analytical results are obtained in the limit of large populations. Conclusions The theory we present does not involve kin or multilevel selection, but is based on the existence of random drift in variable-size populations. The model is a generalization of the original Fisher-Wright and Moran models where the carrying capacity depends on the number of altruists.

  9. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    Energy Technology Data Exchange (ETDEWEB)

    Louit, D.M. [Komatsu Chile, Av. Americo Vespucio 0631, Quilicura, Santiago (Chile)], E-mail: rpascual@ing.puc.cl; Pascual, R. [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile); Jardine, A.K.S. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Road, Toronto, Ont., M5S 3G8 (Canada)

    2009-10-15

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to an erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, a lack of a systematic approach and sometimes an inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for model selection to represent the failure process for a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed to analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework is directed towards the discrimination between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach'), when there may be the presence of system ageing or reliability growth. An illustrative example based on failure data from a fleet of backhoes is included.

  10. Long Pulse Integrator of Variable Integral Time Constant

    International Nuclear Information System (INIS)

    Wang Yong; Ji Zhenshan; Du Xiaoying; Wu Yichun; Li Shi; Luo Jiarong

    2010-01-01

    A new kind of long pulse integrator was designed, based on a variable integral time constant and on subtracting the integral drift using its estimated slope. The integral time constant can be changed by choosing different integral resistors, in order to improve the signal-to-noise ratio and avoid output saturation; the slope of the integral drift over a certain period of time can be calculated by digital signal processing and used to subtract the drift from the original integral signal in real time. The tests show that this kind of long pulse integrator is good at reducing integral drift and can also eliminate the effects of changing the integral time constant. According to the experiments, the integral time constant can be changed by remote control and manual adjustment of the integral drift is avoided, which greatly improves experimental efficiency; the integrator can be used for electromagnetic measurement in Tokamak experiments. (authors)
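
    The drift-deduction idea generalises beyond the specific hardware: the sketch below (purely illustrative, not the instrument's firmware) integrates a pick-up signal numerically, fits the drift slope on a quiet pre-pulse window, and subtracts the fitted linear drift from the running integral.

```python
import numpy as np

def integrate_with_drift_correction(signal, dt, quiet_samples=1000):
    """Numerically integrate a signal and remove a linear drift estimated pre-pulse."""
    raw = np.cumsum(signal) * dt                      # raw running integral
    t = np.arange(len(signal)) * dt
    # Fit the drift slope/offset on a window where the true integral should be flat.
    slope, offset = np.polyfit(t[:quiet_samples], raw[:quiet_samples], 1)
    return raw - (slope * t + offset)                 # drift-corrected integral
```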

  11. Automatic classification of time-variable X-ray sources

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia)

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ∼97% for a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest-derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
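
    A minimal sketch of the classification step is shown below using scikit-learn's RandomForestClassifier, with a simple top-1 minus top-2 probability margin to flag ambiguous sources; the proximity-based outlier measure used in the paper is not exposed by scikit-learn, so it is omitted here. Feature construction and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_sources(X_train, y_train, X_unknown):
    """Train on labelled variable sources, then probabilistically classify unknowns."""
    rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(X_train, y_train)
    proba = rf.predict_proba(X_unknown)
    top2 = np.sort(proba, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]          # small margin = ambiguous classification
    labels = rf.classes_[np.argmax(proba, axis=1)]
    return labels, proba, margin, rf.oob_score_
```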

  12. Automatic classification of time-variable X-ray sources

    International Nuclear Information System (INIS)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-01-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ∼97% for a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest-derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.

  13. Inter- and intra-observer variability of time-lapse annotations

    DEFF Research Database (Denmark)

    Sundvall, Linda; Ingerslev, Hans Jakob; Breth Knudsen, Ulla

    2013-01-01

    Study question: How consistent is the time-lapse annotation of dynamic and static morphologic parameters of embryo development, within and between observers? Summary answer: The assessment of dynamic parameters is characterized by almost perfect agreement within and between observers. This provides the basis for further investigation of embryo assessment and selection by time-lapse imaging in prospective trials. What is known already: The commonly employed method used to assess embryos in IVF treatments is based on static evaluation of morphology in a microscope, but this is limited by substantial intra- and inter-observer variation. Time-lapse imaging has been proposed as a method to refine embryo selection by adding new... Study funding/competing interest(s): Research at the Fertility Clinic was funded by an unrestricted grant from Ferring and MSD. The authors have no competing interests to declare.

  14. Variability of interconnected wind plants: correlation length and its dependence on variability time scale

    Science.gov (United States)

    St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.

    2015-04-01

    The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. But how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity for grid integration. To answer ‘how far is far enough,’ we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations and time resolutions. First, one year of five-minute wind power generation data from 29 wind farms span 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate the dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25-2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. For high
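
    The filtering-and-correlation analysis described above can be sketched in a few lines (illustrative only; the filter order, cut-offs and data layout are assumptions, not those of the study):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def highpass(series, cutoff_hours, dt_hours):
    """4th-order Butterworth high-pass; cutoff period must exceed twice the sampling interval."""
    wn = (1.0 / cutoff_hours) / (0.5 / dt_hours)   # normalized cut-off frequency
    b, a = butter(4, wn, btype="highpass")
    return filtfilt(b, a, series)

def correlation_vs_distance(speeds, distances, cutoff_hours, dt_hours):
    """speeds: (n_stations, n_samples); distances: (n_stations, n_stations) separation matrix."""
    filtered = np.array([highpass(s, cutoff_hours, dt_hours) for s in speeds])
    corr = np.corrcoef(filtered)
    iu = np.triu_indices(len(speeds), k=1)
    return distances[iu], corr[iu]     # one (separation, correlation) point per station pair
```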

  15. Classification and quantitation of milk powder by near-infrared spectroscopy and mutual information-based variable selection and partial least squares

    Science.gov (United States)

    Chen, Hui; Tan, Chao; Lin, Zan; Wu, Tong

    2018-01-01

    Milk is among the most popular nutrient sources worldwide and is of great interest due to its beneficial medicinal properties. The feasibility of classifying milk powder samples with respect to their brands and of determining protein concentration is investigated by NIR spectroscopy along with chemometrics. Two datasets were prepared for the experiments. One contains 179 samples of four brands for classification and the other contains 30 samples for quantitative analysis. Principal component analysis (PCA) was used for exploratory analysis. Based on an effective model-independent variable selection method, i.e., minimal-redundancy maximal-relevance (MRMR), only 18 variables were selected to construct a partial least-squares discriminant analysis (PLS-DA) model. On the test set, the PLS-DA model based on the selected variable set was compared with the full-spectrum PLS-DA model, both of which achieved 100% accuracy. In the quantitative analysis, the partial least-squares regression (PLSR) model constructed from the selected subset of 260 variables significantly outperforms the full-spectrum model. It seems that the combination of NIR spectroscopy, MRMR and PLS-DA or PLSR is a powerful tool for classifying different brands of milk and determining the protein content.

  16. An adaptive technique for multiscale approximate entropy (MAEbin) threshold (r) selection: application to heart rate variability (HRV) and systolic blood pressure variability (SBPV) under postural stress.

    Science.gov (United States)

    Singh, Amritpal; Saini, Barjinder Singh; Singh, Dilbag

    2016-06-01

    Multiscale approximate entropy (MAE) is used to quantify the complexity of a time series as a function of time scale τ. Selection of the approximate entropy (ApEn) tolerance threshold 'r' is based on either (1) arbitrary selection in the recommended range of 0.1-0.25 times the standard deviation of the time series; (2) finding the maximum ApEn (ApEnmax), i.e., the point where self-matches start to prevail over other matches, and choosing the corresponding 'r' (rmax) as the threshold; or (3) computing rchon by empirically finding the relation between rmax, the SD1/SD2 ratio and N using curve fitting, where SD1 and SD2 are the short-term and long-term variability of a time series, respectively. None of these methods is a gold standard for the selection of 'r'. In our previous study [1], an adaptive procedure for the selection of 'r' was proposed for approximate entropy (ApEn). In this paper, this is extended to multiple time scales using MAEbin and multiscale cross-MAEbin (XMAEbin). We applied this to simulations, i.e., 50 realizations (n = 50) of random number series, fractional Brownian motion (fBm) and MIX(P) [1] series with a data length of N = 300, and to short-term recordings of HRV and SBPV performed under postural stress from supine to standing. MAEbin and XMAEbin analysis was performed on laboratory-recorded data of 50 healthy young subjects experiencing postural stress from supine to upright. The study showed that (i) ApEnbin of HRV is higher than that of SBPV in the supine position but lower than that of SBPV in the upright position; (ii) ApEnbin of HRV decreases from supine, i.e. 1.7324 ± 0.112 (mean ± SD), to upright, i.e. 1.4916 ± 0.108, due to vagal inhibition; (iii) ApEnbin of SBPV increases from supine, i.e. 1.5535 ± 0.098, to upright, i.e. 1.6241 ± 0.101, due to sympathetic activation; (iv) individual and cross complexities of RRi and systolic blood pressure (SBP) series depend on the time scale under consideration; (v) XMAEbin calculated using ApEnmax is correlated with cross-MAE calculated using ApEn (0.1-0.26) in steps of 0
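
    For readers who want the underlying quantity, a reference implementation of single-scale approximate entropy ApEn(m, r) is given below; in multiscale (MAE) analysis the same function is applied to coarse-grained versions of the series, and the tolerance r passed in is exactly the quantity whose selection the paper addresses.

```python
import numpy as np

def apen(x, m=2, r=0.2):
    """Approximate entropy of a 1-D series; r is given as a fraction of the series SD."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all pairs of embedded vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(dist <= tol, axis=1)   # includes self-matches, as in ApEn
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```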

  17. Total sulfur determination in residues of crude oil distillation using FT-IR/ATR and variable selection methods

    Science.gov (United States)

    Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Mello, Paola de Azevedo; Ferrão, Marco Flores; dos Santos, Maria de Fátima Pereira; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes

    2012-04-01

    Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of models. The pre-treatment based on multiplicative scatter correction (MSC) and mean-centered data was selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by the PLS model using all variables. The best model was obtained using the siPLS algorithm with the spectra divided into 20 intervals and a combination of 3 intervals (911-824, 823-736 and 737-650 cm-1). This model produced an RMSECV of 400 mg kg-1 S and an RMSEP of 420 mg kg-1 S, showing a correlation coefficient of 0.990.
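
    A hedged sketch of the interval PLS (iPLS) idea used here: split the spectrum into equal-width intervals, fit a PLS model per interval, and compare each interval's cross-validated RMSE with that of the full-spectrum model (interval boundaries, component counts and CV folds are illustrative, not those of the study).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def ipls_scan(X, y, n_intervals=20, n_components=3, cv=5):
    """Return per-interval cross-validated RMSE and the full-spectrum RMSE for comparison."""
    edges = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
    results = []
    for k in range(n_intervals):
        Xk = X[:, edges[k]:edges[k + 1]]
        ncomp = min(n_components, Xk.shape[1])
        y_hat = cross_val_predict(PLSRegression(n_components=ncomp), Xk, y, cv=cv)
        results.append((k, float(np.sqrt(np.mean((y - y_hat.ravel()) ** 2)))))
    full = cross_val_predict(PLSRegression(n_components=n_components), X, y, cv=cv)
    rmse_full = float(np.sqrt(np.mean((y - full.ravel()) ** 2)))
    return sorted(results, key=lambda t: t[1]), rmse_full
```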

  18. Comparison of Three Plot Selection Methods for Estimating Change in Temporally Variable, Spatially Clustered Populations.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife

    2001-07-01

    Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.

  19. Sensor combination and chemometric variable selection for online monitoring of Streptomyces coelicolor fed-batch cultivations

    DEFF Research Database (Denmark)

    Ödman, Peter; Johansen, C.L.; Olsson, L.

    2010-01-01

    Fed-batch cultivations of Streptomyces coelicolor, producing the antibiotic actinorhodin, were monitored online by multiwavelength fluorescence spectroscopy and off-gas analysis. Partial least squares (PLS), locally weighted regression, and multilinear PLS (N-PLS) models were built for prediction of biomass and substrate (casamino acids) concentrations, respectively. The effect of combination of fluorescence and gas analyzer data as well as of different variable selection methods was investigated. Improved prediction models were obtained by combination of data from the two sensors and by variable...

  20. Cholinergic enhancement reduces functional connectivity and BOLD variability in visual extrastriate cortex during selective attention.

    Science.gov (United States)

    Ricciardi, Emiliano; Handjaras, Giacomo; Bernardi, Giulio; Pietrini, Pietro; Furey, Maura L

    2013-01-01

    Enhancing cholinergic function improves performance on various cognitive tasks and alters neural responses in task specific brain regions. We have hypothesized that the changes in neural activity observed during increased cholinergic function reflect an increase in neural efficiency that leads to improved task performance. The current study tested this hypothesis by assessing neural efficiency based on cholinergically-mediated effects on regional brain connectivity and BOLD signal variability. Nine subjects participated in a double-blind, placebo-controlled crossover fMRI study. Following an infusion of physostigmine (1 mg/h) or placebo, echo-planar imaging (EPI) was conducted as participants performed a selective attention task. During the task, two images comprised of superimposed pictures of faces and houses were presented. Subjects were instructed periodically to shift their attention from one stimulus component to the other and to perform a matching task using hand held response buttons. A control condition included phase-scrambled images of superimposed faces and houses that were presented in the same temporal and spatial manner as the attention task; participants were instructed to perform a matching task. Cholinergic enhancement improved performance during the selective attention task, with no change during the control task. Functional connectivity analyses showed that the strength of connectivity between ventral visual processing areas and task-related occipital, parietal and prefrontal regions reduced significantly during cholinergic enhancement, exclusively during the selective attention task. Physostigmine administration also reduced BOLD signal temporal variability relative to placebo throughout temporal and occipital visual processing areas, again during the selective attention task only. Together with the observed behavioral improvement, the decreases in connectivity strength throughout task-relevant regions and BOLD variability within stimulus

  1. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
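
    As an illustration of one common approach to the problem described above, the sketch below ranks predictors of a synthetic ecological response by permutation importance from a random forest and keeps those above an arbitrary threshold (scikit-learn; the data, threshold, and workflow are assumptions for illustration, not the paper's procedure):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(2)
        # Hypothetical ecological dataset: 8 candidate predictors, only the first 3 matter
        X = rng.normal(size=(300, 8))
        y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=300)

        rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

        # Permutation importance is a common basis for RF variable selection
        imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
        ranked = np.argsort(imp.importances_mean)[::-1]
        selected = [i for i in ranked if imp.importances_mean[i] > 0.01]  # arbitrary threshold
        print("ranked predictors  :", ranked)
        print("selected predictors:", selected)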

  2. The new Toyota variable valve timing and lift system

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, K.; Fuwa, N.; Yoshihara, Y. [Toyota Motor Corporation (Japan); Hori, K. [Toyota Boshoku Corporation (Japan)

    2007-07-01

    A continuously variable valve timing (duration and phase) and lift system was developed. This system was applied to the valvetrain of a new 2.0L L4 engine (3ZRFAE) for the Japanese market. The system has rocker arms, which allow continuously variable timing and lift, situated between a conventional roller-rocker arm and the camshaft, an electromotor actuator to drive it, and a phase mechanism for the intake and exhaust camshafts (Dual VVT-i). The rocking center of the rocker arm is stationary, and the axial linear motion of a helical spline changes the initial phase of the rocker arm, which varies the timing and lift. The linear motion mechanism uses an original planetary roller screw and is driven by a brushless motor with a built-in electric control unit. Since the rocking center and the linear motion helical spline center coincide, a compact cylinder head design was possible, and the cylinder head shares a common design with a conventional engine. Since the ECU controls intake valve duration and timing, a maximum fuel economy gain of 10% (depending on driving condition) is obtained by reducing light-to-medium load pumping losses. Intake efficiency was also maximized throughout the speed range, resulting in a power gain of 10%. Further, HC emissions were reduced due to increased air speed at low valve lift. (orig.)

  3. Fuzzy target selection using RFM variables

    NARCIS (Netherlands)

    Kaymak, U.

    2001-01-01

    An important data mining problem from the world of direct marketing is target selection. The main task in target selection is the determination of potential customers for a product from a client database. Target selection algorithms identify the profiles of customer groups for a particular product,

  4. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and represents a trade-off between the bias of a model and its complexity. However, in practice, the runtime of models is another factor relevant to the model weights. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We start from the fact that, under time constraints, more expensive models can be sampled far less often than faster models (in direct proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this misbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
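
    A minimal sketch of the sampling-error argument: brute-force Monte Carlo integration of Bayesian model evidence over prior draws, with a bootstrap estimate of its sampling error, evaluated for a large and a small sample budget (a toy Gaussian model with an assumed prior and assumed data; not the authors' test case):

        import numpy as np

        rng = np.random.default_rng(3)
        data_mean, n_obs, sigma = 0.3, 20, 1.0
        data = rng.normal(data_mean, sigma, size=n_obs)

        def log_like(theta):
            return -0.5 * np.sum((data - theta) ** 2) / sigma**2 - n_obs * np.log(sigma * np.sqrt(2 * np.pi))

        def bme_with_bootstrap(n_samples, n_boot=500):
            """Brute-force Monte Carlo BME: average the likelihood over prior draws."""
            theta = rng.normal(0.0, 1.0, size=n_samples)        # prior draws
            like = np.exp([log_like(t) for t in theta])
            bme = like.mean()
            boot = [rng.choice(like, size=n_samples, replace=True).mean() for _ in range(n_boot)]
            return bme, np.std(boot)

        # A fast model can afford many prior samples, a slow model only a few;
        # the bootstrap spread shows how much less significant the slow model's BME is.
        for n in (10_000, 50):
            bme, err = bme_with_bootstrap(n)
            print(f"n={n:>6}: BME={bme:.3e} +/- {err:.1e}")

    The spread for the 50-sample budget is substantially larger, which is the imbalance the abstract proposes to fold into the model weights.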

  5. The impact of selected organizational variables and managerial leadership on radiation therapists' organizational commitment

    International Nuclear Information System (INIS)

    Akroyd, Duane; Legg, Jeff; Jackowski, Melissa B.; Adams, Robert D.

    2009-01-01

    The purpose of this study was to examine the impact of selected organizational factors and the leadership behavior of supervisors on radiation therapists' commitment to their organizations. The population for this study consisted of all full-time clinical radiation therapists registered by the American Registry of Radiologic Technologists (ARRT) in the United States. A random sample of 800 radiation therapists was obtained from the ARRT for this study. Questionnaires were mailed to all participants and measured organizational variables, the managerial leadership variable, and three components of organizational commitment (affective, continuance and normative). It was determined that organizational support and the leadership behavior of supervisors each had a significant and positive effect on the normative and affective commitment of radiation therapists, and each of the models predicted over 40% of the variance in radiation therapists' organizational commitment. This study examined radiation therapists' commitment to their organizations and found that affective (emotional attachment to the organization) and normative (feelings of obligation to the organization) commitments were more important than continuance commitment (awareness of the costs of leaving the organization). This study can help radiation oncology administrators and physicians to understand the values their radiation therapy employees hold that are predictive of their commitment to the organization. A crucial result of the study is the importance of the perceived support of the organization and the leadership skills of managers/supervisors on radiation therapists' commitment to the organization.

  6. The swan song in context: long-time-scale X-ray variability of NGC 4051

    Science.gov (United States)

    Uttley, P.; McHardy, I. M.; Papadakis, I. E.; Guainazzi, M.; Fruscione, A.

    1999-07-01

    On 1998 May 9-11, the highly variable, low-luminosity Seyfert 1 galaxy NGC 4051 was observed in an unusual low-flux state by BeppoSAX, RXTE and EUVE. We present fits of the 4-15 keV RXTE spectrum and BeppoSAX MECS spectrum obtained during this observation, which are consistent with the interpretation that the source had switched off, leaving only the spectrum of pure reflection from distant cold matter. We place this result in context by showing the X-ray light curve of NGC 4051 obtained by our RXTE monitoring campaign over the past two and a half years, which shows that the low state lasted for ~150 d before the May observations (implying that the reflecting material is >10^17 cm from the continuum source) and forms part of a light curve showing distinct variations in long-term average flux over time-scales of months or more. We show that the long-time-scale component to X-ray variability is intrinsic to the primary continuum and is probably distinct from the variability at shorter time-scales. The long-time-scale component to variability may be associated with variations in the accretion flow of matter on to the central black hole. As the source approaches the low state, the variability process becomes non-linear. NGC 4051 may represent a microcosm of all X-ray variability in radio-quiet active galactic nuclei (AGNs), displaying in a few years a variety of flux states and variability properties which more luminous AGNs may pass through on time-scales of decades to thousands of years.

  7. Diagnostic Value of Selected Echocardiographic Variables to Identify Pulmonary Hypertension in Dogs with Myxomatous Mitral Valve Disease.

    Science.gov (United States)

    Tidholm, A; Höglund, K; Häggström, J; Ljungvall, I

    2015-01-01

    Pulmonary hypertension (PH) is commonly associated with myxomatous mitral valve disease (MMVD). Because dogs with PH may present without measurable tricuspid regurgitation (TR), it would be useful to investigate echocardiographic variables that can identify PH. To investigate associations between estimated systolic TR pressure gradient (TRPG) and dog characteristics and selected echocardiographic variables. 156 privately owned dogs. Prospective observational study comparing the estimations of TRPG with dog characteristics and selected echocardiographic variables in dogs with MMVD and measurable TR. Tricuspid regurgitation pressure gradient was significantly associated with LA/Ao, modeled as a linear variable, and with AT/DT (P = .0039) and LVIDDn, modeled as second-order polynomial variables. The value for the final model was 0.45, and receiver operating characteristic curve analysis suggested the model's performance to predict PH, defined at TRPG thresholds of 36, 45, and 55 mmHg, to be fair (area under the curve [AUC] = 0.80), good (AUC = 0.86), and excellent (AUC = 0.92), respectively. In dogs with MMVD, the presence of PH might be suspected with the combination of decreased PA AT/DT, increased RVIDDn and LA/Ao, and a small or large LVIDDn. Copyright © 2015 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  8. Extreme precipitation variability, forage quality and large herbivore diet selection in arid environments

    Science.gov (United States)

    Cain, James W.; Gedir, Jay V.; Marshal, Jason P.; Krausman, Paul R.; Allen, Jamison D.; Duff, Glenn C.; Jansen, Brian; Morgart, John R.

    2017-01-01

    Nutritional ecology forms the interface between environmental variability and large herbivore behaviour, life history characteristics, and population dynamics. Forage conditions in arid and semi-arid regions are driven by unpredictable spatial and temporal patterns in rainfall. Diet selection by herbivores should be directed towards overcoming the most pressing nutritional limitation (i.e. energy, protein [nitrogen, N], moisture) within the constraints imposed by temporal and spatial variability in forage conditions. We investigated the influence of precipitation-induced shifts in forage nutritional quality and subsequent large herbivore responses across widely varying precipitation conditions in an arid environment. Specifically, we assessed seasonal changes in diet breadth and forage selection of adult female desert bighorn sheep Ovis canadensis mexicana in relation to potential nutritional limitations in forage N, moisture and energy content (as proxied by dry matter digestibility, DMD). Succulents were consistently high in moisture but low in N and grasses were low in N and moisture until the wet period. Nitrogen and moisture content of shrubs and forbs varied among seasons and climatic periods, whereas trees had consistently high N and moderate moisture levels. Shrubs, trees and succulents composed most of the seasonal sheep diets but had little variation in DMD. Across all seasons during drought and during summer with average precipitation, forages selected by sheep were higher in N and moisture than that of available forage. Differences in DMD between sheep diets and available forage were minor. Diet breadth was lowest during drought and increased with precipitation, reflecting a reliance on few key forage species during drought. Overall, forage selection was more strongly associated with N and moisture content than energy content. Our study demonstrates that unlike north-temperate ungulates which are generally reported to be energy-limited, N and moisture

  9. Preferences for travel time variability – A study of Danish car drivers

    DEFF Research Database (Denmark)

    Hjorth, Katrine; Rich, Jeppe

    Travel time variability (TTV) is a measure of the extent of unpredictability in travel times. It is generally accepted that TTV has a negative effect on travellers' wellbeing and overall utility of travelling, and valuation of variability is an important issue in transport demand modelling … preferences, to exclude non-traders, and to avoid complicated issues related to scheduled public transport services. The survey uses customised Internet questionnaires, containing a series of questions related to the traveller's most recent morning trip to work, e.g.:
    • Travel time experienced on this day …
    • Number of stops along the way, their duration, and whether these stops involved restrictions on time of day,
    • Restrictions regarding departure time from home or arrival time at work,
    • How often such a trip was made within the last month and the range of experienced travel times,
    • What the traveller …

  10. Impact of strong selection for the PrP major gene on genetic variability of four French sheep breeds (Open Access publication

    Directory of Open Access Journals (Sweden)

    Pantano Thais

    2008-11-01

    Effective selection on the PrP gene has been implemented since October 2001 in all French sheep breeds. After four years, the ARR "resistant" allele frequency increased by about 35% in young males. The aim of this study was to evaluate the impact of this strong selection on genetic variability. It is focused on four French sheep breeds and based on the comparison of two groups of 94 animals within each breed: the first group of animals was born before the selection began, and the second, 3–4 years later. Genetic variability was assessed using genealogical and molecular data (29 microsatellite markers). The expected loss of genetic variability on the PrP gene was confirmed. Moreover, among the five markers located in the PrP region, only the three closest ones were affected. The evolution of the number of alleles, heterozygote deficiency within population, expected heterozygosity and the Reynolds distances agreed with the criteria from pedigree and pointed out that neutral genetic variability was not much affected. This trend depended on breed, i.e. on their initial states (population size, PrP frequencies) and on the selection strategies for improving scrapie resistance while carrying out selection for production traits.

  11. Leisure time physical activity, screen time, social background, and environmental variables in adolescents.

    Science.gov (United States)

    Mota, Jorge; Gomes, Helena; Almeida, Mariana; Ribeiro, José Carlos; Santos, Maria Paula

    2007-08-01

    This study analyzes the relationships between leisure time physical activity (LTPA), sedentary behaviors, socioeconomic status, and perceived environmental variables. The sample comprised 815 girls and 746 boys. In girls, non-LTPA participants reported significantly more screen time. Girls with safety concerns were more likely to be in the non-LTPA group (OR = 0.60) and those who agreed with the importance of aesthetics were more likely to be in the active-LTPA group (OR = 1.59). In girls, an increase of 1 hr of TV watching was a significant predictor of non-LTPA (OR = 0.38). LTPA for girls, but not for boys, seems to be influenced by certain modifiable factors of the built environment, as well as by time watching TV.

  12. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.
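
    A small sketch of parsimony-driven model selection in this spirit: candidate ARMA orders are fitted to a simulated series and compared by AIC, which penalizes extra parameters (statsmodels; the simulated ARMA(1,1) series is only a stand-in for the Internet-server data discussed in the column):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.tsa.arima_process import arma_generate_sample

        np.random.seed(4)
        # Simulated series from a parsimonious mixed ARMA(1,1) process
        y = arma_generate_sample(ar=[1, -0.7], ma=[1, 0.4], nsample=300)

        # Candidate ARMA(p, q) orders up to (2, 2)
        candidates = [(p, 0, q) for p in range(3) for q in range(3) if (p, q) != (0, 0)]
        fits = {order: ARIMA(y, order=order).fit() for order in candidates}
        best = min(fits, key=lambda o: fits[o].aic)
        print("AIC by order:", {o: round(f.aic, 1) for o, f in fits.items()})
        print("selected order:", best)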

  13. Time step size selection for radiation diffusion calculations

    International Nuclear Information System (INIS)

    Rider, W.J.; Knoll, D.A.

    1999-01-01

    The purpose of this note is to describe a time step control technique as applied to radiation diffusion. Standard practice only provides a heuristic criterion related to the relative change in the dependent variables. The authors propose an alternative based on relatively simple physical principles. This time step control applies to methods of solution that are unconditionally stable and that converge the nonlinearities in the governing equations within a time step. Commonly, nonlinearities in the governing equations are evaluated using existing (old-time) data; the authors refer to this as the semi-implicit (SI) method. When a method converges the nonlinearities within a time step, the entire governing equation, including all nonlinearities, is self-consistently evaluated using advanced-time data (with appropriate time centering for accuracy).
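
    For contrast with the physics-based control proposed in the note, the conventional heuristic it improves upon can be sketched as a controller that simply bounds the relative change of the dependent variable per step (the thresholds and growth/shrink factors below are arbitrary illustrative values, not taken from the note):

        def next_time_step(dt, u_old, u_new, target=0.1, grow=1.5, shrink=0.5, floor=1e-12):
            """Heuristic time-step control: keep the relative change in the
            dependent variable (e.g., radiation energy density) near `target`."""
            rel_change = max(abs(n - o) / max(abs(o), floor) for o, n in zip(u_old, u_new))
            if rel_change > target:
                return dt * shrink        # too much change: retreat and retry with a smaller dt
            if rel_change < 0.5 * target:
                return dt * grow          # very smooth step: allow dt to grow
            return dt                     # change is within the acceptable band

        # Example: relative change is at most ~2%, so the step is allowed to grow to 1.5e-3
        print(next_time_step(1e-3, [1.0, 2.0, 3.0], [1.02, 2.03, 3.01]))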

  14. Exploratory Spectroscopy of Magnetic Cataclysmic Variables Candidates and Other Variable Objects

    Science.gov (United States)

    Oliveira, A. S.; Rodrigues, C. V.; Cieslinski, D.; Jablonski, F. J.; Silva, K. M. G.; Almeida, L. A.; Rodríguez-Ardila, A.; Palhares, M. S.

    2017-04-01

    The increasing number of synoptic surveys made by small robotic telescopes, such as the photometric Catalina Real-Time Transient Survey (CRTS), provides a unique opportunity to discover variable sources and improves the statistical samples of such classes of objects. Our goal is the discovery of magnetic Cataclysmic Variables (mCVs). These are rare objects that probe interesting accretion scenarios controlled by the white-dwarf magnetic field. In particular, improved statistics of mCVs would help to address open questions on their formation and evolution. We performed an optical spectroscopy survey to search for signatures of magnetic accretion in 45 variable objects selected mostly from the CRTS. In this sample, we found 32 CVs, 22 being mCV candidates, 13 of which were previously unreported as such. If the proposed classifications are confirmed, it would represent an increase of 4% in the number of known polars and 12% in the number of known IPs. A fraction of our initial sample was classified as extragalactic sources or other types of variable stars by the inspection of the identification spectra. Despite the inherent complexity in identifying a source as an mCV, variability-based selection, followed by spectroscopic snapshot observations, has proved to be an efficient strategy for their discoveries, being a relatively inexpensive approach in terms of telescope time. Based on observations obtained at the Observatório do Pico dos Dias/LNA, and at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).

  15. Real-time flavour tagging selection in ATLAS

    CERN Document Server

    Zivkovic, Lidija; The ATLAS collaboration

    2015-01-01

    In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. ATLAS b-jet triggers are designed to identify heavy-flavour content in real-time and provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so, two different, but related, challenges are faced. The physics goal is to optimise as far as possible the rejection of light jets, while retaining a high efficiency on selecting b-jets and maintaining affordable trigger rates without raising jet energy thresholds. This maps into a challenging computing task, as tracks and their corresponding vertices must be reconstructed and analysed for each jet above the desired threshold, regardless of the increasingly harsh pile-up conditions. We present an overview of the ATLAS strategy for online b-jet selection for the LHC Run 2, including the use of novel methods and sophisticated algorithms designed to face the above mentioned challenges. A firs...

  17. Inverse Ising problem in continuous time: A latent variable approach

    Science.gov (United States)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
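
    The data-generating side of the problem, continuous-time Glauber dynamics of an Ising network, can be simulated with a Gillespie-type algorithm, as sketched below (random symmetric couplings and flip rates 0.5·(1 − s_i·tanh(h_i)) are standard assumptions for illustration; the inference machinery with Pólya-Gamma augmentation described in the abstract is not reproduced here):

        import numpy as np

        rng = np.random.default_rng(5)
        N, T = 10, 200.0                              # number of spins and total simulation time
        J = rng.normal(scale=1 / np.sqrt(N), size=(N, N))
        J = (J + J.T) / 2                             # symmetric couplings
        np.fill_diagonal(J, 0.0)

        s = rng.choice([-1, 1], size=N)
        t, trajectory = 0.0, []                       # record (time, index of flipped spin)

        # Gillespie simulation of continuous-time Glauber dynamics:
        # spin i flips with rate 0.5 * (1 - s_i * tanh(h_i)), where h_i = sum_j J_ij s_j
        while t < T:
            h = J @ s
            rates = 0.5 * (1.0 - s * np.tanh(h))
            total = rates.sum()
            t += rng.exponential(1.0 / total)         # waiting time to the next flip
            i = rng.choice(N, p=rates / total)        # which spin flips
            s[i] *= -1
            trajectory.append((t, i))

        print("number of recorded flips:", len(trajectory))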

  18. Application of several variable-valve-timing concepts to an LHR engine

    Science.gov (United States)

    Morel, T.; Keribar, R.; Sawlivala, M.; Hakim, N.

    1987-01-01

    The paper discusses advantages provided by electronically controlled hydraulically activated valves (ECVs) when applied to low heat rejection (LHR) engines. The ECV concept provides additional engine control flexibility by allowing for a variable valve timing as a function of speed and load, or for a given transient condition. The results of a study carried out to assess the benefits that this flexibility can offer to an LHR engine indicated that, when judged on the benefits to BSFC, volumetric efficiency, and peak firing pressure, ECVs would provide only modest benefits in comparison to conventional valve profiles. It is noted, however, that once installed on the engine, the ECVs would permit a whole range of certain more sophisticated variable valve timing strategies not otherwise possible, such as high compression cranking, engine braking, cylinder cutouts, and volumetric efficiency timing with engine speed.

  19. The added value of time-variable microgravimetry to the understanding of how volcanoes work

    Science.gov (United States)

    Carbone, Daniele; Poland, Michael; Greco, Filippo; Diament, Michel

    2017-01-01

    During the past few decades, time-variable volcano gravimetry has shown great potential for imaging subsurface processes at active volcanoes (including some processes that might otherwise remain “hidden”), especially when combined with other methods (e.g., ground deformation, seismicity, and gas emissions). By supplying information on changes in the distribution of bulk mass over time, gravimetry can provide information regarding processes such as magma accumulation in void space, gas segregation at shallow depths, and mechanisms driving volcanic uplift and subsidence. Despite its potential, time-variable volcano gravimetry is an underexploited method, not widely adopted by volcano researchers or observatories. The cost of instrumentation and the difficulty in using it under harsh environmental conditions is a significant impediment to the exploitation of gravimetry at many volcanoes. In addition, retrieving useful information from gravity changes in noisy volcanic environments is a major challenge. While these difficulties are not trivial, neither are they insurmountable; indeed, creative efforts in a variety of volcanic settings highlight the value of time-variable gravimetry for understanding hazards as well as revealing fundamental insights into how volcanoes work. Building on previous work, we provide a comprehensive review of time-variable volcano gravimetry, including discussions of instrumentation, modeling and analysis techniques, and case studies that emphasize what can be learned from campaign, continuous, and hybrid gravity observations. We are hopeful that this exploration of time-variable volcano gravimetry will excite more scientists about the potential of the method, spurring further application, development, and innovation.

  1. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea; Guermond, Jean-Luc; Lee, Sanghyun

    2014-01-01

    © Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  2. Intraindividual variability in reaction time predicts cognitive outcomes 5 years later.

    Science.gov (United States)

    Bielak, Allison A M; Hultsch, David F; Strauss, Esther; Macdonald, Stuart W S; Hunter, Michael A

    2010-11-01

    Building on results suggesting that intraindividual variability in reaction time (inconsistency) is highly sensitive to even subtle changes in cognitive ability, this study addressed the capacity of inconsistency to predict change in cognitive status (i.e., cognitive impairment, no dementia [CIND] classification) and attrition 5 years later. Two hundred twelve community-dwelling older adults, initially aged 64-92 years, remained in the study after 5 years. Inconsistency was calculated from baseline reaction time performance. Participants were assigned to groups on the basis of their fluctuations in CIND classification over time. Logistic and Cox regressions were used. Baseline inconsistency significantly distinguished among those who remained or transitioned into CIND over the 5 years and those who were consistently intact (e.g., stable intact vs. stable CIND, Wald (1) = 7.91, p < .01, Exp(β) = 1.49). Average level of inconsistency over time was also predictive of study attrition, for example, Wald (1) = 11.31, p < .01, Exp(β) = 1.24. For both outcomes, greater inconsistency was associated with a greater likelihood of being in a maladaptive group 5 years later. Variability based on moderately cognitively challenging tasks appeared to be particularly sensitive to longitudinal changes in cognitive ability. Mean rate of responding was a comparable predictor of change in most instances, but individuals were at greater relative risk of being in a maladaptive outcome group if they were more inconsistent rather than if they were slower in responding. Implications for the potential utility of intraindividual variability in reaction time as an early marker of cognitive decline are discussed. (c) 2010 APA, all rights reserved

  3. Effects of spring temperatures on the strength of selection on timing of reproduction in a long-distance migratory bird.

    Directory of Open Access Journals (Sweden)

    Marcel E Visser

    2015-04-01

    Climate change has differentially affected the timing of seasonal events for interacting trophic levels, and this has often led to increased selection on seasonal timing. Yet, the environmental variables driving this selection have rarely been identified, limiting our ability to predict future ecological impacts of climate change. Using a dataset spanning 31 years from a natural population of pied flycatchers (Ficedula hypoleuca), we show that directional selection on timing of reproduction intensified in the first two decades (1980-2000) but weakened during the last decade (2001-2010). Against expectation, this pattern could not be explained by the temporal variation in the phenological mismatch with food abundance. We therefore explored an alternative hypothesis that selection on timing was affected by conditions individuals experience when arriving in spring at the breeding grounds: arriving early in cold conditions may reduce survival. First, we show that in female recruits, spring arrival date in the first breeding year correlates positively with hatch date; hence, early-hatched individuals experience colder conditions at arrival than late-hatched individuals. Second, we show that when temperatures at arrival in the recruitment year were high, early-hatched young had a higher recruitment probability than when temperatures were low. We interpret this as a potential cost of arriving early in colder years, and climate warming may have reduced this cost. We thus show that higher temperatures in the arrival year of recruits were associated with stronger selection for early reproduction in the years these birds were born. As arrival temperatures in the beginning of the study increased, but recently declined again, directional selection on timing of reproduction showed a nonlinear change. We demonstrate that environmental conditions with a lag of up to two years can alter selection on phenological traits in natural populations, something that has…

  4. Reaction Time Variability in Children With ADHD Symptoms and/or Dyslexia

    OpenAIRE

    Gooch, Debbie; Snowling, Margaret J.; Hulme, Charles

    2012-01-01

    Reaction time (RT) variability on a Stop Signal task was examined among children with attention deficit hyperactivity disorder (ADHD) symptoms and/or dyslexia in comparison to typically developing (TD) controls. Children’s go-trial RTs were analyzed using a novel ex-Gaussian method. Children with ADHD symptoms had increased variability in the fast but not the slow portions of their RT distributions compared to those without ADHD symptoms. The RT distributions of children with d...

  5. GPS Imaging of Time-Variable Earthquake Hazard: The Hilton Creek Fault, Long Valley California

    Science.gov (United States)

    Hammond, W. C.; Blewitt, G.

    2016-12-01

    The Hilton Creek Fault, in Long Valley, California is a down-to-the-east normal fault that bounds the eastern edge of the Sierra Nevada/Great Valley microplate, and lies half inside and half outside the magmatically active caldera. Despite the dense coverage with GPS networks, the rapid and time-variable surface deformation attributable to sporadic magmatic inflation beneath the resurgent dome makes it difficult to use traditional geodetic methods to estimate the slip rate of the fault. While geologic studies identify cumulative offset, constrain timing of past earthquakes, and constrain a Quaternary slip rate to within 1-5 mm/yr, it is not currently possible to use geologic data to evaluate how the potential for slip correlates with transient caldera inflation. To estimate time-variable seismic hazard of the fault we estimate its instantaneous slip rate from GPS data using a new set of algorithms for robust estimation of velocity and strain rate fields and fault slip rates. From the GPS time series, we use the robust MIDAS algorithm to obtain time series of velocity that are highly insensitive to the effects of seasonality, outliers and steps in the data. We then use robust imaging of the velocity field to estimate a gridded time variable velocity field. Then we estimate fault slip rate at each time using a new technique that forms ad-hoc block representations that honor fault geometries, network complexity, connectivity, but does not require labor-intensive drawing of block boundaries. The results are compared to other slip rate estimates that have implications for hazard over different time scales. Time invariant long term seismic hazard is proportional to the long term slip rate accessible from geologic data. Contemporary time-invariant hazard, however, may differ from the long term rate, and is estimated from the geodetic velocity field that has been corrected for the effects of magmatic inflation in the caldera using a published model of a dipping ellipsoidal
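
    The core idea behind the MIDAS-style velocity estimate mentioned above — slopes from epoch pairs separated by about one year, summarized by a median so that seasonality, outliers and steps have little influence — can be sketched as follows (synthetic daily positions with assumed trend, seasonal term and offset; the actual MIDAS algorithm adds further refinements such as trimming, so this is only the basic principle):

        import numpy as np

        rng = np.random.default_rng(6)
        # Hypothetical daily GPS position time series (mm) with seasonality, noise and a step
        t = np.arange(0, 10.0, 1 / 365.25)                      # time in years
        pos = 2.0 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(scale=1.0, size=t.size)
        pos[t > 5.0] += 10.0                                    # an offset (e.g., equipment change)

        # Slopes from pairs of epochs separated by ~1 year, then take the median;
        # one-year separation cancels the seasonal term, and the median resists outliers and steps.
        idx = np.searchsorted(t, t + 1.0)
        valid = idx < t.size
        slopes = (pos[idx[valid]] - pos[valid]) / (t[idx[valid]] - t[valid])
        print("median one-year slope (mm/yr):", np.median(slopes))   # compare with the true rate of 2 mm/yr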

  6. Market timing and selectivity performance of mutual funds in Ghana

    Directory of Open Access Journals (Sweden)

    Abubakar Musah

    2014-07-01

    The growing interest in mutual funds in Ghana has been tremendous over the last decade, as evidenced by the continuous increases in number and total funds under management. However, no empirical work has been done on the selectivity and timing ability of the mutual fund managers. Using monthly returns data hand-collected from the reports of the mutual fund managers for the period January 2007-December 2012, this paper examines the market timing and selectivity ability of mutual fund managers in Ghana using the classic Treynor-Mazuy (1966) model and the Henriksson-Merton (1981) model. The results suggest that, in general, mutual fund managers in Ghana are not able to effectively select stocks and also are not able to predict either the magnitude or the direction of future market returns. More specifically, all of the sampled mutual fund managers attain significant negative selectivity coefficients, and most of them attain insignificant negative timing coefficients.
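
    Both models are simple return regressions, sketched below on synthetic excess returns (statsmodels OLS; the up-market-term parameterisation of Henriksson-Merton is one common variant, and the simulated fund has no timing skill built in, so the gamma coefficients should be near zero):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        # Hypothetical monthly excess returns for the market and one fund
        mkt = rng.normal(0.01, 0.04, size=72)
        fund = 0.001 + 0.9 * mkt + rng.normal(0, 0.01, size=72)

        # Treynor-Mazuy: r_f = alpha + beta*r_m + gamma*r_m^2; gamma > 0 suggests timing ability
        tm = sm.OLS(fund, sm.add_constant(np.column_stack([mkt, mkt**2]))).fit()

        # Henriksson-Merton (up-market parameterisation): r_f = alpha + beta*r_m + gamma*max(0, r_m)
        hm = sm.OLS(fund, sm.add_constant(np.column_stack([mkt, np.where(mkt > 0, mkt, 0.0)]))).fit()

        print("Treynor-Mazuy     alpha, beta, gamma:", np.round(tm.params, 4))
        print("Henriksson-Merton alpha, beta, gamma:", np.round(hm.params, 4))

    In both regressions, alpha measures selectivity and gamma measures timing, which is how the negative selectivity and insignificant timing coefficients reported above would be read off.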

  7. Improved Variable Selection Algorithm Using a LASSO-Type Penalty, with an Application to Assessing Hepatitis B Infection Relevant Factors in Community Residents

    Science.gov (United States)

    Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao

    2015-01-01

    Objectives: In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods: Two improved algorithms denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and other methods including conventional LASSO, Bolasso, stepwise and stability selection models were evaluated using intensive simulation. In addition, methods were compared by using an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results: The proposed procedures produced comparable or less biased selection results when compared to conventional variable selection models. In total, the two newly proposed procedures were stable with respect to various scenarios of simulation, demonstrating a higher power and a lower false positive rate during variable selection than the compared methods. In empirical analysis, the proposed procedures yielding a sparse set of hepatitis B infection-relevant factors gave the best predictive performance and showed that the procedures were able to select a more stringent set of factors. The individual history of hepatitis B vaccination, family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions: The newly proposed procedures improve the identification of…
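
    The bootstrap ranking idea can be sketched as refitting a cross-validated LASSO on resampled data and keeping variables by their selection frequency (scikit-learn, synthetic exposures, and an assumed 0.8 frequency threshold; the published two-stage hybrid procedure involves additional steps not shown here):

        import numpy as np
        from sklearn.linear_model import LassoCV
        from sklearn.utils import resample

        rng = np.random.default_rng(8)
        # Hypothetical epidemiological design matrix: 15 candidate exposures, 3 truly relevant
        X = rng.normal(size=(400, 15))
        y = 1.5 * X[:, 0] - 1.0 * X[:, 3] + 0.8 * X[:, 7] + rng.normal(size=400)

        # Bootstrap ranking: refit a cross-validated LASSO on resampled data and count
        # how often each candidate variable receives a nonzero coefficient.
        n_boot = 100
        counts = np.zeros(X.shape[1])
        for b in range(n_boot):
            Xb, yb = resample(X, y, random_state=b)
            model = LassoCV(cv=5, random_state=0).fit(Xb, yb)
            counts += (model.coef_ != 0).astype(int)

        stable = np.where(counts / n_boot >= 0.8)[0]     # selection-frequency threshold (assumed)
        print("selection frequencies:", counts / n_boot)
        print("variables kept:", stable)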

  8. Real-time fiber selection using the Wii remote

    Science.gov (United States)

    Klein, Jan; Scholl, Mike; Köhn, Alexander; Hahn, Horst K.

    2010-02-01

    In the last few years, fiber tracking tools have become popular in clinical contexts, e.g., for pre- and intraoperative neurosurgical planning. The efficient, intuitive, and reproducible selection of fiber bundles still constitutes one of the main issues. In this paper, we present a framework for a real-time selection of axonal fiber bundles using a Wii remote control, a wireless controller for Nintendo's gaming console. It enables the user to select fiber bundles without any other input devices. To achieve a smooth interaction, we propose a novel space-partitioning data structure for efficient 3D range queries in a data set consisting of precomputed fibers. The data structure, which is adapted to the special geometry of fiber tracts, allows for queries that are many times faster compared with previous state-of-the-art approaches. In order to reliably extract fibers for further processing, e.g., for quantification purposes or comparisons with preoperatively tracked fibers, we developed an expectation-maximization clustering algorithm that can refine the range queries. Our initial experiments have shown that white matter fiber bundles can be reliably selected within a few seconds by the Wii, which has been placed in a sterile plastic bag to simulate usage under surgical conditions.

  9. Time-lapse culture with morphokinetic embryo selection improves pregnancy and live birth chances and reduces early pregnancy loss: a meta-analysis.

    Science.gov (United States)

    Pribenszky, Csaba; Nilselid, Anna-Maria; Montag, Markus

    2017-11-01

    Embryo evaluation and selection is fundamental in clinical IVF. Time-lapse follow-up of embryo development comprises undisturbed culture and the application of the visual information to support embryo evaluation. A meta-analysis of randomized controlled trials was carried out to study whether time-lapse monitoring with the prospective use of a morphokinetic algorithm for selection of embryos improves overall clinical outcome (pregnancy, early pregnancy loss, stillbirth and live birth rate) compared with embryo selection based on single time-point morphology in IVF cycles. The meta-analysis of five randomized controlled trials (n = 1637) showed that the application of time-lapse monitoring was associated with a significantly higher ongoing clinical pregnancy rate (51.0% versus 39.9%; pooled odds ratio 1.542), a significantly lower rate of early pregnancy loss (15.3% versus 21.3%; OR 0.662; P = 0.019) and a significantly increased live birth rate (44.2% versus 31.3%; OR 1.668; P = 0.009). The difference in stillbirth was not significant between groups (4.7% versus 2.4%). Quality of the evidence was moderate to low owing to inconsistencies across the studies. Selective application and variability were also limitations. Although time-lapse is shown to significantly improve overall clinical outcome, further high-quality evidence is needed before universal conclusions can be drawn. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
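
    For readers unfamiliar with how such pooled odds ratios are formed, a fixed-effect (inverse-variance) pooling of log odds ratios can be sketched as below; the 2x2 counts are hypothetical, and the published meta-analysis may have used a different pooling model:

        import numpy as np

        # Hypothetical per-trial counts: (events_timelapse, total_timelapse, events_control, total_control)
        trials = [(60, 120, 45, 118), (55, 100, 40, 101), (70, 150, 52, 149)]

        log_ors, weights = [], []
        for a, n1, c, n2 in trials:
            b, d = n1 - a, n2 - c
            log_or = np.log((a * d) / (b * c))
            var = 1 / a + 1 / b + 1 / c + 1 / d          # variance of the log odds ratio
            log_ors.append(log_or)
            weights.append(1 / var)                      # inverse-variance (fixed-effect) weights

        pooled = np.average(log_ors, weights=weights)
        se = np.sqrt(1 / np.sum(weights))
        print("pooled OR:", np.exp(pooled))
        print("95% CI   :", np.exp(pooled - 1.96 * se), "-", np.exp(pooled + 1.96 * se))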

  10. The effect of court location and available time on the tactical shot selection of elite squash players.

    Science.gov (United States)

    Vučković, Goran; James, Nic; Hughes, Mike; Murray, Stafford; Sporiš, Goran; Perš, Janez

    2013-01-01

    No previous research in squash has considered the time between shots or the proximity of the ball to a wall, which are two important variables that influence shot outcomes. The aim of this paper was to analyse shot types to determine the extent to which they are played in different court areas, and a more detailed analysis was undertaken to determine whether the time available had an influence on the shot selected. Ten elite matches, contested by fifteen of the world's top right-handed squash players (age 27 ± 3.2 years, height 1.81 ± 0.06 m, weight 76.3 ± 3.7 kg), at the men's World Team Championships were processed using the SAGIT/Squash tracking system, with shot information manually added to the system. Results suggested that shot responses were dependent upon court location and the time between shots. When these factors were considered, repeatable performance existed to the extent that one of two shots was typically played when there was limited time to play the shot, indicating that the time available and tactics affect shot selections. Key points:
    • Previous research has suggested that a playing strategy, i.e. elements decided in advance of the match, may be evident for elite players by examining court location and preceding shot type; however, these parameters alone are unlikely to be sufficient predictors.
    • At present there is no known analysis in squash, or indeed in any of the racket sports, that has quantified the time available to respond to different shot types. An understanding of the time interval between shots and the movement characteristics of the player responding to different shots according to the court positions might facilitate a better understanding of the dynamics that determine shot selection.
    • Some elements of a general playing strategy were evident, e.g. predominately hitting to the back left of the court, but tactical differences in shot selection were also evident on the basis of court location and time available to play a shot.

  11. Variability of African Farming Systems from Phenological Analysis of NDVI Time Series

    Science.gov (United States)

    Vrieling, Anton; deBeurs, K. M.; Brown, Molly E.

    2011-01-01

    Food security exists when people have access to sufficient, safe and nutritious food at all times to meet their dietary needs. The natural resource base is one of the many factors affecting food security. Its variability and decline create problems for local food production. In this study we characterize vegetation phenology for sub-Saharan Africa and assess variability and trends of phenological indicators based on NDVI time series from 1982 to 2006. We focus on cumulated NDVI over the season (cumNDVI), which is a proxy for net primary productivity. Results are aggregated at the level of major farming systems, while also determining spatial variability within farming systems. High temporal variability of cumNDVI occurs in semiarid and subhumid regions. The results show a large area of positive cumNDVI trends between Senegal and South Sudan. These correspond to positive CRU rainfall trends and relate to recovery after the droughts of the 1980s. We find significant negative cumNDVI trends near the south coast of West Africa (Guinea coast) and in Tanzania. For each farming system, causes of change and variability are discussed based on available literature (Appendix A). Although food security comprises more than the local natural resource base, our results can provide an input for food security analysis by identifying zones of high variability or downward trends. Farming systems are found to be a useful level of analysis. Diversity and trends found within farming system boundaries underline that farming systems are dynamic.

  12. Analysis of Modal Travel Time Variability Due to Mesoscale Ocean Structure

    National Research Council Canada - National Science Library

    Smith, Amy

    1997-01-01

    .... First, for an open ocean environment away from strong boundary currents, the effects of randomly phased linear baroclinic Rossby waves on acoustic travel time are shown to produce a variable overall...

  13. Resting heart rate variability is associated with ex-Gaussian metrics of intra-individual reaction time variability.

    Science.gov (United States)

    Spangler, Derek P; Williams, DeWayne P; Speller, Lassiter F; Brooks, Justin R; Thayer, Julian F

    2018-03-01

    The relationships between vagally mediated heart rate variability (vmHRV) and the cognitive mechanisms underlying performance can be elucidated with ex-Gaussian modeling-an approach that quantifies two different forms of intra-individual variability (IIV) in reaction time (RT). To this end, the current study examined relations of resting vmHRV to whole-distribution and ex-Gaussian IIV. Subjects (N = 83) completed a 5-minute baseline while vmHRV (root mean square of successive differences; RMSSD) was measured. Ex-Gaussian (sigma, tau) and whole-distribution (standard deviation) estimates of IIV were derived from reaction times on a Stroop task. Resting vmHRV was found to be inversely related to tau (exponential IIV) but not to sigma (Gaussian IIV) or the whole-distribution standard deviation of RTs. Findings suggest that individuals with high vmHRV can better prevent attentional lapses but not difficulties with motor control. These findings inform the differential relationships of cardiac vagal control to the cognitive processes underlying human performance. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
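
    The ex-Gaussian decomposition referred to above (Gaussian mu and sigma plus an exponential tau capturing occasional very slow responses) can be fitted with scipy's exponentially modified normal distribution, as in this sketch on simulated reaction times (the parameter values and data are arbitrary assumptions, not the study's fitting pipeline):

        import numpy as np
        from scipy.stats import exponnorm

        rng = np.random.default_rng(9)
        # Simulated reaction times (s): Gaussian component (mu, sigma) plus exponential tail (tau)
        mu, sigma, tau = 0.45, 0.05, 0.15
        rts = rng.normal(mu, sigma, size=500) + rng.exponential(tau, size=500)

        # scipy's exponnorm is parameterised by K = tau / sigma, loc = mu, scale = sigma
        K, loc, scale = exponnorm.fit(rts)
        print("mu    ~", round(loc, 3))
        print("sigma ~", round(scale, 3))      # 'Gaussian' intra-individual variability
        print("tau   ~", round(K * scale, 3))  # 'exponential' variability (attentional lapses)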

  14. Cortical Response Variability as a Developmental Index of Selective Auditory Attention

    Science.gov (United States)

    Strait, Dana L.; Slater, Jessica; Abecassis, Victor; Kraus, Nina

    2014-01-01

    Attention induces synchronicity in neuronal firing for the encoding of a given stimulus at the exclusion of others. Recently, we reported decreased variability in scalp-recorded cortical evoked potentials to attended compared with ignored speech in adults. Here we aimed to determine the developmental time course for this neural index of auditory…

  15. Genotype-by-environment interactions leads to variable selection on life-history strategy in Common Evening Primrose (Oenothera biennis).

    Science.gov (United States)

    Johnson, M T J

    2007-01-01

    Monocarpic plant species, where reproduction is fatal, frequently exhibit variation in the length of their prereproductive period prior to flowering. If this life-history variation in flowering strategy has a genetic basis, genotype-by-environment interactions (G x E) may maintain phenotypic diversity in flowering strategy. The native monocarpic plant Common Evening Primrose (Oenothera biennis L., Onagraceae) exhibits phenotypic variation for annual vs. biennial flowering strategies. I tested whether there was a genetic basis to variation in flowering strategy in O. biennis, and whether environmental variation causes G x E that imposes variable selection on flowering strategy. In a field experiment, I randomized more than 900 plants from 14 clonal families (genotypes) into five distinct habitats that represented a natural productivity gradient. G x E strongly affected the lifetime fruit production of O. biennis, with the rank-order in relative fitness of genotypes changing substantially between habitats. I detected genetic variation in annual vs. biennial strategies in most habitats, as well as a G x E effect on flowering strategy. This variation in flowering strategy was correlated with genetic variation in relative fitness, and phenotypic and genotypic selection analyses revealed that environmental variation resulted in variable directional selection on annual vs. biennial strategies. Specifically, a biennial strategy was favoured in moderately productive environments, whereas an annual strategy was favoured in low-productivity environments. These results highlight the importance of variable selection for the maintenance of genetic variation in the life-history strategy of a monocarpic plant.

  16. Quadratic time dependent Hamiltonians and separation of variables

    Science.gov (United States)

    Anzaldo-Meneses, A.

    2017-06-01

    Time dependent quantum problems defined by quadratic Hamiltonians are solved using canonical transformations. The Green's function is obtained and a comparison with the classical Hamilton-Jacobi method leads to important geometrical insights like exterior differential systems, Monge cones and time dependent Gaussian metrics. The Wei-Norman approach is applied using unitary transformations defined in terms of generators of the associated Lie groups, here the semi-direct product of the Heisenberg group and the symplectic group. A new explicit relation for the unitary transformations is given in terms of a finite product of elementary transformations. The sequential application of adequate sets of unitary transformations leads naturally to a new separation of variables method for time dependent Hamiltonians, which is shown to be related to the Inönü-Wigner contraction of Lie groups. The new method allows also a better understanding of interacting particles or coupled modes and opens an alternative way to analyze topological phases in driven systems.
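
    As a concrete illustration of the setting (not an equation taken from the paper), a generic quadratic time-dependent Hamiltonian has the form

        \hat H(t) = \tfrac{1}{2}a(t)\,\hat p^{2} + \tfrac{1}{2}b(t)\left(\hat x\hat p + \hat p\hat x\right) + \tfrac{1}{2}c(t)\,\hat x^{2} + d(t)\,\hat p + e(t)\,\hat x + f(t),

    and the Wei-Norman approach writes the evolution operator as a finite product of exponentials of the Lie-algebra generators \hat X_k,

        U(t) = \prod_{k} \exp\!\big(g_k(t)\,\hat X_k\big),

    with the scalar functions g_k(t) determined by ordinary differential equations obtained from the Schrödinger equation.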

  17. The Salience of Selected Variables on Choice for Movie Attendance among High School Students.

    Science.gov (United States)

    Austin, Bruce A.

    A questionnaire was designed for a study assessing both the importance of 28 variables in movie attendance and the importance of movie-going as a leisure-time activity. Respondents were 130 ninth and twelfth grade students. The 28 variables were broadly organized into eight categories: movie production personnel, production elements, advertising,…

  18. Effects of selected design variables on three ramp, external compression inlet performance. [boundary layer control bypasses, and mass flow rate

    Science.gov (United States)

    Kamman, J. H.; Hall, C. L.

    1975-01-01

    Two inlet performance tests and one inlet/airframe drag test were conducted in 1969 at the NASA-Ames Research Center. The basic inlet system was two-dimensional, three ramp (overhead), external compression, with variable capture area. The data from these tests were analyzed to show the effects of selected design variables on the performance of this type of inlet system. The inlet design variables investigated include inlet bleed, bypass, operating mass flow ratio, inlet geometry, and variable capture area.

  19. Assessing the Time Variability of Jupiter's Tropospheric Properties from 1996 to 2011

    Science.gov (United States)

    Orton, G. S.; Fletcher, L. N.; Yanamandra-Fisher, P. A.; Simon-Miller, A. A.; Greco, J.; Wakefield, L.

    2012-01-01

    We acquired and analyzed mid-infrared images of Jupiter's disk at selected wavelengths from NASA's Infrared Telescope Facility (IRTF) from 1996 to 2011, including a period of large-scale changes of cloud color and albedo. We derived the 100-300 mbar temperature structure, together with tracers of vertical motion: the thickness of a 600-mbar cloud layer, the 300-mbar abundance of the condensable gas NH3, and the 400-mbar para- vs. ortho-H2 ratio. The biggest visual change was detected in the normally dark South Equatorial Belt (SEB) that 'faded' to a light color in 2010, during which both cloud thickness and NH3 abundance rose; both returned to their pre-fade levels in 2011, as the SEB regained its normal dark color. The cloud thickness in Jupiter's North Temperate Belt (NTB) increased in 2002, coincident with its visible brightening, and its NH3 abundance spiked in 2002-2003. Jupiter's Equatorial Zone (EZ), a region marked by more subtle but widespread color and albedo change, showed high cloud thickness variability between 2007 and 2009. In Jupiter's North Equatorial Belt (NEB), the cloud thickened in 2005, then slowly decreased to a minimum value in 2010-2011. No temperature variations were associated with any of these changes, but we discovered temperature oscillations of approx. 2-4 K in all regions, with 4- or 8-year periods and phasing that was dissimilar in the different regions. There was also no detectable change in the para- vs. ortho-H2 ratio over time, leading to the possibility that it is driven from much deeper atmospheric levels and may be time-invariant. Our future work will continue to survey the variability of these properties through the Juno mission, which arrives at Jupiter in 2016, and to connect these observations with those made using raster-scanned images from 1980 to 1993 (Orton et al. 1996 Science 265, 625).

  20. Selection and adaptation in irradiated plant and animal populations: a review

    International Nuclear Information System (INIS)

    Hart, D.R.

    1981-03-01

    Available literature on the effects of ionizing radiation on mutation rates, variability and adaptive responses to selection in exposed plant and animal populations is reviewed. Accumulated variability, and hence potential selection differentials, may be increased many times over due to induced mutation. The radiation dose that maximizes induced mutation varies greatly among species, strains and genetic systems. Induced variability tends to enhance the response to selection, but this effect may be delayed or prevented by an initial reduction in the heritability of induced variation. Significantly, the detrimental effects of harmful mutations in irradiated populations may exceed the beneficial effects of selection for adaptive characteristics. Selection for radioresistance may occur at lethal or sub-lethal radiation doses, but dose relationships are highly variable. (author)

  1. Genetic and Psychosocial Predictors of Aggression: Variable Selection and Model Building With Component-Wise Gradient Boosting.

    Science.gov (United States)

    Suchting, Robert; Gowin, Joshua L; Green, Charles E; Walss-Bass, Consuelo; Lane, Scott D

    2018-01-01

    Rationale: Given datasets with a large or diverse set of predictors of aggression, machine learning (ML) provides efficient tools for identifying the most salient variables and building a parsimonious statistical model. ML techniques permit efficient exploration of data, have not been widely used in aggression research, and may have utility for those seeking prediction of aggressive behavior. Objectives: The present study examined predictors of aggression and constructed an optimized model using ML techniques. Predictors were derived from a dataset that included demographic, psychometric and genetic predictors, specifically FK506 binding protein 5 (FKBP5) polymorphisms, which have been shown to alter response to threatening stimuli, but have not been tested as predictors of aggressive behavior in adults. Methods: The data analysis approach utilized component-wise gradient boosting and model reduction via backward elimination to: (a) select variables from an initial set of 20 to build a model of trait aggression; and then (b) reduce that model to maximize parsimony and generalizability. Results: From a dataset of N = 47 participants, component-wise gradient boosting selected 8 of 20 possible predictors to model Buss-Perry Aggression Questionnaire (BPAQ) total score, with R2 = 0.66. This model was simplified using backward elimination, retaining six predictors: smoking status, psychopathy (interpersonal manipulation and callous affect), childhood trauma (physical abuse and neglect), and the FKBP5_13 gene (rs1360780). The six-factor model approximated the initial eight-factor model at 99.4% of R2. Conclusions: Using an inductive data science approach, the gradient boosting model identified predictors consistent with previous experimental work in aggression, specifically psychopathy and trauma exposure. Additionally, allelic variants in FKBP5 were identified for the first time, but the relatively small sample size limits generality of results and calls for…
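
    Component-wise gradient boosting performs variable selection implicitly: each boosting step fits simple one-variable base learners to the current residuals and updates only the best-fitting one, so variables that are never chosen keep a zero coefficient. A minimal L2-boosting sketch on synthetic data with the same dimensions as the study (47 observations, 20 candidate predictors; the data, learning rate and step count are illustrative assumptions, not the published analysis):

        import numpy as np

        rng = np.random.default_rng(10)
        # Hypothetical stand-in data: 20 candidate predictors, 3 with real signal
        X = rng.normal(size=(47, 20))
        y = 1.0 * X[:, 2] - 0.8 * X[:, 5] + 0.5 * X[:, 11] + rng.normal(scale=0.5, size=47)

        def componentwise_l2_boost(X, y, n_steps=200, nu=0.1):
            """Component-wise gradient boosting with simple linear base learners:
            at each step, fit the current residuals against every single predictor
            and update only the best-fitting one (shrunk by the learning rate nu)."""
            Xc = (X - X.mean(0)) / X.std(0)
            coef, intercept = np.zeros(X.shape[1]), y.mean()
            for _ in range(n_steps):
                resid = y - (intercept + Xc @ coef)
                betas = Xc.T @ resid / len(y)                 # univariate LS slopes (standardised X)
                sse = np.array([np.sum((resid - b * Xc[:, j]) ** 2) for j, b in enumerate(betas)])
                j = int(np.argmin(sse))                       # best single-variable base learner
                coef[j] += nu * betas[j]
            return coef

        coef = componentwise_l2_boost(X, y)
        print("selected predictors (nonzero coefficients):", np.flatnonzero(coef))

    A backward-elimination pass over the selected variables, as described in the abstract, would then be applied to the reduced model.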

  2. Inter and intra-observer variability of time-lapse annotations

    DEFF Research Database (Denmark)

    Sundvall Germeys, Linda Karin M; Ingerslev, Hans Jakob; Knudsen, Ulla Breth

    . This provides the basis for further investigation of embryo assessment and selection by time-lapse imaging in prospective trials. Study funding/competing interest(s): Research at the Fertility Clinic was funded by an unrestricted grant from Ferring and MSD. The authors have no competing interests to declare....

  3. INDUCED GENETIC VARIABILITY AND SELECTION FOR HIGH YIELDING MUTANTS IN BREAD WHEAT(TRITICUM AESTIVUM L.)

    International Nuclear Information System (INIS)

    SOBIEH, S.EL-S.S.

    2007-01-01

    This study was conducted during the two winter seasons of 2004/2005 and 2005/2006 at the experimental farm belonging to the Plant Research Department, Nuclear Research Centre, AEA, Egypt. The aim of this study was to determine the effect of gamma rays (150, 200 and 250 Gy) on the means of yield and its attributes for the exotic wheat variety vir-25, to induce genetic variability that permits visual selection within the irradiated populations, and to determine differences in seed protein patterns between the vir-25 parent variety and some selectants in the M2 generation. The results showed that the different doses of gamma rays had a non-significant effect on the mean value of yield/plant and significant effects on the mean values of its attributes. On the other hand, considerable genetic variability was generated as a result of applying gamma irradiation. The highest amount of induced genetic variability was detected for number of grains/spike, spike length and number of spikes/plant. Additionally, these three traits exhibited strong associations with grain yield/plant; hence, they were used as criteria for selection. Some variant plants were selected from the 250 Gy radiation treatment, with 2-10 spikes per plant. These variant plants exhibited increases in spike length and number of grains/spike. The results also revealed that the protein electrophoresis patterns varied in the number and position of bands from one genotype to another, and the various genotypes share bands with molecular weights of 31.4 and 3.2 kDa. Many bands were found to be specific to a genotype, and the nine wheat mutants were characterized by the presence of bands with molecular weights of 151.9, 125.7, 14.1 and 5.7 kDa at M-1; 67.4, 21.7 and 8.2 at M-2; 99.7 kDa at M-3; 136.1, 97.6, 49.8, 27.9 and 20.6 kDa at M-4; 135.2, 95.3 and 28.1 kDa at M-5; 135.5, 67.7, 47.1, 32.3, 21.9 and 9.6 kDa at M-6; 126.1, 112.1, 103.3, 58.8, 20.9 and 12.1 kDa at M-7; 127.7, 116.6, 93.9, 55.0 and 47.4 kDa at M-8; 141.7, 96.1, 79.8, 68.9, 42.1, 32.7, 22.0 and 13

  4. Numerical counting ratemeter with variable time constant and integrated circuits

    International Nuclear Information System (INIS)

    Kaiser, J.; Fuan, J.

    1967-01-01

    We present here the prototype of a numerical counting ratemeter which is a special version of variable time-constant frequency meter (1). The originality of this work lies in the fact that the change in the time constant is carried out automatically. Since the criterion for this change is the accuracy in the annunciated result, the integration time is varied as a function of the frequency. For the prototype described in this report, the time constant varies from 1 sec to 1 millisec. for frequencies in the range 10 Hz to 10 MHz. This prototype is built entirely of MECL-type integrated circuits from Motorola and is thus contained in two relatively small boxes. (authors) [fr

  5. Sensitivity of Variables with Time for Degraded RC Shear Wall with Low Steel Ratio under Seismic Load

    International Nuclear Information System (INIS)

    Park, Jun Hee; Choun, Young Sun; Choi, In Kil

    2011-01-01

    Various factors lead to the degradation of reinforced concrete (RC) shear walls over time. Steel section loss, concrete spalling and material strength have been considered in the structural analysis of degraded shear walls. When all variables related to degradation are considered in a probabilistic evaluation of a degraded shear wall, considerable time and effort are required. It is therefore necessary to identify the variables that are important to the structural behavior in order to conduct probabilistic seismic analyses of structures with age-related degradation efficiently. In this study, the variables were defined as functions of time to account for degradation with time, and the importance of these time-dependent variables to the seismic response was investigated through a sensitivity analysis.

  6. Efficient conservative ADER schemes based on WENO reconstruction and space-time predictor in primitive variables

    Science.gov (United States)

    Zanotti, Olindo; Dumbser, Michael

    2016-01-01

    We present a new version of conservative ADER-WENO finite volume schemes, in which both the high order spatial reconstruction as well as the time evolution of the reconstruction polynomials in the local space-time predictor stage are performed in primitive variables, rather than in conserved ones. To obtain a conservative method, the underlying finite volume scheme is still written in terms of the cell averages of the conserved quantities. Therefore, our new approach performs the spatial WENO reconstruction twice: the first WENO reconstruction is carried out on the known cell averages of the conservative variables. The WENO polynomials are then used at the cell centers to compute point values of the conserved variables, which are subsequently converted into point values of the primitive variables. This is the only place where the conversion from conservative to primitive variables is needed in the new scheme. Then, a second WENO reconstruction is performed on the point values of the primitive variables to obtain piecewise high order reconstruction polynomials of the primitive variables. The reconstruction polynomials are subsequently evolved in time with a novel space-time finite element predictor that is directly applied to the governing PDE written in primitive form. The resulting space-time polynomials of the primitive variables can then be directly used as input for the numerical fluxes at the cell boundaries in the underlying conservative finite volume scheme. Hence, the number of necessary conversions from the conserved to the primitive variables is reduced to just one single conversion at each cell center. We have verified the validity of the new approach over a wide range of hyperbolic systems, including the classical Euler equations of gas dynamics, the special relativistic hydrodynamics (RHD) and ideal magnetohydrodynamics (RMHD) equations, as well as the Baer-Nunziato model for compressible two-phase flows. In all cases we have noticed that the new ADER
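
    The single point-value conversion that the scheme hinges on is easiest to see for the classical Euler equations. The sketch below shows a generic conserved-to-primitive map for a 1-D ideal gas; it is an illustration only, not the paper's relativistic, MHD or two-phase variants, and the function and variable names are assumptions.

```python
import numpy as np

def cons_to_prim(U, gamma=1.4):
    """Convert Euler conserved variables (rho, rho*u, E) to primitives (rho, u, p).

    U has shape (3, n_cells); an ideal-gas equation of state is assumed.
    """
    rho = U[0]
    u = U[1] / rho
    E = U[2]
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([rho, u, p])

def prim_to_cons(W, gamma=1.4):
    """Inverse map, needed only once per step at the cell centers in the scheme."""
    rho, u, p = W
    return np.array([rho, rho * u, p / (gamma - 1.0) + 0.5 * rho * u**2])

# Round-trip check on a smooth 1-D state
x = np.linspace(0.0, 1.0, 64)
W = np.array([1.0 + 0.2 * np.sin(2 * np.pi * x),      # density
              0.1 * np.ones_like(x),                   # velocity
              np.ones_like(x)])                        # pressure
assert np.allclose(cons_to_prim(prim_to_cons(W)), W)
```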

  7. Selective logging: do rates of forest turnover in stems, species composition and functional traits decrease with time since disturbance? - A 45 year perspective.

    Science.gov (United States)

    Osazuwa-Peters, Oyomoare L; Jiménez, Iván; Oberle, Brad; Chapman, Colin A; Zanne, Amy E

    2015-12-01

    Selective logging, the targeted harvesting of timber trees in a single cutting cycle, is globally rising in extent and intensity. Short-term impacts of selective logging on tropical forests have been widely investigated, but long-term effects on temporal dynamics of forest structure and composition are largely unknown. Understanding these long-term dynamics will help determine whether tropical forests are resilient to selective logging and inform choices between competing demands of anthropogenic use versus conservation of tropical forests. Forest dynamics can be studied within the framework of succession theory, which predicts that temporal turnover rates should decline with time since disturbance. Here, we investigated the temporal dynamics of a tropical forest in Kibale National Park, Uganda, over 45 years following selective logging. We estimated turnover rates in stems, species composition, and functional traits (wood density and diameter at breast height), using observations from four censuses in 1989, 1999, 2006, and 2013, of stems ≥ 10 cm diameter within 17 unlogged and 9 logged 200 × 10 m vegetation plots. We used null models to account for interdependencies among turnover rates in stems, species composition, and functional traits. We tested predictions that turnover rates should be higher and decrease with increasing time since the selective logging event in logged forest, but should be less temporally variable in unlogged forest. Overall, we found higher turnover rates in logged forest for all three attributes, but turnover rates did not decline through time in logged forest and were not less temporally variable in unlogged forest. These results indicate that successional models that assume recovery to pre-disturbance conditions are inadequate for predicting the effects of selective logging on the dynamics of the tropical forest in Kibale. Selective logging resulted in persistently higher turnover rates, which may compromise the carbon storage capacity

  8. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests' features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
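
    A stripped-down version of the first idea (ranking the time-series composites with Random Forest variable importance and re-checking accuracy with only the top-ranked half) might look as follows; the synthetic features merely stand in for the CCRS 10-day MODIS composites, and the class labels are toy values.

```python
# Sketch: Random Forest variable importance used to pick a subset of time-series
# features, then accuracy compared against the full feature set. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_composites = 500, 36          # e.g. 36 composites across a season
X = rng.normal(size=(n_samples, n_composites))
y = (X[:, 5] + X[:, 12] - X[:, 20] > 0).astype(int)   # toy land-cover labels

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]

full_acc = cross_val_score(rf, X, y, cv=5).mean()
top_half = order[: n_composites // 2]       # keep only the most important half
half_acc = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                           X[:, top_half], y, cv=5).mean()
print(f"accuracy with full set: {full_acc:.3f}   with top half: {half_acc:.3f}")
```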

  9. Selecting groups of covariates in the elastic net

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

    This paper introduces a novel method to select groups of variables in sparse regression and classification settings. The groups are formed based on the correlations between covariates and ensure that, for example, spatial or spectral relations are preserved without explicitly coding for these... The preservation of relations gives increased interpretability. The method is based on the elastic net and adaptively selects highly correlated groups of variables, and therefore does not waste time grouping variables that are irrelevant for the problem at hand. The method is illustrated on a simulated data set...
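
    As a generic illustration of the underlying idea (letting correlation structure define groups while an elastic net does the selection), one can fit an elastic net and then report which correlation clusters the selected covariates fall into. This sketch does not reproduce the paper's adaptive grouping algorithm; the data and the 0.8 correlation cut-off are arbitrary assumptions.

```python
# Generic illustration: elastic-net selection plus a correlation-based grouping of
# the covariates. Not the paper's adaptive algorithm; data and thresholds are arbitrary.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(2)
n = 200
base = rng.normal(size=(n, 5))                       # 5 latent signals
X = np.hstack([base + 0.1 * rng.normal(size=(n, 5)) for _ in range(6)])  # 30 correlated covariates
y = base[:, 0] + 0.5 * base[:, 1] + rng.normal(scale=0.3, size=n)

enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y)
selected = np.flatnonzero(enet.coef_ != 0)

# Cluster covariates whose absolute correlation exceeds ~0.8 into groups.
dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=0.2, criterion="distance")

for g in np.unique(labels[selected]):
    members = np.flatnonzero(labels == g)
    print(f"group {g}: covariates {members.tolist()} "
          f"({np.isin(members, selected).sum()} selected)")
```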

  10. Time-variable gravity fields derived from GPS tracking of Swarm

    Czech Academy of Sciences Publication Activity Database

    Bezděk, Aleš; Sebera, Josef; da Encarnacao, J.T.; Klokočník, Jaroslav

    2016-01-01

    Roč. 205, č. 3 (2016), s. 1665-1669 ISSN 0956-540X R&D Projects: GA MŠk LG14026; GA ČR GA13-36843S Institutional support: RVO:67985815 Keywords : satellite geodesy * time variable gravity * global change from geodesy Subject RIV: DD - Geochemistry Impact factor: 2.414, year: 2016

  11. Energy-efficient relay selection and optimal power allocation for performance-constrained dual-hop variable-gain AF relaying

    KAUST Repository

    Zafar, Ammar; Radaydeh, Redha Mahmoud Mesleh; Chen, Yunfei; Alouini, Mohamed-Slim

    2013-01-01

    This paper investigates the energy-efficiency enhancement of a variable-gain dual-hop amplify-and-forward (AF) relay network utilizing selective relaying. The objective is to minimize the total consumed power while keeping the end-to-end signal

  12. Correlates of adolescent sleep time and variability in sleep time: the role of individual and health related characteristics.

    Science.gov (United States)

    Moore, Melisa; Kirchner, H Lester; Drotar, Dennis; Johnson, Nathan; Rosen, Carol; Redline, Susan

    2011-03-01

    Adolescents are predisposed to short sleep duration and irregular sleep patterns due to certain host characteristics (e.g., age, pubertal status, gender, ethnicity, socioeconomic class, and neighborhood distress) and health-related variables (e.g., ADHD, asthma, birth weight, and BMI). The aim of the current study was to investigate the relationship between such variables and actigraphic measures of sleep duration and variability. Cross-sectional study of 247 adolescents (48.5% female, 54.3% ethnic minority, mean age of 13.7 years) involved in a larger community-based cohort study. Significant univariate predictors of sleep duration included gender, minority ethnicity, neighborhood distress, parent income, and BMI. In multivariate models, gender, minority status, and BMI were significantly associated with sleep duration (all p < .05), with non-minority adolescents and those of a lower BMI obtaining more sleep. Univariate models demonstrated that age, minority ethnicity, neighborhood distress, parent education, parent income, pubertal status, and BMI were significantly related to variability in total sleep time. In the multivariate model, age, minority status, and BMI were significantly related to variability in total sleep time (all p < .05), with non-minority adolescents and those of a lower BMI obtaining more regular sleep. These data show differences in sleep patterns in population sub-groups of adolescents which may be important in understanding pediatric health risk profiles. Sub-groups that may particularly benefit from interventions aimed at improving sleep patterns include boys, overweight, and minority adolescents. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. A variational conformational dynamics approach to the selection of collective variables in metadynamics

    Science.gov (United States)

    McCarty, James; Parrinello, Michele

    2017-11-01

    In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal leading to slow convergence. However by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.
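
    For linear combinations of the sampled collective variables, the time-lagged independent component analysis step reduces to a generalized eigenvalue problem built from the instantaneous and time-lagged covariance matrices. A bare-bones sketch is shown below, with a synthetic trajectory standing in for metadynamics output and no reweighting of the biased sampling; it is meant only to show the linear algebra involved.

```python
# Bare-bones TICA: slowest linear combinations of input collective variables,
# obtained from the lag-tau covariance via a generalized eigenvalue problem.
# Synthetic data; a real application would use (reweighted) metadynamics output.
import numpy as np
from scipy.linalg import eigh

def tica(traj, lag):
    """traj: (n_frames, n_features) time series; returns eigenvalues and TICA vectors."""
    X = traj - traj.mean(axis=0)
    X0, Xt = X[:-lag], X[lag:]
    C0 = X0.T @ X0 / len(X0)                      # instantaneous covariance
    Ct = 0.5 * (X0.T @ Xt + Xt.T @ X0) / len(X0)  # symmetrized time-lagged covariance
    evals, evecs = eigh(Ct, C0)                   # solve  Ct v = lambda C0 v
    order = np.argsort(evals)[::-1]               # largest eigenvalue = slowest mode
    return evals[order], evecs[:, order]

# Toy example: a slow coordinate hidden among faster, noisier ones
rng = np.random.default_rng(3)
n = 20000
slow = 0.01 * np.cumsum(rng.normal(size=n))       # slowly varying signal
traj = np.column_stack([slow + 0.1 * rng.normal(size=n),
                        rng.normal(size=n),
                        0.5 * slow + rng.normal(size=n)])
evals, modes = tica(traj, lag=100)
print("leading lagged autocorrelations:", np.round(evals, 3))
```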

  14. Real time event selection and flash analog-to-digital converters

    International Nuclear Information System (INIS)

    Imori, Masatosi

    1983-01-01

    In high-energy particle experiments, high-speed analog logic is employed to select events on a real-time basis. Flash analog-to-digital converters replace the high-speed analog logic with digital logic. The digital logic gives great flexibility to the scheme for real-time event selection. This paper proposes the use of flash A/D converters for the logic used to obtain the total sum of the energy deposited in individual counters in a shower detector. (author)

  15. Evaluating Selection and Timing Ability of a Mutual Fund

    Directory of Open Access Journals (Sweden)

    Duguleană L.

    2009-12-01

    The paper presents the methodology and a case study for evaluating the performance of a mutual fund by examining the timing and selection abilities of a portfolio manager. Two major models are used to separate the timing and selection abilities of the fund manager. The mutual fund chosen for study is the German blue-chip fund “DWS Deutsche Aktien Typ O”, which includes most of the DAX 30 companies. The data consist of 117 monthly observations of the fund returns from January 1999 to September 2008. We used EViews to analyse the data.

  16. Excitation of Earth Rotation Variations "Observed" by Time-Variable Gravity

    Science.gov (United States)

    Chao, Ben F.; Cox, C. M.

    2005-01-01

    Time variable gravity measurements have been made over the past two decades using the space geodetic technique of satellite laser ranging, and more recently by the GRACE satellite mission with improved spatial resolutions. The degree-2 harmonic components of the time-variable gravity contain important information about the Earth's length-of-day and polar motion excitation functions, in a way independent of the traditional "direct" Earth rotation measurements made by, for example, very-long-baseline interferometry and GPS. In particular, the (degree = 2, order = 1) components give the mass term of the polar motion excitation; the (2,0) component, under certain mass conservation conditions, gives the mass term of the length-of-day excitation. Combining these with yet another independent source of angular momentum estimation calculated from global geophysical fluid models (for example the atmospheric angular momentum, in both mass and motion terms) can in principle lead to new insights into the dynamics, particularly the role, or the lack thereof, of the cores, in the excitation processes of the Earth rotation variations.

  17. Petroleomics by electrospray ionization FT-ICR mass spectrometry coupled to partial least squares with variable selection methods: prediction of the total acid number of crude oils.

    Science.gov (United States)

    Terra, Luciana A; Filgueiras, Paulo R; Tose, Lílian V; Romão, Wanderson; de Souza, Douglas D; de Castro, Eustáquio V R; de Oliveira, Mirela S L; Dias, Júlio C M; Poppi, Ronei J

    2014-10-07

    Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to a Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(-)-FT-ICR mass spectra present a power of resolution of ca. 500,000 and a mass accuracy less than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, which are identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate the spectral interpretation, three methods of variable selection were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method seems to be more appropriate for selecting important variables, reducing the dimension of the variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables with their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
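
    Of the three selection methods compared, VIP is the most compact to reproduce from a fitted PLS model. The sketch below computes VIP scores with scikit-learn's PLSRegression on synthetic data; the actual ~5700-peak ESI(-)-FT-ICR matrix and the UVE procedure the authors found preferable are not reproduced here, and the "VIP > 1" cut-off is the conventional rule of thumb rather than the paper's choice.

```python
# Sketch: Variable Importance in the Projection (VIP) scores from a PLS model.
# Synthetic data; in the paper X would hold the assigned mass-spectral peaks and
# y the measured total acid number (TAN).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Standard VIP formula from the weights, scores and y-loadings of a fitted PLS."""
    T = pls.x_scores_                 # (n_samples, n_components)
    W = pls.x_weights_                # (n_features, n_components)
    Q = pls.y_loadings_               # (n_targets, n_components)
    p, _ = W.shape
    ss = (Q[0] ** 2) * np.sum(T ** 2, axis=0)          # y-variance explained per component
    wnorm = (W / np.linalg.norm(W, axis=0)) ** 2
    return np.sqrt(p * (wnorm @ ss) / ss.sum())

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 300))
y = X[:, 10] - 0.5 * X[:, 50] + rng.normal(scale=0.2, size=60)

pls = PLSRegression(n_components=5).fit(X, y)
vip = vip_scores(pls)
selected = np.flatnonzero(vip > 1.0)                   # conventional "VIP > 1" cut-off
print(f"{selected.size} of {X.shape[1]} variables retained")
```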

  18. Variable slip wind generator modeling for real-time simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gagnon, R.; Brochu, J.; Turmel, G. [Hydro-Quebec, Varennes, PQ (Canada). IREQ

    2006-07-01

    A model of a wind turbine using a variable slip wound-rotor induction machine was presented. The model was created as part of a library of generic wind generator models intended for wind integration studies. The stator winding of the wind generator was connected directly to the grid and the rotor was driven by the turbine through a drive train. The variable rotor resistance was synthesized by an external resistor in parallel with a diode rectifier. A forced-commutated power electronic device (IGBT) was connected to the wound rotor by slip rings and brushes. Simulations were conducted in a Matlab/Simulink environment using SimPowerSystems blocks to model power system elements and Simulink blocks to model the turbine, control system and drive train. Detailed descriptions of the turbine, the drive train and the control system were provided. The model's implementation in the simulator was also described. A case study demonstrating the real-time simulation of a wind generator connected at the distribution level of a power system was presented. Results of the case study were then compared with results obtained from the SimPowerSystems off-line simulation. Results showed good agreement between the waveforms, demonstrating the conformity of the real-time and the off-line simulations. The capability of Hypersim for real-time simulation of wind turbines with power electronic converters in a distribution network was demonstrated. It was concluded that hardware-in-the-loop (HIL) simulation of wind turbine controllers for wind integration studies in power systems is now feasible. 5 refs., 1 tab., 6 figs.

  19. Demographic Variables and Selective, Sustained Attention and Planning through Cognitive Tasks among Healthy Adults

    OpenAIRE

    Afsaneh Zarghi; Zali; A; Tehranidost; M; Mohammad Reza Zarindast; Ashrafi; F; Doroodgar; Khodadadi

    2011-01-01

    Introduction: Cognitive tasks are considered applicable and appropriate for assessing cognitive domains. The purpose of our study was to determine whether age, sex and education are related to selective attention, sustained attention and planning abilities, as measured by computerized cognitive tasks, among healthy adults. Methods: A cross-sectional study was carried out over 6 months, from June to November 2010, on 84 healthy adults (42 male and 42 female). The whole part...

  20. The impact of selected organizational variables and managerial leadership on radiation therapists' organizational commitment

    Energy Technology Data Exchange (ETDEWEB)

    Akroyd, Duane [Department of Adult and Community College Education, College of Education, Campus Box 7801, North Carolina State University, Raleigh, NC 27695 (United States)], E-mail: duane_akroyd@ncsu.edu; Legg, Jeff [Department of Radiologic Sciences, Virginia Commonwealth University, Richmond, VA 23284 (United States); Jackowski, Melissa B. [Division of Radiologic Sciences, University of North Carolina School of Medicine 27599 (United States); Adams, Robert D. [Department of Radiation Oncology, University of North Carolina School of Medicine 27599 (United States)

    2009-05-15

    The purpose of this study was to examine the impact of selected organizational factors and the leadership behavior of supervisors on radiation therapists' commitment to their organizations. The population for this study consisted of all full-time clinical radiation therapists registered by the American Registry of Radiologic Technologists (ARRT) in the United States. A random sample of 800 radiation therapists was obtained from the ARRT for this study. Questionnaires were mailed to all participants and measured organizational variables, a managerial leadership variable and three components of organizational commitment (affective, continuance and normative). It was determined that organizational support and the leadership behavior of supervisors each had a significant and positive effect on the normative and affective commitment of radiation therapists, and each of the models predicted over 40% of the variance in radiation therapists' organizational commitment. This study examined radiation therapists' commitment to their organizations and found that affective (emotional attachment to the organization) and normative (feelings of obligation to the organization) commitments were more important than continuance commitment (awareness of the costs of leaving the organization). This study can help radiation oncology administrators and physicians to understand the values their radiation therapy employees hold that are predictive of their commitment to the organization. A crucial result of the study is the importance of the perceived support of the organization and the leadership skills of managers/supervisors on radiation therapists' commitment to the organization.

  1. Enhanced Requirements for Assessment in a Competency-Based, Time-Variable Medical Education System.

    Science.gov (United States)

    Gruppen, Larry D; Ten Cate, Olle; Lingard, Lorelei A; Teunissen, Pim W; Kogan, Jennifer R

    2018-03-01

    Competency-based, time-variable medical education has reshaped the perceptions and practices of teachers, curriculum designers, faculty developers, clinician educators, and program administrators. This increasingly popular approach highlights the fact that learning among different individuals varies in duration, foundation, and goal. Time variability places particular demands on the assessment data that are so necessary for making decisions about learner progress. These decisions may be formative (e.g., feedback for improvement) or summative (e.g., decisions about advancing a student). This article identifies challenges to collecting assessment data and to making assessment decisions in a time-variable system. These challenges include managing assessment data, defining and making valid assessment decisions, innovating in assessment, and modeling the considerable complexity of assessment in real-world settings and richly interconnected social systems. There are hopeful signs of creativity in assessment both from researchers and practitioners, but the transition from a traditional to a competency-based medical education system will likely continue to create much controversy and offer opportunities for originality and innovation in assessment.

  2. Aggressive time step selection for the time asymptotic velocity diffusion problem

    International Nuclear Information System (INIS)

    Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.

    1984-12-01

    An aggressive time step selector for an ADI algorithm is presented and applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More importantly, the robustness of the procedure greatly reduces the workload of the user. The procedure selects a nearly optimal Δt with a minimum of intervention by the user, thus relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large.
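
    The general idea of an accuracy-driven time-step selector (grow Δt aggressively and only back off when a self-consistency check fails) can be sketched independently of the ADI Fokker-Planck solver itself. Below, advance() is a placeholder backward-Euler update for a scalar decay problem and the tolerances are arbitrary; this is only the control logic such a selector automates, not the authors' algorithm.

```python
# Sketch of accuracy-driven time-step control: compare one step of size dt with two
# steps of size dt/2 and grow or shrink dt accordingly. advance() is a placeholder
# implicit update, not an ADI Fokker-Planck solver.
import numpy as np

def advance(u, dt, k=1.0):
    """Placeholder implicit update for du/dt = -k*u (backward Euler)."""
    return u / (1.0 + k * dt)

def adaptive_march(u, t_end, dt=1e-3, tol=1e-5, grow=2.0, shrink=0.5):
    t = 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        coarse = advance(u, dt)
        fine = advance(advance(u, dt / 2), dt / 2)
        err = np.max(np.abs(fine - coarse)) / (np.max(np.abs(fine)) + 1e-30)
        if err > tol:                      # reject: result not accurate enough
            dt *= shrink
            continue
        u, t = fine, t + dt                # accept the step
        if err < 0.1 * tol:
            dt *= grow                     # be aggressive: try a much larger step next
    return u

print(adaptive_march(np.array([1.0]), t_end=5.0))   # exact answer is exp(-5) ~ 0.0067
```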

  3. Discrete-time bidirectional associative memory neural networks with variable delays

    International Nuclear Information System (INIS)

    Liang Jinling; Cao Jinde; Ho, Daniel W.C.

    2005-01-01

    Based on the linear matrix inequality (LMI), some sufficient conditions are presented in this Letter for the existence, uniqueness and global exponential stability of the equilibrium point of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Some of the stability criteria obtained in this Letter are delay-dependent, and some of them are delay-independent, they are less conservative than the ones reported so far in the literature. Furthermore, the results provide one more set of easily verified criteria for determining the exponential stability of discrete-time BAM neural networks

  4. Discrete-time bidirectional associative memory neural networks with variable delays

    Science.gov (United States)

    Liang, J.; Cao, J.; Ho, D. W. C.

    2005-02-01

    Based on the linear matrix inequality (LMI), some sufficient conditions are presented in this Letter for the existence, uniqueness and global exponential stability of the equilibrium point of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Some of the stability criteria obtained in this Letter are delay-dependent, and some of them are delay-independent, they are less conservative than the ones reported so far in the literature. Furthermore, the results provide one more set of easily verified criteria for determining the exponential stability of discrete-time BAM neural networks.

  5. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.

    Science.gov (United States)

    Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried

    2014-01-01

    Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical
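
    To make the roles of m, r and N concrete, here is a direct O(N²) sample entropy implementation with the conventional m = 2 and r = 0.2σ; the RR series is synthetic rather than one of the PhysioNet records discussed in the study.

```python
# Direct O(N^2) sample entropy with template length m and tolerance r (0.2*std here).
# Synthetic RR intervals stand in for the PhysioNet records used in the study.
import numpy as np

def _matches(templates, r):
    # Chebyshev distance between all template pairs; count pairs within tolerance r,
    # excluding self-matches (i == j).
    d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
    return (np.sum(d <= r) - len(templates)) / 2

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    win = np.lib.stride_tricks.sliding_window_view
    B = _matches(win(x, m)[: len(x) - m], r)     # N-m templates of length m
    A = _matches(win(x, m + 1), r)               # N-m templates of length m+1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(5)
rr = 0.8 + 0.05 * rng.standard_normal(1000)      # synthetic RR intervals in seconds
print(f"SampEn(m=2, r=0.2*sigma, N=1000) = {sample_entropy(rr):.3f}")
```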

  6. Heart-Rate Variability During Deep Sleep in World-Class Alpine Skiers: A Time-Efficient Alternative to Morning Supine Measurements.

    Science.gov (United States)

    Herzig, David; Testorelli, Moreno; Olstad, Daniela Schäfer; Erlacher, Daniel; Achermann, Peter; Eser, Prisca; Wilhelm, Matthias

    2017-05-01

    It is increasingly popular to use heart-rate variability (HRV) to tailor training for athletes. A time-efficient method is HRV assessment during deep sleep. To validate the selection of deep-sleep segments identified by RR intervals with simultaneous electroencephalography (EEG) recordings and to compare HRV parameters of these segments with those of standard morning supine measurements. In 11 world-class alpine skiers, RR intervals were monitored during 10 nights, and simultaneous EEGs were recorded during 2-4 nights. Deep sleep was determined from the HRV signal and verified by delta power from the EEG recordings. Four further segments were chosen for HRV determination, namely, a 4-h segment from midnight to 4 AM and three 5-min segments: 1 just before awakening, 1 after waking in supine position, and 1 in standing after orthostatic challenge. Training load was recorded every day. A total of 80 night and 68 morning measurements of 9 athletes were analyzed. Good correspondence between the phases selected by RR intervals vs those selected by EEG was found. Concerning root-mean-squared difference of successive RR intervals (RMSSD), a marker for parasympathetic activity, the best relationship with the morning supine measurement was found in deep sleep. HRV is a simple tool for approximating deep-sleep phases, and HRV measurement during deep sleep could provide a time-efficient alternative to HRV in supine position.
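
    RMSSD, the parasympathetic marker used for the comparison, is a one-line computation once the RR intervals of a segment (deep sleep, supine or standing) have been extracted; the values below are synthetic, not the athletes' recordings.

```python
# RMSSD: root mean square of successive RR-interval differences, in milliseconds.
import numpy as np

def rmssd(rr_ms):
    rr_ms = np.asarray(rr_ms, dtype=float)
    return np.sqrt(np.mean(np.diff(rr_ms) ** 2))

rng = np.random.default_rng(6)
deep_sleep_rr = 1000 + 40 * rng.standard_normal(300)   # synthetic segment, ~60 bpm
print(f"RMSSD = {rmssd(deep_sleep_rr):.1f} ms")
```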

  7. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Mather, Barry

    2017-08-24

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics, such as the highest and lowest voltage occurring on the feeder, number of voltage regulator tap operations, and total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91 percent reduction with less than 5 percent error when predicting voltage regulator operations.
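
    The essence of the variable-time-step idea (solve the full power flow only when the inputs have moved enough to matter, and skip ahead otherwise) can be shown independently of any particular distribution solver. The loop below is a schematic with a placeholder solve_power_flow() and arbitrary thresholds; it is not the solver described in the preprint.

```python
# Schematic variable-time-step QSTS loop: skip full power-flow solutions while the
# PV and load inputs change slowly, and fall back to 1-s resolution when they jump.
# solve_power_flow() is a stand-in, not an actual distribution solver.
import numpy as np

def solve_power_flow(pv, load):
    return 1.0 + 0.02 * load - 0.015 * pv          # placeholder "feeder voltage"

rng = np.random.default_rng(7)
seconds = 3600
pv = np.clip(np.cumsum(rng.normal(0, 0.01, seconds)) + 0.5, 0, 1)   # 1-s PV profile
load = 0.6 + 0.1 * np.sin(np.linspace(0, np.pi, seconds))

threshold, max_step = 0.02, 60                     # re-solve if inputs move by >2%
solutions, t = {}, 0
while t < seconds:
    solutions[t] = solve_power_flow(pv[t], load[t])
    step = 1
    while (step < max_step and t + step < seconds
           and abs(pv[t + step] - pv[t]) < threshold
           and abs(load[t + step] - load[t]) < threshold):
        step += 1
    t += step

print(f"power flows solved: {len(solutions)} of {seconds} time steps")
```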

  8. Change in intraindividual variability over time as a key metric for defining performance-based cognitive fatigability.

    Science.gov (United States)

    Wang, Chao; Ding, Mingzhou; Kluger, Benzi M

    2014-03-01

    Cognitive fatigability is conventionally quantified as the increase over time in either mean reaction time (RT) or error rate from two or more time periods during sustained performance of a prolonged cognitive task. There is evidence indicating that these mean performance measures may not sufficiently reflect the response characteristics of cognitive fatigue. We hypothesized that changes in intraindividual variability over time would be a more sensitive and ecologically meaningful metric for investigations of fatigability of cognitive performance. To test the hypothesis fifteen young adults were recruited. Trait fatigue perceptions in various domains were assessed with the Multidimensional Fatigue Index (MFI). Behavioral data were then recorded during performance of a three-hour continuous cued Stroop task. Results showed that intraindividual variability, as quantified by the coefficient of variation of RT, increased linearly over the course of three hours and demonstrated a significantly greater effect size than mean RT or accuracy. Change in intraindividual RT variability over time was significantly correlated with relevant subscores of the MFI including reduced activity, reduced motivation and mental fatigue. While change in mean RT over time was also correlated with reduced motivation and mental fatigue, these correlations were significantly smaller than those associated with intraindividual RT variability. RT distribution analysis using an ex-Gaussian model further revealed that change in intraindividual variability over time reflects an increase in the exponential component of variance and may reflect attentional lapses or other breakdowns in cognitive control. These results suggest that intraindividual variability and its change over time provide important metrics for measuring cognitive fatigability and may prove useful for inferring the underlying neuronal mechanisms of both perceptions of fatigue and objective changes in performance. Copyright © 2014
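
    The paper's key metric, the coefficient of variation of reaction time tracked across successive blocks of time-on-task, is straightforward to compute; the reaction times below are simulated with a spread that grows over the session, purely for illustration.

```python
# Coefficient of variation of reaction time per time-on-task block, the fatigability
# metric discussed in the abstract. Reaction times here are simulated.
import numpy as np

rng = np.random.default_rng(8)
n_blocks, trials_per_block = 12, 150               # ~3 h split into blocks
rts = [rng.normal(550, 60 + 6 * b, trials_per_block) for b in range(n_blocks)]  # ms

cv = np.array([block.std() / block.mean() for block in rts])
slope = np.polyfit(np.arange(n_blocks), cv, 1)[0]
print("CV per block:", np.round(cv, 3))
print(f"linear trend in CV per block: {slope:+.4f}")
```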

  9. REAL-TIME FLAVOUR TAGGING SELECTION IN ATLAS

    CERN Document Server

    Bokan, Petar; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment includes a well-developed trigger system that allows a selection of events which are thought to be of interest, while achieving a high overall rejection against less interesting processes. An important part of the online event selection is the ability to distinguish, in real time, between jets arising from heavy-flavour quarks (b- and c-jets) and light jets (jets from u-, d- and s-quarks and from gluons). This is essential for many physics analyses that include processes with large jet multiplicity and b-quarks in the final state. Many changes were made to the ATLAS online b-jet selection for Run 2 of the LHC. An overview of the b-jet trigger strategy and performance during 2015 data taking is presented. The ability to use complex offline Multivariate (MV2) b-tagging algorithms directly at the High Level Trigger (HLT) was tested in this period. Details on online tagging algorithms are given together with the plans on how to adapt to the new high-luminosity and increased pileup conditions by ex...

  10. Predictor variables for half marathon race time in recreational female runners

    OpenAIRE

    Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rosemann, Thomas; Lepers, Romuald

    2011-01-01

    INTRODUCTION: The relationship between skin-fold thickness and running performance has been investigated from 100 m to the marathon distance, except for the half-marathon distance. OBJECTIVE: To investigate whether anthropometric characteristics or training practices were related to race time in 42 recreational female half-marathoners, to determine the predictor variables of half-marathon race time and to inform future novice female half-marathoners. METHODS: Observational field study at the ‘Half ...

  11. Earth System Data Records of Mass Transport from Time-Variable Gravity Data

    Science.gov (United States)

    Zlotnicki, V.; Talpe, M.; Nerem, R. S.; Landerer, F. W.; Watkins, M. M.

    2014-12-01

    Satellite measurements of time variable gravity have revolutionized the study of Earth, by measuring the ice losses of Greenland, Antarctica and land glaciers, changes in groundwater including unsustainable losses due to extraction of groundwater, the mass and currents of the oceans and their redistribution during El Niño events, among other findings. Satellite measurements of gravity have been made primarily by four techniques: satellite tracking from land stations using either lasers or Doppler radio systems, satellite positioning by GNSS/GPS, satellite to satellite tracking over distances of a few hundred km using microwaves, and through a gravity gradiometer (radar altimeters also measure the gravity field, but over the oceans only). We discuss the challenges in the measurement of gravity by different instruments, especially time-variable gravity. A special concern is how to bridge a possible gap in time between the end of life of the current GRACE satellite pair, launched in 2002, and a future GRACE Follow-On pair to be launched in 2017. One challenge in combining data from different measurement systems consists of their different spatial and temporal resolutions and the different ways in which they alias short time scale signals. Typically satellite measurements of gravity are expressed in spherical harmonic coefficients (although expansions in terms of 'mascons', the masses of small spherical caps, has certain advantages). Taking advantage of correlations among spherical harmonic coefficients described by empirical orthogonal functions and derived from GRACE data it is possible to localize the otherwise coarse spatial resolution of the laser and Doppler derived gravity models. This presentation discusses the issues facing a climate data record of time variable mass flux using these different data sources, including its validation.

  12. Time variable cosmological constants from the age of universe

    International Nuclear Information System (INIS)

    Xu Lixin; Lu Jianbo; Li Wenbo

    2010-01-01

    In this Letter, a time-variable cosmological constant, dubbed the age cosmological constant, is investigated, motivated by the fact that any cosmological length scale or time scale can introduce a cosmological constant or vacuum energy density into Einstein's theory. The age cosmological constant takes the form ρ_Λ = 3c²M_P²/t_Λ², where t_Λ is the age or conformal age of our universe. The effective equations of state (EoS) of the age cosmological constant are w_Λ^eff = -1 + (2/3)√(Ω_Λ)/c and w_Λ^eff = -1 + (2/3)(√(Ω_Λ)/c)(1+z) when the age and the conformal age of the universe, respectively, are taken as the cosmological time scale. These EoS are the same as those of the so-called agegraphic dark energy models; however, the evolution histories differ from the agegraphic ones because the evolution equations differ.
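
    The quoted effective EoS follows from energy conservation if the vacuum component is assumed to be separately conserved; a brief sketch of that standard derivation for the cosmic-time case (the conformal-age case picks up the extra factor of 1+z through dt_Λ/dt = 1/a) is:

```latex
% Sketch, assuming \dot{\rho}_\Lambda + 3H\,(1+w_\Lambda^{\mathrm{eff}})\,\rho_\Lambda = 0.
\rho_\Lambda = \frac{3c^2 M_P^2}{t_\Lambda^2}
  \;\Rightarrow\;
  \dot{\rho}_\Lambda = -\frac{2\rho_\Lambda}{t_\Lambda},
\qquad
\Omega_\Lambda \equiv \frac{\rho_\Lambda}{3M_P^2 H^2} = \frac{c^2}{H^2 t_\Lambda^2}
  \;\Rightarrow\;
  \frac{1}{H t_\Lambda} = \frac{\sqrt{\Omega_\Lambda}}{c},
\qquad
1 + w_\Lambda^{\mathrm{eff}} = -\frac{\dot{\rho}_\Lambda}{3H\rho_\Lambda}
  = \frac{2}{3H t_\Lambda}
  = \frac{2}{3}\,\frac{\sqrt{\Omega_\Lambda}}{c}.
```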

  13. On the physical processes which lie at the bases of time variability of GRBs

    International Nuclear Information System (INIS)

    Ruffini, R.; Bianco, C. L.; Fraschetti, F.; Xue, S-S.

    2001-01-01

    The relative-space-time-transformation (RSTT) paradigm and the interpretation of the burst-structure (IBS) paradigm are applied to probe the origin of the time variability of GRBs. Again GRB 991216 is used as a prototypical case, thanks to the precise data from the CGRO, RXTE and Chandra satellites. It is found that, with the exception of the relatively inconspicuous but scientifically very important signal originating from the initial proper gamma ray burst (P-GRB), all the other spikes and time variabilities can be explained by the interaction of the accelerated-baryonic-matter pulse with inhomogeneities in the interstellar matter. This can be demonstrated by using the RSTT paradigm as well as the IBS paradigm to trace a typical spike observed in arrival time back to the corresponding one in the laboratory time. Using these paradigms, the identification of the physical nature of the time variability of the GRBs can be made most convincingly. The dependence of (a) the intensity of the afterglow, (b) the spike amplitudes and (c) the actual time structure on the Lorentz gamma factor of the accelerated-baryonic-matter pulse is made explicit. In principle it is possible to read off from the spike structure the detailed density contrast of the interstellar medium in the host galaxy, even at very high redshift.

  14. Antipersistent dynamics in short time scale variability of self-potential signals

    Directory of Open Access Journals (Sweden)

    M. Ragosta

    2000-06-01

    Time scale properties of self-potential signals are investigated through the analysis of the second-order structure function (variogram), a powerful tool for investigating the spatial and temporal variability of observational data. In this work we analyse two sequences of self-potential values measured by means of a geophysical monitoring array located in a seismically active area of Southern Italy. The range of scales investigated goes from a few minutes to several days. It is shown that signal fluctuations are characterised by two time scale ranges in which self-potential variability appears to follow slightly different dynamical behaviours. Results point to the presence of fractal, non-stationary features expressing a long-term correlation, with scaling coefficients that point to stabilising mechanisms. In the scale ranges in which the series show scale-invariant behaviour, self-potentials evolve like fractional Brownian motions with anticorrelated increments, typical of processes regulated by negative feedback mechanisms (antipersistence). On scales below about 6 h the strength of this antipersistence appears to be slightly greater than that observed on larger time scales, where the fluctuations are less efficiently stabilised.
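
    The second-order structure function at the centre of the analysis is simple to estimate from an evenly sampled record, and the slope of its power-law range gives the scaling exponent (2H) that separates persistent from antipersistent behaviour (H < 0.5 indicates antipersistence). The sketch below uses an ordinary Brownian path, for which 2H ≈ 1; the self-potential data themselves are not reproduced.

```python
# Second-order structure function (variogram) of a time series and the scaling
# exponent of its power-law range. Synthetic data stand in for the self-potential records.
import numpy as np

def structure_function(x, lags):
    x = np.asarray(x, dtype=float)
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

rng = np.random.default_rng(9)
x = np.cumsum(rng.standard_normal(50000))        # ordinary Brownian motion: S2 ~ tau^1

lags = np.unique(np.logspace(0, 3, 30).astype(int))
s2 = structure_function(x, lags)
slope = np.polyfit(np.log(lags), np.log(s2), 1)[0]
print(f"scaling exponent 2H ~ {slope:.2f}  (H ~ {slope / 2:.2f}; H < 0.5 means antipersistent)")
```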

  15. Using Derivative Estimates to Describe Intraindividual Variability at Multiple Time Scales

    Science.gov (United States)

    Deboeck, Pascal R.; Montpetit, Mignon A.; Bergeman, C. S.; Boker, Steven M.

    2009-01-01

    The study of intraindividual variability is central to the study of individuals in psychology. Previous research has related the variance observed in repeated measurements (time series) of individuals to traitlike measures that are logically related. Intraindividual measures, such as intraindividual standard deviation or the coefficient of…

  16. Relation between sick leave and selected exposure variables among women semiconductor workers in Malaysia

    Science.gov (United States)

    Chee, H; Rampal, K

    2003-01-01

    Aims: To determine the relation between sick leave and selected exposure variables among women semiconductor workers. Methods: This was a cross sectional survey of production workers from 18 semiconductor factories. Those selected had to be women, direct production operators up to the level of line leader, and Malaysian citizens. Sick leave and exposure to physical and chemical hazards were determined by self reporting. Three sick leave variables were used; number of sick leave days taken in the past year was the variable of interest in logistic regression models where the effects of age, marital status, work task, work schedule, work section, and duration of work in factory and work section were also explored. Results: Marital status was strongly linked to the taking of sick leave. Age, work schedule, and duration of work in the factory were significant confounders only in certain cases. After adjusting for these confounders, chemical and physical exposures, with the exception of poor ventilation and smelling chemicals, showed no significant relation to the taking of sick leave within the past year. Work section was a good predictor for taking sick leave, as wafer polishing workers faced higher odds of taking sick leave for each of the three cut off points of seven days, three days, and not at all, while parts assembly workers also faced significantly higher odds of taking sick leave. Conclusion: In Malaysia, the wafer fabrication factories only carry out a limited portion of the work processes, in particular, wafer polishing and the processes immediately prior to and following it. This study, in showing higher illness rates for workers in wafer polishing compared to semiconductor assembly, has implications for the governmental policy of encouraging the setting up of wafer fabrication plants with the full range of work processes. PMID:12660374

  17. An Examination of Program Selection Criteria for Part-Time MBA Students

    Science.gov (United States)

    Colburn, Michael; Fox, Daniel E.; Westerfelt, Debra Kay

    2011-01-01

    Prospective graduate students select a graduate program as a result of a multifaceted decision-making process. This study examines the selection criteria that part-time MBA students used in selecting a program at a private university. Further, it analyzes the methods by which the students first learned of the MBA program. The authors posed the…

  18. Variable dead time counters: 2. A computer simulation

    International Nuclear Information System (INIS)

    Hooton, B.W.; Lees, E.W.

    1980-09-01

    A computer model has been developed to give a pulse train which simulates that generated by a variable dead time counter (VDC) used in safeguards determination of Pu mass. The model is applied to two algorithms generally used for VDC analysis. It is used to determine their limitations at high counting rates and to investigate the effects of random neutrons from (α,n) reactions. Both algorithms are found to be deficient for use with masses of 240Pu greater than 100 g, and one commonly used algorithm is shown, by use of the model and also by theory, to yield a result which is dependent on the random neutron intensity. (author)

  19. Utilizing Response Time Distributions for Item Selection in CAT

    Science.gov (United States)

    Fan, Zhewen; Wang, Chun; Chang, Hua-Hua; Douglas, Jeffrey

    2012-01-01

    Traditional methods for item selection in computerized adaptive testing only focus on item information without taking into consideration the time required to answer an item. As a result, some examinees may receive a set of items that take a very long time to finish, and information is not accrued as efficiently as possible. The authors propose two…

  20. Persistent Performance of Fund Managers: An Analysis of Selection and Timing Skills

    Directory of Open Access Journals (Sweden)

    Bilal Ahmad Pandow

    2017-11-01

    The persistence of a manager's ability to select stocks and to time risk factors is a vital issue for assessing the performance of any asset management company. Whether a fund manager who is successful today will be able to sustain that performance in the future is a matter of concern to investors and other stakeholders. Beyond the stock-picking ability of fund managers, one would like to know whether there is consistency in selectivity and timing performance. If a fund manager is able to deliver better performance consistently, i.e. quarter after quarter or year after year, then the manager's performance in selecting the right stocks for the portfolio would be considered satisfactory. This paper analyzes the persistence of both the stock selection and the timing performance of mutual fund managers in India using the Henriksson & Merton, Jensen, and Fama models over a period of five years. It is found that the fund managers show persistence in selection skills; however, the sample funds have not shown progressive timing skills in the Indian context.
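
    Under the Henriksson-Merton specification cited above, selection skill appears as the regression intercept (alpha) and timing skill as the coefficient on the option-like term max(0, -excess market return); persistence is then assessed by re-estimating these coefficients period by period. A minimal sketch with simulated monthly excess returns (not the Indian fund data used in the paper):

```python
# Henriksson-Merton timing regression: excess fund return on excess market return and
# max(0, -excess market return). Alpha captures selection skill, gamma captures timing.
# Returns here are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(10)
n_months = 60
mkt = rng.normal(0.01, 0.04, n_months)            # excess market return
fund = 0.002 + 0.9 * mkt + 0.3 * np.maximum(0.0, -mkt) + rng.normal(0, 0.01, n_months)

X = np.column_stack([np.ones(n_months), mkt, np.maximum(0.0, -mkt)])
alpha, beta, gamma = np.linalg.lstsq(X, fund, rcond=None)[0]
print(f"selection (alpha) = {alpha:.4f}, beta = {beta:.2f}, timing (gamma) = {gamma:.2f}")
```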

  1. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, denoted Tc) of a double-precision computation of a variable-parameter logistic map (VPLM) is studied. First, using the proposed method, reliable solutions for the logistic map are obtained. Second, for a VPLM with time-dependent, non-stationary parameters, 10000 samples of reliable experiments are constructed and the mean Tc is computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...
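
    The notion of a reliable computation time can be illustrated by running the same variable-parameter logistic map in double precision and at much higher precision and recording when the two trajectories diverge; mpmath supplies the high-precision reference here. The parameter drift mu(n) and the divergence threshold below are arbitrary choices for the sketch, not the paper's.

```python
# Illustration of "reliable computation time": iterate a variable-parameter logistic
# map in float64 and at 60-digit precision, and record when the two visibly diverge.
from mpmath import mp, mpf

mp.dps = 60                                        # 60 significant digits as the reference

def mu(n):
    return 3.8 + 0.15 * n / 20000                  # slow, arbitrary drift of the parameter

x64, xhp = 0.1, mpf("0.1")
for n in range(20000):
    m = mu(n)
    x64 = m * x64 * (1.0 - x64)                    # double-precision iterate
    xhp = mpf(m) * xhp * (1 - xhp)                 # high-precision iterate
    if abs(x64 - float(xhp)) > 1e-3:
        print(f"double precision stops being reliable after ~{n} iterations")
        break
else:
    print("no divergence beyond 1e-3 within 20000 iterations")
```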

  2. Synthesis of Biochemical Applications on Digital Microfluidic Biochips with Operation Execution Time Variability

    DEFF Research Database (Denmark)

    Alistar, Mirela; Pop, Paul

    2015-01-01

    ... approaches have been proposed for the synthesis of digital microfluidic biochips, which, starting from a biochemical application and a given biochip architecture, determine the allocation, resource binding, scheduling, placement and routing of the operations in the application. Researchers have assumed that each biochemical operation in an application is characterized by a worst-case execution time (wcet). However, during the execution of the application, due to variability and randomness in biochemical reactions, operations may finish earlier than their wcets, resulting in unexploited slack in the schedule. In this paper, we first propose an online synthesis strategy that re-synthesizes the application at runtime when operations experience variability in their execution time, thus exploiting the slack to obtain shorter application completion times. We also propose a quasi-static synthesis strategy...

  3. Variable Circular Collimator in Robotic Radiosurgery: A Time-Efficient Alternative to a Mini-Multileaf Collimator?

    International Nuclear Information System (INIS)

    Water, Steven van de; Hoogeman, Mischa S.; Breedveld, Sebastiaan; Nuyttens, Joost J.M.E.; Schaart, Dennis R.; Heijmen, Ben J.M.

    2011-01-01

    Purpose: Compared with many small circular beams used in CyberKnife treatments, beam's eye view-shaped fields are generally more time-efficient for dose delivery. However, beam's eye view-shaping devices, such as a mini-multileaf collimator (mMLC), are not presently available for CyberKnife, although a variable-aperture collimator (Iris, 12 field diameters; 5-60 mm) is available. We investigated whether the Iris can mimic noncoplanar mMLC treatments using a limited set of principal beam orientations (nodes) to produce time-efficient treatment plans. Methods and Materials: The data from 10 lung cancer patients and the beam-orientation optimization algorithm 'Cycle' were used to generate stereotactic treatment plans (3 x 20 Gy) for a CyberKnife virtually equipped with an mMLC. Typically, 10-16 favorable beam orientations were selected from 117 available robot node positions using beam's eye view-shaped fields with uniform fluence. Second, intensity-modulated Iris plans were generated by inverse optimization of nonisocentric circular candidate beams targeted from the same nodes selected in the mMLC plans. The plans were evaluated using the mean lung dose, lung volume receiving ≥20 Gy, conformality index, number of nodes, beams, and monitor units, and estimated treatment time. Results: The mMLC plans contained an average of 12 nodes and 11,690 monitor units. For a comparable mean lung dose, the Iris plans contained 12 nodes, 64 beams, and 21,990 monitor units. The estimated fraction duration was 12.2 min (range, 10.8-13.5) for the mMLC plans and 18.4 min (range, 12.9-28.5) for the Iris plans. In contrast to the mMLC plans, the treatment time for the Iris plans increased with an increasing target volume. The Iris plans were, on average, 40% longer than the corresponding mMLC plans for small targets and ≤121% longer for larger targets. For a comparable conformality index, similar results were obtained. Conclusion: For stereotactic lung irradiation, time

  4. Time and space variability of spectral estimates of atmospheric pressure

    Science.gov (United States)

    Canavero, Flavio G.; Einaudi, Franco

    1987-01-01

    The temporal and spatial behaviors of atmospheric pressure spectra over northern Italy and the Alpine massif were analyzed using surface pressure measurements carried out at two microbarograph stations in the Po Valley, one 50 km south of the Alps, the other in the foothills of the Dolomites. The first 15 days of the study overlapped with the Alpex Intensive Observation Period. The pressure records were found to be intrinsically nonstationary and to display substantial time variability, implying that the statistical moments depend on time. The shape and energy content of the spectra depended on the time segment considered. In addition, important differences existed between spectra obtained at the two stations, indicating a substantial effect of topography, particularly for periods less than 40 min.

  5. Input variable selection for interpolating high-resolution climate ...

    African Journals Online (AJOL)

    Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...

  6. Modeling seasonality in bimonthly time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1992-01-01

    A recurring issue in modeling seasonal time series variables is the choice of the most adequate model for the seasonal movements. One selection method for quarterly data is proposed in Hylleberg et al. (1990). Market response models are often constructed for bimonthly variables, and

  7. Energy-efficient relay selection and optimal power allocation for performance-constrained dual-hop variable-gain AF relaying

    KAUST Repository

    Zafar, Ammar

    2013-12-01

    This paper investigates the energy-efficiency enhancement of a variable-gain dual-hop amplify-and-forward (AF) relay network utilizing selective relaying. The objective is to minimize the total consumed power while keeping the end-to-end signal-to-noise-ratio (SNR) above a certain peak value and satisfying the peak power constraints at the source and relay nodes. To achieve this objective, an optimal relay selection and power allocation strategy is derived by solving the power minimization problem. Numerical results show that the derived optimal strategy enhances the energy-efficiency as compared to a benchmark scheme in which both the source and the selected relay transmit at peak power. © 2013 IEEE.

  8. Variable mechanical ventilation.

    Science.gov (United States)

    Fontela, Paula Caitano; Prestes, Renata Bernardy; Forgiarini, Luiz Alberto; Friedman, Gilberto

    2017-01-01

    To review the literature on the use of variable mechanical ventilation and the main outcomes of this technique. Search, selection, and analysis of all original articles on variable ventilation, without restriction on the period of publication and language, available in the electronic databases LILACS, MEDLINE®, and PubMed, by searching the terms "variable ventilation" OR "noisy ventilation" OR "biologically variable ventilation". A total of 36 studies were selected. Of these, 24 were original studies, including 21 experimental studies and three clinical studies. Several experimental studies reported the beneficial effects of distinct variable ventilation strategies on lung function using different models of lung injury and healthy lungs. Variable ventilation seems to be a viable strategy for improving gas exchange and respiratory mechanics and preventing lung injury associated with mechanical ventilation. However, further clinical studies are necessary to assess the potential of variable ventilation strategies for the clinical improvement of patients undergoing mechanical ventilation.

  9. Relay selection in cooperative communication systems over continuous time-varying fading channel

    Directory of Open Access Journals (Sweden)

    Ke Geng

    2017-02-01

    In this paper, we study relay selection under outdated channel state information (CSI) in a decode-and-forward (DF) cooperative system. Unlike previous research on cooperative communication under outdated CSI, we consider that the channel varies continuously over time, i.e., the channel not only changes between relay selection and data transmission but also changes during data transmission. Thus the level of accuracy of the CSI used in relay selection degrades with data transmission. We first evaluate the packet error rate (PER) of the cooperative system under continuous time-varying fading channel, and find that the PER performance deteriorates more seriously under continuous time-varying fading channel than when the channel is assumed to be constant during data transmission. Then, we propose a repeated relay selection (RRS) strategy to improve the PER performance, in which the forwarded data is divided into multiple segments and the relay is reselected before the transmission of each segment based on the updated CSI. Finally, we propose a combined relay selection (CRS) strategy which takes advantage of three different relay selection strategies to further mitigate the impact of outdated CSI.

  10. Evaluation of Online Log Variables That Estimate Learners' Time Management in a Korean Online Learning Context

    Science.gov (United States)

    Jo, Il-Hyun; Park, Yeonjeong; Yoon, Meehyun; Sung, Hanall

    2016-01-01

    The purpose of this study was to identify the relationship between the psychological variables and online behavioral patterns of students, collected through a learning management system (LMS). As the psychological variable, time and study environment management (TSEM), one of the sub-constructs of MSLQ, was chosen to verify a set of time-related…

  11. Sex-specific selection for MHC variability in Alpine chamois

    Directory of Open Access Journals (Sweden)

    Schaschl Helmut

    2012-02-01

    Background: In mammals, males typically have shorter lives than females. This difference is thought to be due to behavioural traits which enhance competitive abilities, and hence male reproductive success, but impair survival. Furthermore, in many species males usually show higher parasite burden than females. Consequently, the intensity of selection for genetic factors which reduce susceptibility to pathogens may differ between sexes. High variability at the major histocompatibility complex (MHC) genes is believed to be advantageous for detecting and combating the range of infectious agents present in the environment. Increased heterozygosity at these immune genes is expected to be important for individual longevity. However, whether males in natural populations benefit more from MHC heterozygosity than females has rarely been investigated. We investigated this question in a long-term study of free-living Alpine chamois (Rupicapra rupicapra), a polygynous mountain ungulate. Results: Here we show that male chamois survive significantly (P = 0.022) longer if heterozygous at the MHC class II DRB locus, whereas females do not. Improved survival of males was not a result of heterozygote advantage per se, as background heterozygosity (estimated across twelve microsatellite loci) did not change significantly with age. Furthermore, reproductively active males depleted their body fat reserves earlier than females, leading to significantly impaired survival rates in this sex. Conclusions: Increased MHC class II DRB heterozygosity with age in males suggests that MHC heterozygous males survive longer than homozygotes. Reproductively active males appear to be less likely to survive than females, most likely because of the energetic challenge of the winter rut, accompanied by earlier depletion of their body fat stores, and a generally higher parasite burden. This scenario renders the MHC-mediated immune response more important for males than for females

  12. Methodological development for selection of significant predictors explaining fatal road accidents.

    Science.gov (United States)

    Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco

    2016-05-01

    Identification of the most relevant factors for explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose is still an ongoing research question. In this paper we propose a methodological development for model selection which addresses both explanatory variable and adequate model selection issues. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov Chain Monte Carlo method, where the model parameters are assigned non-informative prior distributions. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator has experienced the maximum reduction internationally during the indicated years, thus making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures. Published by Elsevier Ltd.

  13. Quantifying selection in evolving populations using time-resolved genetic data

    Science.gov (United States)

    Illingworth, Christopher J. R.; Mustonen, Ville

    2013-01-01

    Methods which uncover the molecular basis of the adaptive evolution of a population address some important biological questions. For example, the problem of identifying genetic variants which underlie drug resistance, a question of importance for the treatment of pathogens, and of cancer, can be understood as a matter of inferring selection. One difficulty in the inference of variants under positive selection is the potential complexity of the underlying evolutionary dynamics, which may involve an interplay between several contributing processes, including mutation, recombination and genetic drift. A source of progress may be found in modern sequencing technologies, which confer an increasing ability to gather information about evolving populations, granting a window into these complex processes. One particularly interesting development is the ability to follow evolution as it happens, by whole-genome sequencing of an evolving population at multiple time points. We here discuss how to use time-resolved sequence data to draw inferences about the evolutionary dynamics of a population under study. We begin by reviewing our earlier analysis of a yeast selection experiment, in which we used a deterministic evolutionary framework to identify alleles under selection for heat tolerance, and to quantify the selection acting upon them. Considering further the use of advanced intercross lines to measure selection, we here extend this framework to cover scenarios of simultaneous recombination and selection, and of two driver alleles with multiple linked neutral, or passenger, alleles, where the driver pair evolves under an epistatic fitness landscape. We conclude by discussing the limitations of the approach presented and outlining future challenges for such methodologies.
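
    The deterministic evolutionary framework mentioned in this record is not reproduced here; as an illustration only, the sketch below uses the standard haploid selection model, in which the logit of an allele frequency changes linearly at a rate equal to the selection coefficient. The function names and the example frequencies are hypothetical and are not taken from the yeast experiment.

```python
import math

def logit(x):
    return math.log(x / (1.0 - x))

def estimate_s(x0, xt, generations):
    """Selection coefficient under the deterministic haploid model
    x(t) = x0*exp(s*t) / (1 - x0 + x0*exp(s*t)), where logit(x) grows
    linearly in time with slope s."""
    return (logit(xt) - logit(x0)) / generations

def trajectory(x0, s, generations):
    """Deterministic allele-frequency trajectory under constant selection."""
    return [x0 * math.exp(s * t) / (1.0 - x0 + x0 * math.exp(s * t))
            for t in range(generations + 1)]

# Hypothetical example: an allele rising from 5% to 60% over 40 generations
s_hat = estimate_s(0.05, 0.60, 40)
print(f"estimated s per generation: {s_hat:.3f}")                    # about 0.084
print(f"frequency at t = 40: {trajectory(0.05, s_hat, 40)[-1]:.2f}")  # about 0.60
```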

  14. Genetic and Psychosocial Predictors of Aggression: Variable Selection and Model Building With Component-Wise Gradient Boosting

    Directory of Open Access Journals (Sweden)

    Robert Suchting

    2018-05-01

    Rationale: Given datasets with a large or diverse set of predictors of aggression, machine learning (ML) provides efficient tools for identifying the most salient variables and building a parsimonious statistical model. ML techniques permit efficient exploration of data, have not been widely used in aggression research, and may have utility for those seeking prediction of aggressive behavior. Objectives: The present study examined predictors of aggression and constructed an optimized model using ML techniques. Predictors were derived from a dataset that included demographic, psychometric and genetic predictors, specifically FK506 binding protein 5 (FKBP5) polymorphisms, which have been shown to alter response to threatening stimuli, but have not been tested as predictors of aggressive behavior in adults. Methods: The data analysis approach utilized component-wise gradient boosting and model reduction via backward elimination to: (a) select variables from an initial set of 20 to build a model of trait aggression; and then (b) reduce that model to maximize parsimony and generalizability. Results: From a dataset of N = 47 participants, component-wise gradient boosting selected 8 of 20 possible predictors to model Buss-Perry Aggression Questionnaire (BPAQ) total score, with R2 = 0.66. This model was simplified using backward elimination, retaining six predictors: smoking status, psychopathy (interpersonal manipulation and callous affect), childhood trauma (physical abuse and neglect), and the FKBP5_13 gene (rs1360780). The six-factor model approximated the initial eight-factor model at 99.4% of R2. Conclusions: Using an inductive data science approach, the gradient boosting model identified predictors consistent with previous experimental work in aggression; specifically psychopathy and trauma exposure. Additionally, allelic variants in FKBP5 were identified for the first time, but the relatively small sample size limits generality of results and calls for
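
    The analysis described above was carried out with component-wise gradient boosting (typically the R package mboost); the Python sketch below is only a rough stand-in that follows the same two-step logic: rank candidate predictors with a boosted model, then prune by backward elimination while cross-validated R2 is approximately preserved. The synthetic data, the top-8 cut-off and the 0.005 tolerance are assumptions for illustration, not values from the study.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for N = 47 participants and 20 candidate predictors
X, y = make_regression(n_samples=47, n_features=20, n_informative=6,
                       noise=10.0, random_state=0)

def cv_r2(features):
    model = GradientBoostingRegressor(random_state=0)
    return cross_val_score(model, X[:, features], y, cv=5, scoring="r2").mean()

# Step 1: boosting-based ranking of the 20 candidate predictors
ranker = GradientBoostingRegressor(random_state=0).fit(X, y)
selected = list(np.argsort(ranker.feature_importances_)[::-1][:8])  # keep top 8

# Step 2: backward elimination -- drop a predictor whenever the
# cross-validated R^2 barely changes
score = cv_r2(selected)
improved = True
while improved and len(selected) > 1:
    improved = False
    for f in list(selected):
        candidate = [g for g in selected if g != f]
        s = cv_r2(candidate)
        if s >= score - 0.005:          # tolerate a negligible loss in fit
            selected, score, improved = candidate, s, True
            break

print("retained predictors:", selected, "cv R2:", round(score, 3))
```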

  15. Age and Sex Differences in Intra-Individual Variability in a Simple Reaction Time Task

    Science.gov (United States)

    Ghisletta, Paolo; Renaud, Olivier; Fagot, Delphine; Lecerf, Thierry; de Ribaupierre, Anik

    2018-01-01

    While age effects in reaction time (RT) tasks across the lifespan are well established for level of performance, analogous findings have started appearing also for indicators of intra-individual variability (IIV). Children are not only slower, but also display more variability than younger adults in RT. Yet, little is known about potential…

  16. Lyapunov-based constrained engine torque control using electronic throttle and variable cam timing

    NARCIS (Netherlands)

    Feru, E.; Lazar, M.; Gielen, R.H.; Kolmanovsky, I.V.; Di Cairano, S.

    2012-01-01

    In this paper, predictive control of a spark ignition engine equipped with an electronic throttle and a variable cam timing actuator is considered. The objective is to adjust the throttle angle and the engine cam timing in order to reduce the exhaust gas emissions while maintaining fast and

  17. A study of applying variable valve timing to highly rated diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Stone, C R; Leonard, H J [comps.; Brunel Univ., Uxbridge (United Kingdom); Charlton, S J [comp.; Bath Univ. (United Kingdom)

    1992-10-01

    The main objective of the research was to use Simulation Program for Internal Combustion Engines (SPICE) to quantify the potential offered by Variable Valve Timing (VVT) in improving engine performance. A model has been constructed of a particular engine using SPICE. The model has been validated with experimental data, and it has been shown that accurate predictions are made when the valve timing is changed. (author)

  18. IRAS variables as galactic structure tracers - Classification of the bright variables

    Science.gov (United States)

    Allen, L. E.; Kleinmann, S. G.; Weinberg, M. D.

    1993-01-01

    The characteristics of the 'bright infrared variables' (BIRVs), a sample consisting of the 300 brightest stars in the IRAS Point Source Catalog with an IRAS variability index VAR of 98 or greater, are investigated with the purpose of establishing which of the IRAS variables are AGB stars (e.g., oxygen-rich Miras and carbon stars, as was assumed by Weinberg (1992)). Results of the analysis of optical, infrared, and microwave spectroscopy of these stars indicate that, out of 88 stars in the BIRV sample identified with cataloged variables, 86 can be classified as Miras. Results of a similar analysis performed for a color-selected sample of stars, using the color limits employed by Habing (1988) to select AGB stars, showed that, out of the 52 percent of stars that could be classified, 38 percent are non-AGB stars, including H II regions, planetary nebulae, supergiants, and young stellar objects, indicating that studies using color-selected samples are subject to misinterpretation.

  19. Repeat what after whom? Exploring variable selectivity in a cross-dialectal shadowing task.

    Directory of Open Access Journals (Sweden)

    Abby eWalker

    2015-05-01

    Twenty women from Christchurch, New Zealand, and sixteen from Columbus, Ohio (dialect region U.S. Midland) participated in a bimodal lexical naming task where they repeated monosyllabic words after four speakers from four regional dialects: New Zealand, Australia, U.S. Inland North and U.S. Midland. The resulting utterances were acoustically analyzed, and presented to listeners on Amazon Mechanical Turk in an AXB task. Convergence is observed, but differs depending on the dialect of the speaker, the dialect of the model, the particular word class being shadowed, and the order in which dialects are presented to participants. We argue that these patterns are generally consistent with findings that convergence is promoted by a large phonetic distance between shadower and model (Babel, 2010; contra Kim, Horton & Bradlow, 2011) and greater existing variability in a vowel class (Babel, 2012). The results also suggest that more comparisons of accommodation towards different dialects are warranted, and that the investigation of the socio-indexical meaning of specific linguistic forms in context is a promising avenue for understanding variable selectivity in convergence.

  20. gamboostLSS: An R Package for Model Building and Variable Selection in the GAMLSS Framework

    OpenAIRE

    Hofner, Benjamin; Mayr, Andreas; Schmid, Matthias

    2014-01-01

    Generalized additive models for location, scale and shape are a flexible class of regression models that allow multiple parameters of a distribution function, such as the mean and the standard deviation, to be modeled simultaneously. With the R package gamboostLSS, we provide a boosting method to fit these models. Variable selection and model choice are naturally available within this regularized regression framework. To introduce and illustrate the R package gamboostLSS and its infrastructure, we...

  1. Selective dopamine D3 receptor antagonism by SB-277011A attenuates cocaine reinforcement as assessed by progressive-ratio and variable-cost–variable-payoff fixed-ratio cocaine self-administration in rats

    OpenAIRE

    Xi, Zheng-Xiong; Gilbert, Jeremy G.; Pak, Arlene C.; Ashby, Charles R.; Heidbreder, Christian A.; Gardner, Eliot L.

    2005-01-01

    In rats, acute administration of SB-277011A, a highly selective dopamine (DA) D3 receptor antagonist, blocks cocaine-enhanced brain stimulation reward, cocaine-seeking behaviour and reinstatement of cocaine-seeking behaviour. Here, we investigated whether SB-277011A attenuates cocaine reinforcement as assessed by cocaine self-administration under variable-cost–variable-payoff fixed-ratio (FR) and progressive-ratio (PR) reinforcement schedules. Acute i.p. administration of SB-277011A (3–24 mg/...

  2. Predictor variables for half marathon race time in recreational female runners

    Directory of Open Access Journals (Sweden)

    Beat Knechtle

    2011-01-01

    INTRODUCTION: The relationship between skin-fold thickness and running performance has been investigated from 100 m to the marathon distance, except the half marathon distance. OBJECTIVE: To investigate whether anthropometry characteristics or training practices were related to race time in 42 recreational female half marathoners to determine the predictor variables of half-marathon race time and to inform future novice female half marathoners. METHODS: Observational field study at the 'Half Marathon Basel' in Switzerland. RESULTS: In the bivariate analysis, body mass (r = 0.60), body mass index (r = 0.48), body fat (r = 0.56), skin-fold at pectoral (r = 0.61), mid-axilla (r = 0.69), triceps (r = 0.49), subscapular (r = 0.61), abdominal (r = 0.59), suprailiac (r = 0.55), medial calf (r = 0.53) site, and speed of the training sessions (r = -0.68) correlated to race time. Mid-axilla skin-fold (p = 0.04) and speed of the training sessions (p = 0.0001) remained significant after multi-variate analysis. Race time in a half marathon might be predicted by the following equation (r² = 0.71): Race time (min) = 166.7 + 1.7x (mid-axilla skin-fold, mm) - 6.4x (speed in training, km/h). Running speed during training was related to skinfold thickness at mid-axilla (r = -0.31), subscapular (r = -0.38), abdominal (r = -0.44), suprailiacal (r = -0.41), the sum of eight skin-folds (r = -0.36) and percent body fat (r = -0.31). CONCLUSION: Anthropometric and training variables were related to half-marathon race time in recreational female runners. Skin-fold thicknesses at various upper body locations were related to training intensity. High running speed in training appears to be important for fast half-marathon race times and may reduce upper body skin-fold thicknesses in recreational female half marathoners.

  3. Predictor variables for half marathon race time in recreational female runners.

    Science.gov (United States)

    Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rosemann, Thomas; Lepers, Romuald

    2011-01-01

    The relationship between skin-fold thickness and running performance has been investigated from 100 m to the marathon distance, except the half marathon distance. To investigate whether anthropometry characteristics or training practices were related to race time in 42 recreational female half marathoners to determine the predictor variables of half-marathon race time and to inform future novice female half marathoners. Observational field study at the 'Half Marathon Basel' in Switzerland. In the bivariate analysis, body mass (r = 0.60), body mass index (r = 0.48), body fat (r = 0.56), skin-fold at pectoral (r = 0.61), mid-axilla (r = 0.69), triceps (r = 0.49), subscapular (r = 0.61), abdominal (r = 0.59), suprailiac (r = 0.55) medial calf (r = 0.53) site, and speed of the training sessions (r = -0.68) correlated to race time. Mid-axilla skin-fold (p = 0.04) and speed of the training sessions (p = 0.0001) remained significant after multi-variate analysis. Race time in a half marathon might be predicted by the following equation (r² = 0.71): Race time (min) = 166.7 + 1.7x (mid-axilla skin-fold, mm) - 6.4x (speed in training, km/h). Running speed during training was related to skinfold thickness at mid-axilla (r = -0.31), subscapular (r = -0.38), abdominal (r = -0.44), suprailiacal (r = -0.41), the sum of eight skin-folds (r = -0.36) and percent body fat (r = -0.31). Anthropometric and training variables were related to half-marathon race time in recreational female runners. Skin-fold thicknesses at various upper body locations were related to training intensity. High running speed in training appears to be important for fast half-marathon race times and may reduce upper body skin-fold thicknesses in recreational female half marathoners.
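
    The prediction equation reported in these two records can be applied directly. The small helper below simply evaluates it; the example skin-fold and training-speed values are hypothetical.

```python
def predicted_half_marathon_time(mid_axilla_skinfold_mm: float,
                                 training_speed_kmh: float) -> float:
    """Race time (min) = 166.7 + 1.7 * mid-axilla skin-fold (mm)
                         - 6.4 * mean training speed (km/h),
    as reported for recreational female runners (r^2 = 0.71)."""
    return 166.7 + 1.7 * mid_axilla_skinfold_mm - 6.4 * training_speed_kmh

# Hypothetical runner: 12 mm mid-axilla skin-fold, trains at 10.5 km/h
print(round(predicted_half_marathon_time(12.0, 10.5), 1), "min")  # ~119.9 min
```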

  4. Real-time variables dictionary (RTVD), and expert system for development of real-time applications in nuclear power plants

    International Nuclear Information System (INIS)

    Senra Martinez, A.; Schirru, R.; Dutra Thome Filho, Z.

    1990-01-01

    This paper presents a computerized methodology based on a data dictionary managed by an expert system called the Real-Time Variables Dictionary (RTVD). This system is very useful for the development of real-time applications in nuclear power plants. The RTVD functions and its implementation on a VAX 8600 computer are described in detail. The concepts of artificial intelligence used in the RTVD are also pointed out.

  5. Ultrahigh Dimensional Variable Selection for Interpolation of Point Referenced Spatial Data: A Digital Soil Mapping Case Study

    Science.gov (United States)

    Lamb, David W.; Mengersen, Kerrie

    2016-01-01

    Modern soil mapping is characterised by the need to interpolate point referenced (geostatistical) observations and the availability of large numbers of environmental characteristics for consideration as covariates to aid this interpolation. Modelling tasks of this nature also occur in other fields such as biogeography and environmental science. This analysis employs the Least Angle Regression (LAR) algorithm for fitting Least Absolute Shrinkage and Selection Operator (LASSO) penalized Multiple Linear Regression models. This analysis demonstrates the efficiency of the LAR algorithm at selecting covariates to aid the interpolation of geostatistical soil carbon observations. Where an exhaustive search of the models that could be constructed from 800 potential covariate terms and 60 observations would be prohibitively demanding, LASSO variable selection is accomplished with trivial computational investment. PMID:27603135
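
    A minimal sketch of the selection step described above, using scikit-learn's LARS-based LASSO with a cross-validated choice of penalty; the synthetic matrix merely stands in for the roughly 800 covariate terms and 60 soil carbon observations mentioned in the abstract.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoLarsCV

# Synthetic stand-in: 60 observations, 800 candidate covariates
X, y = make_regression(n_samples=60, n_features=800, n_informative=10,
                       noise=5.0, random_state=1)

# LassoLarsCV runs the LARS path and chooses the penalty by cross-validation
model = LassoLarsCV(cv=5).fit(X, y)

selected = np.flatnonzero(model.coef_)   # covariates with non-zero coefficients
print(f"{selected.size} covariates retained out of {X.shape[1]}")
print("chosen penalty alpha:", model.alpha_)
```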

  6. Impact of Psychological Variables on Playing Ability of University Level Soccer Players

    Directory of Open Access Journals (Sweden)

    Ertan Tufekcioglu

    2014-10-01

    The purpose of the study was to find out the relationship between psychological variables and soccer playing ability among university level male players. 42 soccer players representing different universities who participated in inter-university competitions were selected as the subjects of the study. The dependent variable was soccer playing ability and the independent variables were the selected psychological variables. Soccer playing ability was determined through a 10 point scale at the time of competitions. Psychological variables included achievement motivation, anxiety, self-concept and aggression. The data were statistically analyzed using Karl Pearson’s correlation coefficient and multiple regression analysis using SPSS. It was concluded that soccer playing ability has a positive correlation with achievement motivation and self-concept, whereas anxiety and aggression have a negative correlation with soccer playing ability.

  7. Trip Travel Time Forecasting Based on Selective Forgetting Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Zhiming Gui

    2014-01-01

    Travel time estimation on road networks is a valuable traffic metric. In this paper, we propose a machine learning based method for trip travel time estimation in road networks. The method uses historical trip information extracted from taxi trace data as the training data. An optimized online sequential extreme learning machine, the selective forgetting extreme learning machine, is adopted to make the prediction. Its selective forgetting learning ability enables the prediction algorithm to adapt well to changes in trip conditions. Experimental results using real-life taxi trace data show that the forecasting model provides an effective and practical way for travel time forecasting.
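
    The selective forgetting mechanism itself is not detailed in this record; as a hedged illustration of the model family it builds on, the sketch below implements a plain batch extreme learning machine regressor (random hidden layer plus regularized least-squares output weights). The trip features and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELMRegressor:
    """Basic extreme learning machine: fixed random hidden layer, closed-form
    (ridge-regularized) least-squares fit of the output weights."""

    def __init__(self, n_hidden=50):
        self.n_hidden = n_hidden

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)        # random feature map

    def fit(self, X, y):
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(self.n_hidden),
                                    H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy trip features: [trip distance (km), departure hour, weekday flag]
X = rng.uniform([1, 0, 0], [30, 24, 1], size=(500, 3))
y = 2.0 * X[:, 0] + 5.0 * np.sin(X[:, 1] / 24 * 2 * np.pi) + rng.normal(0, 1, 500)

model = ELMRegressor(n_hidden=50).fit(X, y)
print("predicted travel time (min):", round(float(model.predict(X[:1])[0]), 1))
```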

  8. Selective logging: do rates of forest turnover in stems, species composition and functional traits decrease with time since disturbance? – A 45 year perspective

    Science.gov (United States)

    Osazuwa-Peters, Oyomoare L.; Jiménez, Iván; Oberle, Brad; Chapman, Colin A.; Zanne, Amy E.

    2015-01-01

    Selective logging, the targeted harvesting of timber trees in a single cutting cycle, is globally rising in extent and intensity. Short-term impacts of selective logging on tropical forests have been widely investigated, but long-term effects on temporal dynamics of forest structure and composition are largely unknown. Understanding these long-term dynamics will help determine whether tropical forests are resilient to selective logging and inform choices between competing demands of anthropogenic use versus conservation of tropical forests. Forest dynamics can be studied within the framework of succession theory, which predicts that temporal turnover rates should decline with time since disturbance. Here, we investigated the temporal dynamics of a tropical forest in Kibale National Park, Uganda, over 45 years following selective logging. We estimated turnover rates in stems, species composition, and functional traits (wood density and diameter at breast height), using observations from four censuses in 1989, 1999, 2006, and 2013, of stems ≥ 10 cm diameter within 17 unlogged and 9 logged 200 × 10 m vegetation plots. We used null models to account for interdependencies among turnover rates in stems, species composition, and functional traits. We tested predictions that turnover rates should be higher and decrease with increasing time since the selective logging event in logged forest, but should be less temporally variable in unlogged forest. Overall, we found higher turnover rates in logged forest for all three attributes, but turnover rates did not decline through time in logged forest and were not less temporally variable in unlogged forest. These results indicate that successional models that assume recovery to pre-disturbance conditions are inadequate for predicting the effects of selective logging on the dynamics of the tropical forest in Kibale. Selective logging resulted in persistently higher turnover rates, which may compromise the carbon storage capacity

  9. Effect of individual thinking styles on item selection during study time allocation.

    Science.gov (United States)

    Jia, Xiaoyu; Li, Weijian; Cao, Liren; Li, Ping; Shi, Meiling; Wang, Jingjing; Cao, Wei; Li, Xinyu

    2018-04-01

    The influence of individual differences on learners' study time allocation has been emphasised in recent studies; however, little is known about the role of individual thinking styles (analytical versus intuitive). In the present study, we explored the influence of individual thinking styles on learners' application of agenda-based and habitual processes when selecting the first item during a study-time allocation task. A 3-item cognitive reflection test (CRT) was used to determine individuals' degree of cognitive reliance on intuitive versus analytical cognitive processing. Significant correlations between CRT scores and first item selection were observed in both Experiment 1a (study time was 5 seconds per triplet) and Experiment 1b (study time was 20 seconds per triplet). Furthermore, analytical decision makers constructed a value-based agenda (prioritised high-reward items), whereas intuitive decision makers relied more upon habitual responding (selected items from the leftmost of the array). The findings of Experiment 1a were replicated in Experiment 2, even after ruling out possible effects of individual intelligence and working memory capacity. Overall, individual thinking style plays an important role in learners' study time allocation, and the CRT is a reliable predictor of learners' item selection strategy. © 2016 International Union of Psychological Science.

  10. Start time variability and predictability in railroad train and engine freight and passenger service employees.

    Science.gov (United States)

    2014-04-01

    Start time variability in work schedules is often hypothesized to be a cause of railroad employee fatigue because unpredictable work start times prevent employees from planning sleep and personal activities. This report examines work start time diffe...

  11. Birth order and selected work-related personality variables.

    Science.gov (United States)

    Phillips, A S; Bedeian, A G; Mossholder, K W; Touliatos, J

    1988-12-01

    A possible link between birth order and various individual characteristics (e. g., intelligence, potential eminence, need for achievement, sociability) has been suggested by personality theorists such as Adler for over a century. The present study examines whether birth order is associated with selected personality variables that may be related to various work outcomes. 3 of 7 hypotheses were supported and the effect sizes for these were small. Firstborns scored significantly higher than later borns on measures of dominance, good impression, and achievement via conformity. No differences between firstborns and later borns were found in managerial potential, work orientation, achievement via independence, and sociability. The study's sample consisted of 835 public, government, and industrial accountants responding to a national US survey of accounting professionals. The nature of the sample may have been partially responsible for the results obtained. Its homogeneity may have caused any birth order effects to wash out. It can be argued that successful membership in the accountancy profession requires internalization of a set of prescribed rules and standards. It may be that accountants as a group are locked in to a behavioral framework. Any differentiation would result from spurious interpersonal differences, not from predictable birth-order related characteristics. A final interpretation is that birth order effects are nonexistent or statistical artifacts. Given the present data and particularistic sample, however, the authors have insufficient information from which to draw such a conclusion.

  12. Dissecting Time- from Tumor-Related Gene Expression Variability in Bilateral Breast Cancer

    Directory of Open Access Journals (Sweden)

    Maurizio Callari

    2018-01-01

    Metachronous (MBC) and synchronous bilateral breast tumors (SBC) are mostly distinct primaries, whereas paired primaries and their local recurrences (LRC) share a common origin. Intra-pair gene expression variability in MBC, SBC, and LRC derives from time/tumor microenvironment-related and tumor genetic background-related factors, and pairs represent an ideal model for trying to dissect tumor-related from microenvironment-related variability. Pairs of tumors derived from women with SBC (n = 18), MBC (n = 11), and LRC (n = 10) undergoing local-regional treatment were profiled for gene expression; similarity between pairs was measured using an intraclass correlation coefficient (ICC) computed for each gene and compared using analysis of variance (ANOVA). When considering biologically unselected genes, the highest correlations were found for primaries and paired LRC, and the lowest for MBC pairs. By instead limiting the analysis to the breast cancer intrinsic genes, correlations between primaries and paired LRC were enhanced, while lower similarities were observed for SBC and MBC. Focusing on stromal-related genes, the ICC values decreased for MBC and were significantly different from SBC. These findings indicate that it is possible to dissect intra-pair gene expression variability into components that are associated with genetic origin or with time and microenvironment by using specific gene subsets.

  13. Retention time variability as a mechanism for animal mediated long-distance dispersal.

    Directory of Open Access Journals (Sweden)

    Vishwesha Guttal

    Long-distance dispersal (LDD) events, although rare for most plant species, can strongly influence population and community dynamics. Animals function as a key biotic vector of seeds and thus, a mechanistic and quantitative understanding of how individual animal behaviors scale to dispersal patterns at different spatial scales is a question of critical importance from both basic and applied perspectives. Using a diffusion-theory based analytical approach for a wide range of animal movement and seed transportation patterns, we show that the scale (a measure of local dispersal) of the seed dispersal kernel increases with the organisms' rate of movement and mean seed retention time. We reveal that variation in seed retention time is a key determinant of various measures of LDD such as kurtosis (or shape of the kernel), thickness of tails and the absolute number of seeds falling beyond a threshold distance. Using empirical data sets of frugivores, we illustrate the importance of variability in retention times for predicting the key disperser species that influence LDD. Our study makes testable predictions linking animal movement behaviors and gut retention times to dispersal patterns and, more generally, highlights the potential importance of animal behavioral variability for the LDD of seeds.

  14. Multiscale time irreversibility of heart rate and blood pressure variability during orthostasis

    International Nuclear Information System (INIS)

    Chladekova, L; Czippelova, B; Turianikova, Z; Tonhajzerova, I; Calkovska, A; Javorka, M; Baumert, M

    2012-01-01

    Time irreversibility is a characteristic feature of non-equilibrium, complex systems such as the cardiovascular control mediated by the autonomic nervous system (ANS). Time irreversibility analysis of heart rate variability (HRV) and blood pressure variability (BPV) represents a new approach to assess cardiovascular regulatory mechanisms. The aim of this paper was to assess the changes in HRV and BPV irreversibility during the active orthostatic test (a balance of ANS shifted towards sympathetic predominance) in 28 healthy young subjects. We used three different time irreversibility indices—Porta’s, Guzik's and Ehler's indices (P%, G% and E, respectively) derived from data segments containing 1000 beat-to-beat intervals on four timescales. We observed an increase in the HRV and a decrease in the BPV irreversibility during standing compared to the supine position. The postural change in irreversibility was confirmed by surrogate data analysis. The differences were more evident in G% and E than P% and for higher scale factors. Statistical analysis showed a close relationship between G% and E. Contrary to this, the association between P% and G% and P% and E was not proven. We conclude that time irreversibility of beat-to-beat HRV and BPV is significantly altered during orthostasis, implicating involvement of the autonomous nervous system in its generation. (paper)
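
    The indices named above are standard heart-rate asymmetry measures; the sketch below computes the two most common of them, Porta's P% and Guzik's G%, from a beat-to-beat interval series using their usual definitions (a time-reversible series gives values near 50%). This is an illustration of the definitions, not the authors' implementation, and the multiscale extension (coarse-graining the series before computing the indices) is omitted.

```python
import numpy as np

def porta_index(rr):
    """P%: percentage of negative beat-to-beat differences (heart-rate
    accelerations) among all non-zero differences."""
    d = np.diff(np.asarray(rr, dtype=float))
    d = d[d != 0]
    return 100.0 * np.sum(d < 0) / d.size

def guzik_index(rr):
    """G%: share of the total squared difference magnitude contributed by
    positive (decelerating) beat-to-beat differences."""
    d = np.diff(np.asarray(rr, dtype=float))
    d = d[d != 0]
    return 100.0 * np.sum(d[d > 0] ** 2) / np.sum(d ** 2)

# Hypothetical RR-interval series (ms), 1000 beats as in the study segments
rng = np.random.default_rng(1)
rr = 800 + np.cumsum(rng.normal(0, 5, 1000))
print(f"P% = {porta_index(rr):.1f}, G% = {guzik_index(rr):.1f}")
```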

  15. Analysis of time variable gravity data over Africa

    International Nuclear Information System (INIS)

    Barletta, Valentina R.; Aoudia, Abdelkarim

    2010-01-01

    Africa, in principle, is a unique laboratory in which to address the individual contribution of the different facets of the Earth system as well as their interactions. However, it shows both a rich hydrology that exhibits complex characteristics of rivers and wide basins of different sizes in addition to the hydrology of lakes, and other wetlands and storage reservoirs and groundwater aquifers, and continuous and discontinuous changes in the physical properties of the Earth interior. Stretching and heating processes are accompanied by punctuated episodes of faulting and/or volcanism, and longer-term changes in surface elevation that disrupt river drainage and climate. The GRACE space gravity mission, flying since 2002, was expressly designed to detect the time-dependent gravity field in order to study the hydrological cycle of the Earth, but has also evidenced Solid Earth phenomena such as Post Glacial Rebound (PGR) and the signature of a giant earthquake such as the 2004 Sumatra event. Hence the idea to analyze time variable gravity data over Africa in order to retrieve fingerprints of geophysical phenomena. The exploitation of the GRACE data for geophysics, however, is not straightforward. Indeed, the quality of the signal is not uniform worldwide and gravity is always the superposition of contributions from solid Earth as well as climate-related phenomena, which cannot be easily distinguished, at a first glance, both in time and space. In the present study we show that mass changes cannot be classified simply as trends or periodic signals. We follow an alternative way to separate complementary components, periodic and non-periodic signals, without losing information. We show that the a priori periodic and linear trend fitting function is not everywhere appropriate and in some cases it is even so poor as to result in misinterpreting the data. Variations in long term behavior and periodicities higher than the usual annual (and semi-annual) indeed occur, related to geophysical

  16. Talent identification and selection in elite youth football: An Australian context.

    Science.gov (United States)

    O'Connor, Donna; Larkin, Paul; Mark Williams, A

    2016-10-01

    We identified the perceptual-cognitive skills and player history variables that differentiate players selected or not selected into an elite youth football (i.e. soccer) programme in Australia. A sample of elite youth male football players (n = 127) completed an adapted participation history questionnaire and video-based assessments of perceptual-cognitive skills. Following data collection, 22 of these players were offered a full-time scholarship for enrolment at an elite player residential programme. Participants selected for the scholarship programme recorded superior performance on the combined perceptual-cognitive skills tests compared to the non-selected group. There were no significant between group differences on the player history variables. Stepwise discriminant function analysis identified four predictor variables that resulted in the best categorization of selected and non-selected players (i.e. recent match-play performance, region, number of other sports participated, combined perceptual-cognitive performance). The effectiveness of the discriminant function is reflected by 93.7% of players being correctly classified, with the four variables accounting for 57.6% of the variance. Our discriminating model for selection may provide a greater understanding of the factors that influence elite youth talent selection and identification.

  17. The effect of aquatic plyometric training with and without resistance on selected physical fitness variables among volleyball players

    Directory of Open Access Journals (Sweden)

    K. KAMALAKKANNAN

    2011-06-01

    The purpose of this study is to analyze the effect of aquatic plyometric training with and without the use of weights on selected physical fitness variables among volleyball players. To achieve the purpose of this study, 36 physically active undergraduate volleyball players between 18 and 20 years of age volunteered as participants. The participants were randomly categorized into three groups of 12 each: a control group (CG), an aquatic plyometric training with weight group (APTWG), and an aquatic plyometric training without weight group (APTWOG). The subjects of the control group were not exposed to any training. Both experimental groups underwent their respective experimental treatment for 12 weeks, 3 days per week and a single session on each day. Speed, endurance, and explosive power were measured as the dependent variables for this study. 36 days of experimental treatment was conducted for all the groups and pre and post data were collected. The collected data were analyzed using an analysis of covariance (ANCOVA) followed by a Scheffé’s post hoc test. The results revealed significant differences between groups on all the selected dependent variables. This study demonstrated that aquatic plyometric training can be one effective means for improving speed, endurance, and explosive power in volleyball players

  18. TIME VARIABILITY OF EMISSION LINES FOR FOUR ACTIVE T TAURI STARS. I. OCTOBER-DECEMBER IN 2010

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Mei-Yin; Takami, Michihiro; Karr, Jennifer L.; Shang Hsien; Liu, Hauyu Baobab [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Manset, Nadine [Canada-France-Hawaii Telescope, 65-1238 Mamalahoa Hwy, Kamuela, HI 96743 (United States); Beck, Tracy [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Pyo, Tae-Soo [Subaru Telescope, 650 North Aohoku Place, Hilo, HI 96720 (United States); Chen, Wen-Ping; Panwar, Neelam [Institute of Astronomy, National Central University, Taoyuan County 32001, Taiwan (China)

    2013-04-15

    We present optical spectrophotometric monitoring of four active T Tauri stars (DG Tau, RY Tau, XZ Tau, RW Aur A) at high spectral resolution (R ≳ 1 × 10^4), to investigate the correlation between time variable mass ejection seen in the jet/wind structure of the driving source and time variable mass accretion probed by optical emission lines. This may allow us to constrain the understanding of the jet/wind launching mechanism, the location of the launching region, and the physical link with magnetospheric mass accretion. In 2010, observations were made at six different epochs to investigate how daily and monthly variability might affect such a study. We perform comparisons between the line profiles we observed and those in the literature over a period of decades and confirm the presence of time variability separate from the daily and monthly variability during our observations. This is so far consistent with the idea that these line profiles have a long-term variability (3-20 yr) related to episodic mass ejection suggested by the structures in the extended flow components. We also investigate the correlations between equivalent widths and between luminosities for different lines. We find that these correlations are consistent with the present paradigm of steady magnetospheric mass accretion and emission line regions that are close to the star.

  19. TIME VARIABILITY OF EMISSION LINES FOR FOUR ACTIVE T TAURI STARS. I. OCTOBER–DECEMBER IN 2010

    International Nuclear Information System (INIS)

    Chou, Mei-Yin; Takami, Michihiro; Karr, Jennifer L.; Shang Hsien; Liu, Hauyu Baobab; Manset, Nadine; Beck, Tracy; Pyo, Tae-Soo; Chen, Wen-Ping; Panwar, Neelam

    2013-01-01

    We present optical spectrophotometric monitoring of four active T Tauri stars (DG Tau, RY Tau, XZ Tau, RW Aur A) at high spectral resolution (R ≳ 1 × 10^4), to investigate the correlation between time variable mass ejection seen in the jet/wind structure of the driving source and time variable mass accretion probed by optical emission lines. This may allow us to constrain the understanding of the jet/wind launching mechanism, the location of the launching region, and the physical link with magnetospheric mass accretion. In 2010, observations were made at six different epochs to investigate how daily and monthly variability might affect such a study. We perform comparisons between the line profiles we observed and those in the literature over a period of decades and confirm the presence of time variability separate from the daily and monthly variability during our observations. This is so far consistent with the idea that these line profiles have a long-term variability (3-20 yr) related to episodic mass ejection suggested by the structures in the extended flow components. We also investigate the correlations between equivalent widths and between luminosities for different lines. We find that these correlations are consistent with the present paradigm of steady magnetospheric mass accretion and emission line regions that are close to the star.

  20. FCERI AND HISTAMINE METABOLISM GENE VARIABILITY IN SELECTIVE RESPONDERS TO NSAIDS

    Directory of Open Access Journals (Sweden)

    Gemma Amo

    2016-09-01

    The high-affinity IgE receptor (FcεRI) is a heterotetramer of three subunits: FcεRIα, FcεRIβ and FcεRIγ (αβγ2) encoded by three genes designated as FCER1A, FCER1B (MS4A2) and FCER1G, respectively. Recent evidence points to FCERI gene variability as a relevant factor in the risk of developing allergic diseases. Because FcεRI plays a key role in the events downstream of the triggering factors in the immunological response, we hypothesized that FCERI gene variants might be related to the risk of, or to the clinical response to, selective (IgE mediated) non-steroidal anti-inflammatory (NSAID) hypersensitivity. From a cohort of 314 patients suffering from selective hypersensitivity to metamizole, ibuprofen, diclofenac, paracetamol, acetylsalicylic acid (ASA), propifenazone, naproxen, ketoprofen, dexketoprofen, etofenamate, aceclofenac, etoricoxib, dexibuprofen, indomethacin, oxyphenylbutazone or piroxicam, and 585 unrelated healthy controls that tolerated these NSAIDs, we analyzed the putative effects of the FCERI SNPs FCER1A rs2494262, rs2427837 and rs2251746; FCER1B rs1441586, rs569108 and rs512555; FCER1G rs11587213, rs2070901 and rs11421. Furthermore, in order to identify additional genetic markers which might be associated with the risk of developing selective NSAID hypersensitivity, or which may modify the putative association of FCERI gene variations with risk, we analyzed polymorphisms known to affect histamine synthesis or metabolism, such as rs17740607, rs2073440, rs1801105, rs2052129, rs10156191, rs1049742 and rs1049793 in the HDC, HNMT and DAO genes. No major genetic associations with risk or with clinical presentation, and no gene-gene interactions or gene-phenotype interactions (including age, gender, IgE concentration, antecedents of atopy, culprit drug or clinical presentation) were identified in patients. However, logistic regression analyses indicated that the presence of antecedents of atopy and the DAO SNP rs2052129 (GG

  1. Space-Time Joint Interference Cancellation Using Fuzzy-Inference-Based Adaptive Filtering Techniques in Frequency-Selective Multipath Channels

    Directory of Open Access Journals (Sweden)

    Chen Yu-Fan

    2006-01-01

    An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and squared error variation into a forgetting factor. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step-size. This receiver is capable of providing both fast convergence/tracking capability as well as small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB in bit-error-rate (BER) for multipath fading channels.

  2. Real-time flavour tagging selection in ATLAS

    CERN Document Server

    Varni, Carlo; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment includes a well-developed trigger system that allows a selection of events which are thought to be of interest, while achieving a high overall rejection against less interesting processes. An important part of the online event selection is the ability to distinguish between jets arising from heavy-flavour quarks (b- and c-jets) and light jets (jets from u-, d-, s-quarks and gluons) in real-time. This is essential for many physics analyses that include processes with large jet multiplicity and b-quarks in the final state. An overview of the b-jet triggers with a description of the application and performance of the offline Multivariate (MV2) b-tagging algorithms at the High Level Trigger (HLT) in Run 2 will be presented. During 2016, the b-jet trigger menu and algorithms were adapted to use the Fast Tracker (FTK) system, which will be commissioned in 2017. We will show the initial expected performance of newly designed triggers and compare it with the existing HLT chains.

  3. Real-time flavour tagging selection in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407355; The ATLAS collaboration

    2015-01-01

    In high-energy physics experiments, online selection is crucial to identify the few interesting collisions from the large data volume processed. In the overall ATLAS trigger strategy, b-jet triggers are designed to identify heavy-flavour content in real-time and, in particular, provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so, two different, but related, challenges are faced. The physics goal is to optimise as far as possible the rejection of light jets from multijet processes, while retaining a high efficiency on selecting jets from beauty, and maintaining affordable trigger rates without raising jet energy thresholds. This maps into a challenging computing task, as charged tracks and their corresponding vertexes must be reconstructed and analysed for each jet above the desired threshold, regardless of the increasingly harsh pile-up conditions. The performance of b-jet triggers during the LHC Run 1 data-taking campaigns is presented, togethe...

  4. Real-time flavour tagging selection in ATLAS

    CERN Document Server

    Bertella, Claudia; The ATLAS collaboration

    2015-01-01

    In high-energy physics experiments, online selection is crucial to identify the few interesting collisions from the large data volume processed. In the overall ATLAS trigger strategy, b-jet triggers are designed to identify heavy-flavor content in real-time and, in particular, provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so, two different, but related, challenges are faced. The physics goal is to optimise as far as possible the rejection of light jets from QCD processes, while retaining a high efficiency on selecting jets from beauty and maintaining affordable trigger rates without raising jet energy thresholds. This maps into a challenging computing task, as charged tracks and their corresponding vertexes must be reconstructed and analysed for each jet above the desired threshold, regardless of the increasingly harsh pile-up conditions. The performance of b-jet triggers during the LHC Run 1 data-taking campaigns is presented, together wi...

  5. Real-time flavour tagging selection in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407355; The ATLAS collaboration

    2016-01-01

    In high-energy physics experiments, online selection is crucial to identify the few interesting collisions from the large data volume processed. In the overall ATLAS trigger strategy, b-jet triggers are designed to identify heavy-flavour content in real-time and, in particular, provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so, two different, but related, challenges are faced. The physics goal is to optimise as far as possible the rejection of light jets from multijet processes, while retaining a high efficiency on selecting jets from beauty, and maintaining affordable trigger rates without raising jet energy thresholds. This maps into a challenging computing task, as charged tracks and their corresponding vertexes must be reconstructed and analysed for each jet above the desired threshold, regardless of the increasingly harsh pile-up conditions. The performance of b-jet triggers during the LHC Run 1 data-taking campaigns is presented, togethe...

  6. Prediction of Placental Barrier Permeability: A Model Based on Partial Least Squares Variable Selection Procedure

    Directory of Open Access Journals (Sweden)

    Yong-Hong Zhang

    2015-05-01

    Assessing the human placental barrier permeability of drugs is very important to guarantee drug safety during pregnancy. The quantitative structure–activity relationship (QSAR) method was used as an effective assessing tool for the placental transfer study of drugs, while in vitro human placental perfusion is the most widely used method. In this study, the partial least squares (PLS) variable selection and modeling procedure was used to pick out optimal descriptors from a pool of 620 descriptors of 65 compounds and to simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by the clearance indices (CI). The model was subjected to internal validation by cross-validation and y-randomization and to external validation by predicting CI values of 19 compounds. It was shown that the model developed is robust and has a good predictive potential (r2 = 0.9064, RMSE = 0.09, q2 = 0.7323, rp2 = 0.7656, RMSP = 0.14). The mechanistic interpretation of the final model was given by the high variable importance in projection values of descriptors. Using the PLS procedure, we can rapidly and effectively select optimal descriptors and thus construct a model with good stability and predictability. This analysis can provide an effective tool for the high-throughput screening of the placental barrier permeability of drugs.
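
    A sketch of the kind of PLS-based descriptor selection described above: fit a PLS regression and compute Variable Importance in Projection (VIP) scores, keeping descriptors above the conventional threshold of 1.0. The synthetic data, the three components and the threshold are assumptions for illustration, not the study's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.datasets import make_regression

# Synthetic stand-in for 65 compounds described by 620 descriptors
X, y = make_regression(n_samples=65, n_features=620, n_informative=15,
                       noise=1.0, random_state=2)

pls = PLSRegression(n_components=3).fit(X, y)

def vip_scores(pls_model, X):
    """VIP_j = sqrt(p * sum_a(w_norm_ja^2 * SS_a) / sum_a(SS_a)), with SS_a the
    amount of y-variance explained by PLS component a."""
    T = pls_model.transform(X)                     # component scores
    W = pls_model.x_weights_                       # (n_features, n_components)
    Q = pls_model.y_loadings_.ravel()              # (n_components,)
    p = W.shape[0]
    ss = np.sum(T ** 2, axis=0) * Q ** 2           # per-component explained SS
    w_norm2 = (W / np.linalg.norm(W, axis=0)) ** 2
    return np.sqrt(p * (w_norm2 @ ss) / ss.sum())

vip = vip_scores(pls, X)
selected = np.flatnonzero(vip > 1.0)
print(f"{selected.size} of {X.shape[1]} descriptors retained (VIP > 1.0)")
```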

  7. Joint High-Dimensional Bayesian Variable and Covariance Selection with an Application to eQTL Analysis

    KAUST Repository

    Bhadra, Anindya

    2013-04-22

    We describe a Bayesian technique to (a) perform a sparse joint selection of significant predictor variables and significant inverse covariance matrix elements of the response variables in a high-dimensional linear Gaussian sparse seemingly unrelated regression (SSUR) setting and (b) perform an association analysis between the high-dimensional sets of predictors and responses in such a setting. To search the high-dimensional model space, where both the number of predictors and the number of possibly correlated responses can be larger than the sample size, we demonstrate that a marginalization-based collapsed Gibbs sampler, in combination with spike and slab type of priors, offers a computationally feasible and efficient solution. As an example, we apply our method to an expression quantitative trait loci (eQTL) analysis on publicly available single nucleotide polymorphism (SNP) and gene expression data for humans where the primary interest lies in finding the significant associations between the sets of SNPs and possibly correlated genetic transcripts. Our method also allows for inference on the sparse interaction network of the transcripts (response variables) after accounting for the effect of the SNPs (predictor variables). We exploit properties of Gaussian graphical models to make statements concerning conditional independence of the responses. Our method compares favorably to existing Bayesian approaches developed for this purpose. © 2013, The International Biometric Society.

  8. Low-Energy Real-Time OS Using Voltage Scheduling Algorithm for Variable Voltage Processors

    OpenAIRE

    Okuma, Takanori; Yasuura, Hiroto

    2001-01-01

    This paper presents a real-time OS based on μITRON using a proposed voltage scheduling algorithm for variable voltage processors, which can vary their supply voltage dynamically. The proposed voltage scheduling algorithms assign a voltage level to each task dynamically in order to minimize energy consumption under timing constraints. Using the presented real-time OS, running tasks with low supply voltage leads to drastic energy reduction. In addition, the presented voltage scheduling algorithm is ...

  9. Angular scanning and variable wavelength surface plasmon resonance allowing free sensor surface selection for optimum material- and bio-sensing

    NARCIS (Netherlands)

    Lakayan, Dina; Tuppurainen, Jussipekka; Albers, Martin; van Lint, Matthijs J.; van Iperen, Dick J.; Weda, Jelmer J.A.; Kuncova-Kallio, Johana; Somsen, Govert W.; Kool, Jeroen

    2018-01-01

    A variable-wavelength Kretschmann configuration surface plasmon resonance (SPR) apparatus with angle scanning is presented. The setup provides the possibility of selecting the optimum wavelength with respect to the properties of the metal layer of the sensorchip, sample matrix, and biomolecular

  10. Space and time variability of heating requirements for greenhouse tomato production in the Euro-Mediterranean area.

    Science.gov (United States)

    Mariani, Luigi; Cola, Gabriele; Bulgari, Roberta; Ferrante, Antonio; Martinetti, Livia

    2016-08-15

    The Euro-Mediterranean area hosts significant greenhouse activity, meeting the needs of important markets. A quantitative assessment of greenhouse energy consumption and of its variability in space and time is an important decision-support tool for both greenhouse-sector policies and farmers. A mathematical model of the greenhouse energy balance was developed and parameterized for a state-of-the-art greenhouse to evaluate the heating requirements for vegetable growing. Tomato was adopted as the reference crop, due to its high energy requirement for fruit setting and ripening and its economic relevance. In order to gain a proper description of the Euro-Mediterranean area, 56 greenhouse areas located within the ranges 28°N-72°N and 11°W-55°E were analyzed over the period 1973-2014. Moreover, the two sub-periods 1973-1987 and 1988-2014 were studied separately to describe climate change effects on energy consumption. Results account for the spatial variability of energy needs for tomato growing, highlighting the strong influence of latitude on the magnitude of heat requirements. The comparison between the two selected sub-periods shows a decrease of energy demand in the current warm phase, more relevant at high latitudes. Finally, suggestions to reduce energy consumption are provided. Copyright © 2016 Elsevier B.V. All rights reserved.
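
    The paper's full greenhouse energy-balance model is not reproduced in this record; as a rough illustration of how outdoor temperatures translate into heating requirements, the sketch below sums hourly conductive losses against a fixed setpoint. The U-value, surface area and 18 °C setpoint are illustrative assumptions, not the authors' parameterization.

```python
# Simplified hourly heating-requirement sketch (degree-hour style energy
# balance), not the paper's full greenhouse model; U, area and the 18 degC
# setpoint are illustrative assumptions.
import numpy as np

def heating_requirement_kwh(t_out_hourly, t_set=18.0, u_value=6.0, area=1000.0):
    """Sum hourly conductive losses whenever outside temperature < setpoint.

    t_out_hourly : array of outside air temperatures [deg C], one per hour
    u_value      : overall heat-loss coefficient of the cover [W m-2 K-1]
    area         : exposed greenhouse surface [m2]
    """
    deficit = np.clip(t_set - np.asarray(t_out_hourly), 0.0, None)  # K
    watts = u_value * area * deficit            # W, one value per hour -> Wh
    return watts.sum() / 1000.0                 # kWh

# one synthetic winter day, sinusoidal between -2 and 8 deg C
hours = np.arange(24)
t_out = 3.0 + 5.0 * np.sin(2 * np.pi * (hours - 15) / 24)
print(f"Daily heating need: {heating_requirement_kwh(t_out):.0f} kWh")
```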

  11. Dynamic variable selection in SNP genotype autocalling from APEX microarray data

    Directory of Open Access Journals (Sweden)

    Zamar Ruben H

    2006-11-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are DNA sequence variations, occurring when a single nucleotide – adenine (A), thymine (T), cytosine (C) or guanine (G) – is altered. Arguably, SNPs account for more than 90% of human genetic variation. Our laboratory has developed a highly redundant SNP genotyping assay consisting of multiple probes with signals from multiple channels for a single SNP, based on arrayed primer extension (APEX). This mini-sequencing method is a powerful combination of a highly parallel microarray with distinctive Sanger-based dideoxy terminator sequencing chemistry. Using this microarray platform, our current genotype calling system (known as SNP Chart) is capable of calling single SNP genotypes by manual inspection of the APEX data, which is time-consuming and exposed to user subjectivity bias. Results Using a set of 32 Coriell DNA samples plus three negative PCR controls as a training data set, we have developed a fully-automated genotyping algorithm based on simple linear discriminant analysis (LDA) using dynamic variable selection. The algorithm combines separate analyses based on the multiple probe sets to give a final posterior probability for each candidate genotype. We have tested our algorithm on a completely independent data set of 270 DNA samples, with validated genotypes, from patients admitted to the intensive care unit (ICU) of St. Paul's Hospital (plus one negative PCR control sample). Our method achieves a concordance rate of 98.9% with a 99.6% call rate for a set of 96 SNPs. By adjusting the threshold value for the final posterior probability of the called genotype, the call rate reduces to 94.9% with a higher concordance rate of 99.6%. We also reversed the two independent data sets in their training and testing roles, achieving a concordance rate up to 99.8%. Conclusion The strength of this APEX chemistry-based platform is its unique redundancy having multiple probes for a single SNP. Our
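
    To make the calling idea concrete, here is a minimal sketch (not the SNP Chart code) of the general pattern described above: one linear discriminant classifier per probe set, posterior probabilities averaged across probe sets, and a no-call threshold on the combined posterior. Array shapes, the 0.9 threshold and the toy data are hypothetical.

```python
# Sketch: per-probe-set LDA classifiers whose posterior probabilities are
# averaged to call a genotype; not the authors' implementation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

GENOTYPES = ["AA", "AB", "BB"]   # alphabetical, matching sklearn's classes_ order

def call_genotype(probe_train, labels, probe_new, threshold=0.9):
    """probe_train/probe_new: lists with one (n_samples, n_channels) array per probe set."""
    posteriors = []
    for X_train, X_new in zip(probe_train, probe_new):
        lda = LinearDiscriminantAnalysis().fit(X_train, labels)
        posteriors.append(lda.predict_proba(X_new))        # (n_new, 3)
    combined = np.mean(posteriors, axis=0)                  # average over probe sets
    best = combined.argmax(axis=1)
    calls = [GENOTYPES[i] if combined[n, i] >= threshold else "no-call"
             for n, i in enumerate(best)]
    return calls, combined

# toy usage with three probe sets of four signal channels each
rng = np.random.default_rng(1)
labels = np.repeat(GENOTYPES, 10)
train = [rng.normal(size=(30, 4)) + i for i in range(3)]
new = [rng.normal(size=(5, 4)) + i for i in range(3)]
print(call_genotype(train, labels, new)[0])
```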

  12. Fixation times in evolutionary games under weak selection

    International Nuclear Information System (INIS)

    Altrock, Philipp M; Traulsen, Arne

    2009-01-01

    In evolutionary game dynamics, reproductive success increases with the performance in an evolutionary game. If strategy A performs better than strategy B, strategy A will spread in the population. Under stochastic dynamics, a single mutant will sooner or later take over the entire population or go extinct. We analyze the mean exit times (or average fixation times) associated with this process. We show analytically that these times depend on the payoff matrix of the game in an amazingly simple way under weak selection, i.e. strong stochasticity: the payoff difference Δπ is a linear function of the number of A individuals i, Δπ=u i+v. The unconditional mean exit time depends only on the constant term v. Given that a single A mutant takes over the population, the corresponding conditional mean exit time depends only on the density dependent term u. We demonstrate this finding for two commonly applied microscopic evolutionary processes.
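
    A simple way to check such a dependence numerically is to simulate the underlying birth-death process directly. The sketch below gives a Monte Carlo estimate of the unconditional and conditional mean exit times for a single A mutant in a Moran process under weak selection; the payoff matrix, population size and selection intensity are illustrative choices, not values from the paper.

```python
# Monte Carlo sketch of unconditional and conditional mean exit (fixation)
# times for a single A mutant in a Moran process under weak selection.
# Payoff matrix entries and the intensity of selection w are illustrative.
import numpy as np

def mean_exit_times(a, b, c, d, N=50, w=0.01, runs=5000, rng=None):
    rng = rng or np.random.default_rng(0)
    t_all, t_fix = [], []
    for _ in range(runs):
        i, t = 1, 0                                  # start with one A mutant
        while 0 < i < N:
            pi_a = (a * (i - 1) + b * (N - i)) / (N - 1)   # payoff of A at state i
            pi_b = (c * i + d * (N - i - 1)) / (N - 1)     # payoff of B at state i
            f_a, f_b = 1 - w + w * pi_a, 1 - w + w * pi_b  # weak-selection fitness
            tot = i * f_a + (N - i) * f_b
            t_plus = (i * f_a / tot) * (N - i) / N         # i -> i+1
            t_minus = ((N - i) * f_b / tot) * i / N        # i -> i-1
            r = rng.random()
            if r < t_plus:
                i += 1
            elif r < t_plus + t_minus:
                i -= 1
            t += 1
        t_all.append(t)
        if i == N:
            t_fix.append(t)
    return np.mean(t_all), (np.mean(t_fix) if t_fix else np.nan)

t_uncond, t_cond = mean_exit_times(a=2.0, b=5.0, c=1.0, d=3.0)
print(f"unconditional mean exit time: {t_uncond:.0f} elementary steps, "
      f"conditional on fixation: {t_cond:.0f}")
```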

  13. FIRE: an SPSS program for variable selection in multiple linear regression analysis via the relative importance of predictors.

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2011-03-01

    We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.

  14. Stroop interference and the timing of selective response activation.

    NARCIS (Netherlands)

    Lansbergen, M.M.; Kenemans, J.L.

    2008-01-01

    OBJECTIVE: To examine the exact timing of selective response activation in a manual color-word Stroop task. METHODS: Healthy individuals performed two versions of a manual color-word Stroop task, varying in the probability of incongruent color-words, while EEG was recorded. RESULTS: Stroop

  15. Spontaneous variability of pre-dialysis concentrations of uremic toxins over time in stable hemodialysis patients.

    Directory of Open Access Journals (Sweden)

    Sunny Eloot

    Full Text Available Numerous outcome studies and interventional trials in hemodialysis (HD) patients are based on uremic toxin concentrations determined at one single or a limited number of time points. The reliability of these studies, however, entirely depends on how representative these cross-sectional concentrations are. We therefore investigated the variability of predialysis concentrations of uremic toxins over time. Prospectively collected predialysis serum samples of the midweek session of weeks 0, 1, 2, 3, 4, 8, 12, and 16 were analyzed for a panel of uremic toxins in stable chronic HD patients (N = 18), while maintaining dialyzer type and dialysis mode during the study period. Concentrations of the analyzed uremic toxins varied substantially between individuals, but also within stable HD patients (intra-patient variability). For urea, creatinine, beta-2-microglobulin, and some protein-bound uremic toxins, the intra-class correlation coefficient (ICC) was higher than 0.7. However, for phosphorus, uric acid, symmetric and asymmetric dimethylarginine, and the protein-bound toxins hippuric acid and indoxyl sulfate, ICC values were below 0.7, implying a concentration variability within the individual patient even exceeding 65% of the observed inter-patient variability. Intra-patient variability may affect the interpretation of the association between a single concentration of certain uremic toxins and outcomes. When performing future outcome and interventional studies with uremic toxins other than those described here, one should quantify their intra-patient variability and take into account that, for solutes with a large intra-patient variability, associations could be missed.
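
    For readers who want to reproduce the 0.7 cut-off logic on their own data, the sketch below computes a one-way random-effects intraclass correlation coefficient, ICC(1,1), from a patients-by-visits matrix of repeated pre-dialysis concentrations; the toy data are hypothetical.

```python
# Minimal one-way random-effects ICC(1,1) sketch from an (n_patients, n_visits)
# matrix of repeated pre-dialysis concentrations; the toy data are hypothetical.
import numpy as np

def icc_oneway(x):
    """x: (n subjects, k repeated measurements), no missing values."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(2)
patient_level = rng.normal(60, 15, size=(18, 1))          # stable between-patient part
visits = patient_level + rng.normal(0, 5, size=(18, 8))   # 8 mid-week samples
print(f"ICC = {icc_oneway(visits):.2f}")                   # close to 1 => stable over time
```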

  16. Beyond space and time: advanced selection for seismological data

    Science.gov (United States)

    Trabant, C. M.; Van Fossen, M.; Ahern, T. K.; Casey, R. E.; Weertman, B.; Sharer, G.; Benson, R. B.

    2017-12-01

    Separating the available raw data from that useful for any given study is often a tedious step in a research project, particularly for first-order data quality problems such as broken sensors, incorrect response information, and non-continuous time series. With the ever increasing amounts of data available to researchers, this chore becomes more and more time consuming. To assist users in this pre-processing of data, the IRIS Data Management Center (DMC) has created a system called Research Ready Data Sets (RRDS). The RRDS system allows researchers to apply filters that constrain their data request using criteria related to signal quality, response correctness, and high resolution data availability. In addition to the traditional selection methods of stations at a geographic location for given time spans, RRDS will provide enhanced criteria for data selection based on many of the measurements available in the DMC's MUSTANG quality control system. This means that data may be selected based on background noise (tolerance relative to high and low noise Earth models), signal-to-noise ratio for earthquake arrivals, signal RMS, instrument response corrected signal correlation with Earth tides, time tear (gaps/overlaps) counts, timing quality (when reported in the raw data by the datalogger) and more. The new RRDS system is available as a web service designed to operate as a request filter. A request is submitted containing the traditional station and time constraints as well as data quality constraints. The request is then filtered and a report is returned that indicates 1) the request that would subsequently be submitted to a data access service, 2) a record of the quality criteria specified and 3) a record of the data rejected based on those criteria, including the relevant values. This service can be used to either filter a request prior to requesting the actual data or to explore which data match a set of enhanced criteria without downloading the data. We are

  17. Managing anthelmintic resistance-Variability in the dose of drug reaching the target worms influences selection for resistance?

    Science.gov (United States)

    Leathwick, Dave M; Luo, Dongwen

    2017-08-30

    The concentration profile of anthelmintic reaching the target worms in the host can vary between animals even when administered doses are tailored to individual liveweight at the manufacturer's recommended rate. Factors contributing to variation in drug concentration include weather, breed of animal, formulation and the route by which drugs are administered. The implications of this variability for the development of anthelmintic resistance were investigated using Monte Carlo simulation. A model framework was established where 100 animals each received a single drug treatment. The 'dose' of drug allocated to each animal (i.e. the concentration-time profile of drug reaching the target worms) was sampled at random from a distribution of doses with mean m and standard deviation s. For each animal the dose of drug was used in conjunction with pre-determined dose-response relationships, representing single and poly-genetic inheritance, to calculate efficacy against susceptible and resistant genotypes. These data were then used to calculate the overall change in resistance gene frequency for the worm population as a result of the treatment. Values for m and s were varied to reflect differences in both mean dose and the variability in dose, and for each combination of these 100,000 simulations were run. The resistance gene frequency in the population after treatment increased as m decreased and as s increased. This occurred for both single and poly-gene models and for different levels of dominance (survival under treatment) of the heterozygote genotype(s). The results indicate that factors which result in lower and/or more variable concentrations of active reaching the target worms are more likely to select for resistance. The potential of different routes of anthelmintic administration to play a role in the development of anthelmintic resistance is discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
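
    A stripped-down version of this kind of simulation is sketched below for the single-gene case: each animal draws a dose from a normal distribution with mean m and standard deviation s, genotype survival follows an assumed logistic dose-response, and the post-treatment resistance-allele frequency is averaged over simulations. The dose-response parameters, initial allele frequency and simulation counts are illustrative assumptions, not the paper's fitted values, and the direction and size of the effect depend on the assumed curves.

```python
# Monte Carlo sketch of the single-gene case: random per-animal exposure,
# logistic dose-response per genotype, post-treatment R-allele frequency.
# All parameters are illustrative, not the paper's values.
import numpy as np

def post_treatment_r_freq(mean_dose, sd_dose, p_r=0.01, n_animals=100,
                          n_sims=2000, rng=None):
    rng = rng or np.random.default_rng(3)
    ed50 = {"SS": 0.6, "RS": 1.0, "RR": 1.6}   # hypothetical genotype ED50s
    slope = 6.0
    geno_freq = {"SS": (1 - p_r) ** 2, "RS": 2 * p_r * (1 - p_r), "RR": p_r ** 2}
    freqs = np.empty(n_sims)
    for s in range(n_sims):
        doses = rng.normal(mean_dose, sd_dose, size=n_animals).clip(min=0)
        survivors = {}
        for g, e in ed50.items():
            surv_prob = 1.0 / (1.0 + (doses / e) ** slope)   # kill rises with dose
            survivors[g] = geno_freq[g] * surv_prob.sum()
        total = sum(survivors.values())
        freqs[s] = (survivors["RR"] + 0.5 * survivors["RS"]) / total
    return freqs.mean()

for m, s in [(2.0, 0.2), (2.0, 0.8), (1.2, 0.2)]:
    print(f"mean dose {m}, sd {s}: R-allele freq after treatment "
          f"= {post_treatment_r_freq(m, s):.3f}")
```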

  18. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  19. An experiment on selecting most informative variables in socio-economic data

    Directory of Open Access Journals (Sweden)

    L. Jenkins

    2014-01-01

    Full Text Available In many studies where data are collected on several variables, there is a motivation to find if fewer variables would provide almost as much information. Variance of a variable about its mean is the common statistical measure of information content, and that is used here. We are interested whether the variability in one variable is sufficiently correlated with that in one or more of the other variables that the first variable is redundant. We wish to find one or more ‘principal variables’ that sufficiently reflect the information content in all the original variables. The paper explains the method of principal variables and reports experiments using the technique to see if just a few variables are sufficient to reflect the information in 11 socioeconomic variables on 130 countries from a World Bank (WB) database. While the method of principal variables is highly successful in a statistical sense, the WB data varies greatly from year to year, demonstrating that fewer variables would be inadequate for this data.

  20. Modifiable variables in physical therapy education programs associated with first-time and three-year National Physical Therapy Examination pass rates in the United States

    Directory of Open Access Journals (Sweden)

    Chad Cook

    2015-09-01

    Full Text Available Purpose: This study aimed to examine the modifiable programmatic characteristics reflected in the Commission on Accreditation in Physical Therapy Education (CAPTE) Annual Accreditation Report for all accredited programs that reported pass rates on the National Physical Therapist Examination, and to build a predictive model for first-time and three-year ultimate pass rates. Methods: This observational study analyzed programmatic information from the 185 CAPTE-accredited physical therapy programs in the United States and Puerto Rico, out of a total of 193 programs, that provided the first-time and three-year ultimate pass rates in 2011. Fourteen predictive variables representing student selection and composition, clinical education length and design, and general program length and design were analyzed against first-time pass rates and ultimate pass rates on the NPTE. Univariate and multivariate multinomial regression analysis for first-time pass rates and logistic regression analysis for three-year ultimate pass rates were performed. Results: The variables associated with the first-time pass rate in the multivariate analysis were the mean undergraduate grade point average (GPA) and the average age of the cohort. Multivariate analysis showed that mean undergraduate GPA was associated with the three-year ultimate pass rate. Conclusions: Mean undergraduate GPA was found to be the only modifiable predictor for both first-time and three-year pass rates among CAPTE-accredited physical therapy programs.

  1. ANALYSIS THE DIURNAL VARIATIONS ON SELECTED PHYSICAL AND PHYSIOLOGICAL PARAMETERS

    Directory of Open Access Journals (Sweden)

    A. MAHABOOBJAN

    2010-12-01

    Full Text Available The purpose of the study was to analyze the diurnal variations in selected physical and physiological parameters, such as speed, explosive power, resting heart rate and breath-holding time, among college students. To achieve the purpose of this study, a total of twenty players (n=20) from Government Arts College, Salem, were selected as subjects. To study the diurnal variation of the players on the selected physiological and performance variables, data were collected four times a day at four-hour intervals, from 6.00 to 18.00 hours, with time of day treated as a categorical variable. One-way repeated-measures ANOVA was used to analyze the data. If the obtained F-ratio was significant, Scheffé's post-hoc test was used to find out the significant difference, if any, among the paired means. The level of significance was fixed at the .05 level. It was concluded that both physical and physiological parameters differed significantly with reference to the change of temperature during the day.

  2. Constraints on arm selection processes when reaching: degrees of freedom and joint amplitudes interact to influence limb selection.

    Science.gov (United States)

    Kim, Wondae; Buchanan, John; Gabbard, Carl

    2011-01-01

    With an interest in identifying the variables that constrain arm choice when reaching, the authors had 11 right-handed participants perform free-choice and assigned-limb reaches at 9 object positions. The right arm was freely selected 100% of the time when reaching to positions at 30° and 40° into right hemispace. However, the left arm was freely selected to reach to positions at -30° and -40° in left hemispace 85% of the time. A comparison between free- and assigned-limb reaching kinematics revealed that free limb selection when reaching to the farthest positions was constrained by joint amplitude requirements and the time devoted to limb deceleration. Differences between free- and assigned-arm reaches were not evident when reaching to the midline and positions of ±10°, even though the right arm was freely selected most often for these positions. Different factors contribute to limb selection as a function of distance into a specific hemispace.

  3. Variable Selection for Nonparametric Gaussian Process Priors: Models and Computational Strategies.

    Science.gov (United States)

    Savitsky, Terrance; Vannucci, Marina; Sha, Naijun

    2011-02-01

    This paper presents a unified treatment of Gaussian process models that extends to data from the exponential dispersion family and to survival data. Our specific interest is in the analysis of data sets with predictors that have an a priori unknown form of possibly nonlinear associations to the response. The modeling approach we describe incorporates Gaussian processes in a generalized linear model framework to obtain a class of nonparametric regression models where the covariance matrix depends on the predictors. We consider, in particular, continuous, categorical and count responses. We also look into models that account for survival outcomes. We explore alternative covariance formulations for the Gaussian process prior and demonstrate the flexibility of the construction. Next, we focus on the important problem of selecting variables from the set of possible predictors and describe a general framework that employs mixture priors. We compare alternative MCMC strategies for posterior inference and achieve a computationally efficient and practical approach. We demonstrate performances on simulated and benchmark data sets.

  4. gamboostLSS: An R Package for Model Building and Variable Selection in the GAMLSS Framework

    Directory of Open Access Journals (Sweden)

    Benjamin Hofner

    2016-10-01

    Full Text Available Generalized additive models for location, scale and shape are a flexible class of regression models that allow multiple parameters of a distribution function, such as the mean and the standard deviation, to be modeled simultaneously. With the R package gamboostLSS, we provide a boosting method to fit these models. Variable selection and model choice are naturally available within this regularized regression framework. To introduce and illustrate the R package gamboostLSS and its infrastructure, we use a data set on stunted growth in India. In addition to the specification and application of the model itself, we present a variety of convenience functions, including methods for tuning parameter selection, prediction and visualization of results. The package gamboostLSS is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=gamboostLSS.

  5. An Epidemic Model of Computer Worms with Time Delay and Variable Infection Rate

    Directory of Open Access Journals (Sweden)

    Yu Yao

    2018-01-01

    Full Text Available With the rapid development of the Internet, network security issues have become increasingly serious. Temporary patches have been put on infectious hosts, and these patches may lose efficacy on occasion. This leads to a time delay when vaccinated hosts change back to susceptible hosts. On the other hand, worm infection is usually a nonlinear process. Considering the actual situation, a variable infection rate is introduced to describe the spread process of worms. Based on these aspects, we propose a time-delayed worm propagation model with a variable infection rate. Then the existence condition and the stability of the positive equilibrium are derived. Due to the existence of the time delay, the worm propagation system may be unstable and out of control. Moreover, the threshold τ0 of the Hopf bifurcation is obtained. The worm propagation system is stable if the time delay is less than τ0; when the time delay exceeds τ0, the system becomes unstable. In addition, numerical experiments have been performed, which match the conclusions we deduce. The numerical experiments also show that there exists a threshold in the parameter a, which implies that we should choose an appropriate infection rate β(t) to constrain worm prevalence. Finally, simulation experiments are carried out to prove the validity of our conclusions.

  6. The first-passage time distribution for the diffusion model with variable drift

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Kesselmeier, Miriam; Gondan, Matthias

    2017-01-01

    The Ratcliff diffusion model is now arguably the most widely applied model for response time data. Its major advantage is its description of both response times and the probabilities for correct as well as incorrect responses. The model assumes a Wiener process with drift between two constant ... across trials. This extra flexibility allows accounting for slow errors that often occur in response time experiments. So far, the predicted response time distributions were obtained by numerical evaluation, as analytical solutions were not available. Here, we present an analytical expression for the cumulative first-passage time distribution in the diffusion model with normally distributed trial-to-trial variability in the drift. The solution is obtained with predefined precision, and its evaluation turns out to be extremely fast.
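
    Although the analytical expression itself is not reproduced in this record, the quantity it describes is easy to approximate by simulation. The sketch below draws a drift value per trial from a normal distribution and integrates a two-boundary Wiener process to obtain first-passage times; drift mean and SD, boundary separation and step size are illustrative parameters, not the paper's.

```python
# Monte Carlo sketch of first-passage times in a two-boundary Wiener diffusion
# with normally distributed trial-to-trial drift; parameters are illustrative.
import numpy as np

def simulate_rts(n_trials=2000, drift_mean=0.3, drift_sd=0.15,
                 boundary=1.0, start=0.5, dt=0.001, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng(4)
    rts = np.empty(n_trials)
    hits_upper = np.empty(n_trials, dtype=bool)
    for n in range(n_trials):
        v = rng.normal(drift_mean, drift_sd)      # drift drawn once per trial
        x, t = start, 0.0
        while 0.0 < x < boundary:                 # absorb at 0 or at the boundary
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[n], hits_upper[n] = t, x >= boundary
    return rts, hits_upper

rts, upper = simulate_rts()
print(f"P(upper boundary) = {upper.mean():.2f}, "
      f"mean RT upper = {rts[upper].mean():.3f} s, "
      f"mean RT lower = {rts[~upper].mean():.3f} s")
```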

  7. Estimating inter-annual variability in winter wheat sowing dates from satellite time series in Camargue, France

    Science.gov (United States)

    Manfron, Giacinto; Delmotte, Sylvestre; Busetto, Lorenzo; Hossard, Laure; Ranghetti, Luigi; Brivio, Pietro Alessandro; Boschetti, Mirco

    2017-05-01

    Crop simulation models are commonly used to forecast the performance of cropping systems under different hypotheses of change. Their use on a regional scale is generally constrained, however, by a lack of information on the spatial and temporal variability of environment-related input variables (e.g., soil) and agricultural practices (e.g., sowing dates) that influence crop yields. Satellite remote sensing data can shed light on such variability by providing timely information on crop dynamics and conditions over large areas. This paper proposes a method for analyzing time series of MODIS satellite data in order to estimate the inter-annual variability of winter wheat sowing dates. A rule-based method was developed to automatically identify a reliable sample of winter wheat field time series, and to infer the corresponding sowing dates. The method was designed for a case study in the Camargue region (France), where winter wheat is characterized by vernalization, as in other temperate regions. The detection criteria were chosen on the grounds of agronomic expertise and by analyzing high-confidence time-series vegetation index profiles for winter wheat. This automatic method identified the target crop on more than 56% (four-year average) of the cultivated areas, with low commission errors (11%). It also captured the seasonal variability in sowing dates with errors of ±8 and ±16 days in 46% and 66% of cases, respectively. Extending the analysis to the years 2002-2012 showed that sowing in the Camargue was usually done on or around November 1st (±4 days). Comparing inter-annual sowing date variability with the main local agro-climatic drivers showed that the type of preceding crop and the weather conditions during the summer season before the wheat sowing had a prominent role in influencing winter wheat sowing dates.

  8. Survival time and effect of selected predictor variables on survival in owned pet cats seropositive for feline immunodeficiency and leukemia virus attending a referral clinic in northern Italy.

    Science.gov (United States)

    Spada, Eva; Perego, Roberta; Sgamma, Elena Assunta; Proverbio, Daniela

    2018-02-01

    Feline immunodeficiency virus (FIV) and feline leukemia virus (FeLV) are among the most important feline infectious diseases worldwide. This retrospective study investigated survival times and the effects of selected predictor factors on survival time in a population of owned pet cats in Northern Italy testing positive for FIV antibodies and FeLV antigen. One hundred and three retrovirus-seropositive cats, comprising 53 FIV-seropositive cats, 40 FeLV-seropositive cats, and 10 FIV+FeLV-seropositive cats, were included in the study. A population of 103 retrovirus-seronegative, age- and sex-matched cats was selected. Survival time was calculated and compared between retrovirus-seronegative, FIV, FeLV and FIV+FeLV-seropositive cats using Kaplan-Meier survival analysis. Cox proportional-hazards regression analysis was used to study the effect of selected predictor factors (male gender, peripheral blood cytopenia as reduced red blood cell (RBC) count, leukopenia, neutropenia and lymphopenia, hypercreatininemia and reduced albumin-to-globulin ratio) on survival time in the retrovirus-seropositive populations. Median survival times for seronegative cats, FIV, FeLV and FIV+FeLV-seropositive cats were 3960, 2040, 714 and 77 days, respectively. Compared to retrovirus-seronegative cats, median survival time was significantly lower (P<0.000) in FeLV and FIV+FeLV-seropositive cats. Median survival time in FeLV and FIV+FeLV-seropositive cats was also significantly lower (P<0.000) when compared to FIV-seropositive cats. The hazard ratio of death in FeLV and FIV+FeLV-seropositive cats was respectively 3.4 and 7.4 times higher in comparison to seronegative cats, and 2.3 and 4.8 times higher as compared to FIV-seropositive cats. A Cox proportional-hazards regression analysis showed that FIV and FeLV-seropositive cats with reduced RBC counts at the time of diagnosis of seropositivity had significantly shorter survival times when compared to FIV and Fe
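
    The analysis pattern described here (Kaplan-Meier estimates per serostatus group plus a Cox proportional-hazards model for predictor factors) can be sketched with the lifelines package as below; the data frame, column names and toy numbers are hypothetical, not the study data.

```python
# Sketch of Kaplan-Meier per-group estimates and a Cox PH model using lifelines;
# the synthetic data frame and its columns are placeholders.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(5)
n = 200
df = pd.DataFrame({
    "days": rng.exponential(1500, n).round(),   # follow-up time
    "died": rng.integers(0, 2, n),              # 1 = death observed, 0 = censored
    "felv_pos": rng.integers(0, 2, n),
    "fiv_pos": rng.integers(0, 2, n),
    "reduced_rbc": rng.integers(0, 2, n),
})

# Kaplan-Meier median survival per FeLV status
for status, grp in df.groupby("felv_pos"):
    km = KaplanMeierFitter().fit(grp["days"], event_observed=grp["died"])
    print(f"FeLV={status}: median survival = {km.median_survival_time_:.0f} days")

# Cox model: effect of serostatus and reduced RBC count on the hazard of death
cph = CoxPHFitter().fit(df, duration_col="days", event_col="died")
print(cph.summary[["exp(coef)", "p"]])   # hazard ratios and p-values
```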

  9. Time variability of C-reactive protein: implications for clinical risk stratification.

    Directory of Open Access Journals (Sweden)

    Peter Bogaty

    Full Text Available C-reactive protein (CRP) is proposed as a screening test for predicting risk and guiding preventive approaches in coronary artery disease (CAD). However, the stability of repeated CRP measurements over time in subjects with and without CAD is not well defined. We sought to determine the stability of serial CRP measurements in stable subjects with distinct CAD manifestations and a group without CAD while carefully controlling for known confounders. We prospectively studied 4 groups of 25 stable subjects each: (1) a history of recurrent acute coronary events; (2) a single myocardial infarction ≥7 years ago; (3) longstanding CAD (≥7 years) that had never been unstable; (4) no CAD. Fifteen measurements of CRP were obtained to cover 21 time-points: 3 times during one day; 5 consecutive days; 4 consecutive weeks; 4 consecutive months; and every 3 months over the year. The CRP risk threshold was set at 2.0 mg/L. We estimated variance across time-points using standard descriptive statistics and Bayesian hierarchical models. Median CRP values of the 4 groups and their pattern of variability did not differ substantially, so all subjects were analyzed together. The median individual standard deviation (SD) of CRP values within-day, within-week, between-weeks and between-months were 0.07, 0.19, 0.36 and 0.63 mg/L, respectively. Forty-six percent of subjects changed CRP risk category at least once and 21% had ≥4 weekly and monthly CRP values in both low- and high-risk categories. Considering its large intra-individual variability, it may be problematic to rely on CRP values for CAD risk prediction and therapeutic decision-making in individual subjects.

  10. Quantum mechanics of time travel through post-selected teleportation

    International Nuclear Information System (INIS)

    Lloyd, Seth; Garcia-Patron, Raul; Maccone, Lorenzo; Giovannetti, Vittorio; Shikano, Yutaka

    2011-01-01

    This paper discusses the quantum mechanics of closed-timelike curves (CTCs) and of other potential methods for time travel. We analyze a specific proposal for such quantum time travel, the quantum description of CTCs based on post-selected teleportation (P-CTCs). We compare the theory of P-CTCs to previously proposed quantum theories of time travel: the theory is inequivalent to Deutsch's theory of CTCs, but it is consistent with path-integral approaches (which are the best suited for analyzing quantum-field theory in curved space-time). We derive the dynamical equations that a chronology-respecting system interacting with a CTC will experience. We discuss the possibility of time travel in the absence of general-relativistic closed-timelike curves, and investigate the implications of P-CTCs for enhancing the power of computation.

  11. Multivariate time series modeling of selected childhood diseases in ...

    African Journals Online (AJOL)

    This paper focuses on modeling the five most prevalent childhood diseases in Akwa Ibom State using a multivariate approach to time series. An aggregate of 78,839 reported cases of malaria, upper respiratory tract infection (URTI), pneumonia, anaemia and tetanus were extracted from five randomly selected hospitals in ...
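
    One common multivariate approach to such counts is a vector autoregression. The sketch below fits a VAR to synthetic monthly case counts with statsmodels; the disease columns, Poisson rates and lag order are placeholders, not the study's data or model specification.

```python
# Sketch of a vector autoregressive (VAR) model for monthly disease counts;
# the synthetic counts and the fixed lag order of 2 are illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
months = pd.date_range("2010-01", periods=120, freq="MS")
data = pd.DataFrame(
    rng.poisson(lam=[50, 30, 20, 15, 5], size=(120, 5)),
    index=months,
    columns=["malaria", "urti", "pneumonia", "anaemia", "tetanus"],
)

model = VAR(data)
results = model.fit(2)                         # lag order fixed at 2 for the sketch
print(results.summary())
forecast = results.forecast(data.values[-results.k_ar:], steps=12)
print(forecast.shape)                          # 12-month-ahead forecasts, 5 series
```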

  12. Energy decay of a variable-coefficient wave equation with nonlinear time-dependent localized damping

    Directory of Open Access Journals (Sweden)

    Jieqiong Wu

    2015-09-01

    Full Text Available We study the energy decay for the Cauchy problem of the wave equation with nonlinear time-dependent and space-dependent damping. The damping is localized in a bounded domain and near infinity, and the principal part of the wave equation has a variable-coefficient. We apply the multiplier method for variable-coefficient equations, and obtain an energy decay that depends on the property of the coefficient of the damping term.

  13. Towards a More Biologically-meaningful Climate Characterization: Variability in Space and Time at Multiple Scales

    Science.gov (United States)

    Christianson, D. S.; Kaufman, C. G.; Kueppers, L. M.; Harte, J.

    2013-12-01

    fine-spatial scales (sub-meter to 10-meter) shows greater temperature variability with warmer mean temperatures. This is inconsistent with the inherent assumption made in current species distribution models that fine-scale variability is static, implying that current projections of future species ranges may be biased -- the direction and magnitude requiring further study. While we focus our findings on the cross-scaling characteristics of temporal and spatial variability, we also compare the mean-variance relationship between 1) experimental climate manipulations and observed conditions and 2) temporal versus spatial variance, i.e., variability in a time-series at one location vs. variability across a landscape at a single time. The former informs the rich debate concerning the ability to experimentally mimic a warmer future. The latter informs space-for-time study design and analyses, as well as species persistence via a combined spatiotemporal probability of suitable future habitat.

  14. Sodium bicarbonate ingestion and individual variability in time-to-peak pH.

    Science.gov (United States)

    Sparks, Andy; Williams, Emily; Robinson, Amy; Miller, Peter; Bentley, David J; Bridge, Craig; Mc Naughton, Lars R

    2017-01-01

    This study determined variability in time-to-peak pH after consumption of 300 mg·kg−1 of sodium bicarbonate. Seventeen participants (mean ± SD: age 21.38 ± 1.5 years; mass 75.8 ± 5.8 kg; height 176.8 ± 7.6 cm) reported to the laboratory, where a resting capillary sample was taken. Then, 300 mg·kg−1 of NaHCO3 in 450 ml of flavoured water was ingested. Participants rested for 90 min, and repeated blood samples were procured at 10 min intervals for 60 min and then every 5 min until 90 min. Blood pH was measured. Results suggested that time-to-peak pH (64.41 ± 18.78 min) was variable, with a range of 10-85 min and a coefficient of variation of 29.16%. A bimodal distribution occurred, at 65 and 75 min. In conclusion, athletes using NaHCO3 as an ergogenic aid should determine their individual time-to-peak pH to best utilize the added buffering capacity this substance allows.

  15. Industrial implementation of spatial variability control by real-time SPC

    Science.gov (United States)

    Roule, O.; Pasqualini, F.; Borde, M.

    2016-10-01

    Advanced technology nodes require more and more information to get the wafer process properly set up. The critical dimension of components decreases following Moore's law. At the same time, the intra-wafer dispersion linked to the spatial non-uniformity of tool processes does not decrease in the same proportion. APC (Advanced Process Control) systems are being developed in the wafer fab to automatically adjust and tune wafer processing, based on extensive process context information. They can generate and monitor complex intra-wafer process profile corrections between different process steps. This leads us to bring the spatial variability under control in real time with our SPC (Statistical Process Control) system. This paper will outline the architecture of an integrated process control system for shape monitoring in 3D, implemented in the wafer fab.

  16. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right

  17. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  18. Separating different scales of motion in time series of meteorological variables

    International Nuclear Information System (INIS)

    Eskridge, R.E.; Rao, S.T.; Porter, P.S.

    1997-01-01

    In this study, four methods are evaluated for detecting and tracking changes in time series of climate variables. The PEST algorithm and the monthly anomaly technique are shown to have shortcomings, while the wavelet transform and Kolmogorov-Zurbenko (KZ) filter methods are shown to be capable of separating time scales with minimal errors. The behavior of the filters is examined by transfer functions. The KZ filter, anomaly technique, and PEST were also applied to temperature data to estimate long-term trends. The KZ filter provides estimates with about 10 times higher confidence than the other methods. Advantages of the KZ filter over the wavelet transform method are that it may be applied to datasets containing missing observations and is very easy to use. 10 refs., 8 figs., 1 tab
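
    The KZ filter itself is simply an iterated centred moving average, which is why it tolerates missing observations and is easy to apply. A minimal sketch follows, with illustrative window and iteration choices rather than the study's settings.

```python
# Minimal Kolmogorov-Zurbenko (KZ) filter sketch: k iterations of a centred
# moving average of window length m, applied to a synthetic daily series.
import numpy as np
import pandas as pd

def kz_filter(series, window, iterations):
    """KZ(m, k) low-pass filter; tolerates missing values via the pandas mean."""
    out = pd.Series(series).astype(float)
    for _ in range(iterations):
        out = out.rolling(window, center=True, min_periods=1).mean()
    return out

# synthetic daily temperatures: weak trend + annual cycle + noise
days = np.arange(3 * 365)
temps = (0.001 * days
         + 10 * np.sin(2 * np.pi * days / 365)
         + np.random.default_rng(7).normal(0, 3, days.size))
long_term = kz_filter(temps, window=365, iterations=3)   # KZ(365, 3): baseline component
print(long_term.iloc[::180].round(2).to_list())
```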

  19. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    Science.gov (United States)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers the case where the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our studies, we also assume that the backorder rate is dependent on the length of lead time through the amount of shortages, and let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions; we then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution-free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are given to illustrate the results.

  20. Aberrant Time to Most Recent Common Ancestor as a Signature of Natural Selection.

    Science.gov (United States)

    Hunter-Zinck, Haley; Clark, Andrew G

    2015-10-01

    Natural selection inference methods often target one mode of selection of a particular age and strength. However, detecting multiple modes simultaneously, or with atypical representations, would be advantageous for understanding a population's evolutionary history. We have developed an anomaly detection algorithm using distributions of pairwise time to most recent common ancestor (TMRCA) to simultaneously detect multiple modes of natural selection in whole-genome sequences. As natural selection distorts local genealogies in distinct ways, the method uses pairwise TMRCA distributions, which approximate genealogies at a nonrecombining locus, to detect distortions without targeting a specific mode of selection. We evaluate the performance of our method, TSel, for both positive and balancing selection over different time-scales and selection strengths and compare TSel's performance with that of other methods. We then apply TSel to the Complete Genomics diversity panel, a set of human whole-genome sequences, and recover loci previously inferred to be under positive or balancing selection. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. Variable Neighbourhood Search and Mathematical Programming for Just-in-Time Job-Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Sunxin Wang

    2014-01-01

    Full Text Available This paper presents a combination of variable neighbourhood search and mathematical programming to minimize the sum of earliness and tardiness penalty costs of all operations for the just-in-time job-shop scheduling problem (JITJSSP). Unlike the classical E/T scheduling problem, in which each job has its earliness or tardiness penalty cost, each operation in this paper has its own earliness and tardiness penalties, which are paid if the operation is completed before or after its due date. Our hybrid algorithm combines (i) a variable neighbourhood search procedure to explore the huge feasible solution spaces efficiently by alternating the swap and insertion neighbourhood structures and (ii) a mathematical programming model to optimize the completion times of the operations for a given solution in each iteration. Additionally, a threshold accepting mechanism is proposed to diversify the local search of the variable neighbourhood search. Computational results on the 72 benchmark instances show that our algorithm can obtain the best known solution for 40 problems, and the best known solutions for 33 problems are updated.

  2. VARIABILITY OF AMYLOSE AND AMYLOPECTIN IN WINTER WHEAT AND SELECTION FOR SPECIAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Nikolina Weg Krstičević

    2015-06-01

    Full Text Available The aim of this study was to investigate the variability of amylose and amylopectin in 24 Croatian and six foreign winter wheat varieties and to assess the potential of these varieties for special purposes. Starch composition analysis was based on the separation of amylose and amylopectin and the determination of their amounts and ratios. Analysis of the amounts of amylose and amylopectin revealed statistically highly significant differences between the varieties. The tested varieties are mostly bread wheats of different quality with the usual content of amylose and amylopectin. Among them, some varieties with high amylopectin and low amylose content, and one variety with high amylose content, were identified. These have potential for future breeding programs and selection for special purposes.

  3. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  4. A search for time variability and its possible regularities in linear polarization of Be stars

    International Nuclear Information System (INIS)

    Huang, L.; Guo, Z.H.; Hsu, J.C.; Huang, L.

    1989-01-01

    Linear polarization measurements are presented for 14 Be stars obtained at McDonald Observatory during four observing runs from June to November of 1983. Methods of observation and data reduction are described. Seven of the eight program stars that were observed on six or more nights exhibited obvious polarimetric variations on time-scales of days or months. The incidence is estimated as 50% and may be as high as 93%. No connection can be found between polarimetric variability and rapid periodic light or spectroscopic variability for our stars. Ultra-rapid variability on time-scales of minutes was searched for, with negative results. In all cases the position angles also show variations, indicating that the axis of symmetry of the circumstellar envelope changes its orientation in space. For the Be binary CX Dra the variations in polarization seem to have a period which is just half of the orbital period.

  5. Perfectionistic Cognitions: Stability, Variability, and Changes Over Time.

    Science.gov (United States)

    Prestele, Elisabeth; Altstötter-Gleich, Christine

    2018-02-01

    The construct of perfectionistic cognitions is defined as a state-like construct resulting from a perfectionistic self-schema and activated by specific situational demands. Only a few studies have investigated whether and how perfectionistic cognitions change across different situations and whether they reflect stable between-person differences or also within-person variations over time. We conducted 2 studies to investigate the variability and stability of 3 dimensions of perfectionistic cognitions while situational demands changed (Study 1) and on a daily level during a highly demanding period of time (Study 2). The results of both studies revealed that stable between-person differences accounted for the largest proportion of variance in the dimensions of perfectionistic cognitions and that these differences were validly associated with between-person differences in affect. The frequency of perfectionistic cognitions increased during students' first semester at university, and these average within-person changes were different for the 3 dimensions of perfectionistic cognitions (Study 1). In addition, there were between-person differences in the within-person changes that were validly associated with concurrent changes in closely related constructs (unpleasant mood and tense arousal). Within-person variations in perfectionistic cognitions were also validly associated with variations in unpleasant mood and tense arousal from day to day (Study 2).

  6. Habitat Heterogeneity Variably Influences Habitat Selection by Wild Herbivores in a Semi-Arid Tropical Savanna Ecosystem.

    Directory of Open Access Journals (Sweden)

    Victor K Muposhi

    Full Text Available An understanding of the habitat selection patterns of wild herbivores is critical for adaptive management, particularly towards ecosystem management and wildlife conservation in semi-arid savanna ecosystems. We tested the following predictions: (i) surface water availability, habitat quality and human presence have a strong influence on the spatial distribution of wild herbivores in the dry season, (ii) habitat suitability for large herbivores would be higher compared to medium-sized herbivores in the dry season, and (iii) the spatial extent of suitable habitats for wild herbivores would differ between years, i.e., 2006 and 2010, in Matetsi Safari Area, Zimbabwe. MaxEnt modeling was done to determine the habitat suitability of large herbivores and medium-sized herbivores. MaxEnt modeling of habitat suitability for large herbivores using the environmental variables was successful for the selected species in 2006 and 2010, except for elephant (Loxodonta africana) for the year 2010. Overall, the probability of occurrence of large herbivores was mostly influenced by distance from rivers. Distance from roads explained much of the variability in the probability of occurrence of medium-sized herbivores. The overall predicted area for large and medium-sized herbivores was not different. Large herbivores may not necessarily utilize larger habitat patches than medium-sized herbivores, due to the habitat-homogenizing effect of water provisioning. The effect of surface water availability, proximity to riverine ecosystems and roads on habitat suitability of large and medium-sized herbivores in the dry season was highly variable and could thus change from one year to another. We recommend adaptive management initiatives aimed at ensuring dynamic water supply in protected areas through temporal closure and/or opening of water points to promote heterogeneity of wildlife habitats.

  7. Space-Time Joint Interference Cancellation Using Fuzzy-Inference-Based Adaptive Filtering Techniques in Frequency-Selective Multipath Channels

    Science.gov (United States)

    Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng

    2006-12-01

    An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and squared-error variation into a forgetting factor. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size. This receiver is capable of providing both fast convergence/tracking capability as well as small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB in bit-error-rate (BER) performance for multipath fading channels.
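
    As a rough, hedged illustration of the variable step-size idea (a simple error-magnitude-controlled step size standing in for the paper's fuzzy-inference mapping), the sketch below adapts an LMS filter to identify an unknown channel; the signal model and all constants are illustrative.

```python
# Variable step-size LMS sketch: the step size grows with the recent squared
# error (fast tracking) and shrinks when the error is small (low misadjustment).
# This is a crude stand-in for the fuzzy-inference control, not the paper's rules.
import numpy as np

def variable_step_lms(x, d, n_taps=8, mu_min=0.002, mu_max=0.05, alpha=0.9):
    w = np.zeros(n_taps)
    e_smooth = 0.0
    errors = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1: n + 1][::-1]     # x[n], x[n-1], ..., x[n-n_taps+1]
        e = d[n] - w @ u                       # a-priori estimation error
        e_smooth = alpha * e_smooth + (1 - alpha) * e ** 2
        mu = mu_min + (mu_max - mu_min) * (1 - np.exp(-5.0 * e_smooth))
        w += mu * e * u
        errors[n] = e
    return w, errors

rng = np.random.default_rng(8)
x = rng.standard_normal(5000)
h = np.array([0.6, -0.3, 0.2, 0.1, 0.0, 0.05, 0.0, 0.0])   # unknown channel
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = variable_step_lms(x, d)
print(np.round(w, 2))   # should approach h
```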

  8. Embryo selection: the role of time-lapse monitoring.

    Science.gov (United States)

    Kovacs, Peter

    2014-12-15

    In vitro fertilization has been available for over 3 decades. Its use is becoming more widespread worldwide, and in the developed world, up to 5% of children have been born following IVF. It is estimated that over 5 million children have been conceived in vitro. In addition to giving hope to infertile couples to have their own family, in vitro fertilization has also introduced risks as well. The risk of multiple gestation and the associated maternal and neonatal morbidity/mortality has increased significantly over the past few decades. While stricter transfer policies have eliminated the majority of the high-order multiples, these changes have not yet had much of an impact on the incidence of twins. A twin pregnancy can be avoided by the transfer of a single embryo only. However, the traditionally used method of morphologic embryo selection is not predictive enough to allow routine single embryo transfer; therefore, new screening tools are needed. Time-lapse embryo monitoring allows continuous, non-invasive embryo observation without the need to remove the embryo from optimal culturing conditions. The extra information on the cleavage pattern, morphologic changes and embryo development dynamics could help us identify embryos with a higher implantation potential. These technologic improvements enable us to objectively select the embryo(s) for transfer based on certain algorithms. In the past 5-6 years, numerous studies have been published that confirmed the safety of time-lapse technology. In addition, various markers have already been identified that are associated with the minimal likelihood of implantation and others that are predictive of blastocyst development, implantation potential, genetic health and pregnancy. Various groups have proposed different algorithms for embryo selection based on mostly retrospective data analysis. However, large prospective trials are needed to study the full benefit of these (and potentially new) algorithms before their

  9. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    OpenAIRE

    Liu, Peng; Wang, Xiaoli

    2017-01-01

    A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due ...

  10. Discrete Analysis of Portfolio Selection with Optimal Stopping Time

    Directory of Open Access Journals (Sweden)

    Jianfeng Liang

    2009-01-01

    Full Text Available Most investments in practice are carried out without a certain horizon. There are many factors that can drive an investment to a stop. In this paper, we consider a portfolio selection policy with a market-related stopping time. In particular, we assume that the investor exits the market once his wealth reaches a given investment target or falls below a bankruptcy threshold. Our objective is to minimize the expected time until the investment target is reached, while guaranteeing that the probability of bankruptcy is no larger than a given level. We formulate the problem as a mixed-integer linear programming model and analyze the model using a numerical example.

  11. Investigation of a rotary valving system with variable valve timing for internal combustion engines

    Science.gov (United States)

    Cross, Paul C.; Hansen, Craig N.

    1994-11-01

    The objective of the program was to provide a functional demonstration of the Hansen Rotary Valving System with Variable Valve Timing (HRVS/VVT), capable of throttleless inlet charge control, as an alternative to conventional poppet-valves for use in spark ignited internal combustion engines. The goal of this new technology is to secure benefits in fuel economy, broadened torque band, vibration reduction, and overhaul accessibility. Additionally, use of the variable valve timing capability to vary the effective compression ratio is expected to improve multifuel tolerance and efficiency. Efforts directed at the design of HRVS components proved to be far more extensive than had been anticipated, ultimately requiring that proof-trial design/development work be performed. Although both time and funds were exhausted before optical or ion-probe types of in-cylinder investigation could be undertaken, a great deal of laboratory data was acquired during the course of the design/development work. This laboratory data is the basis for the information presented in this final report.

  12. A selective review of the first 20 years of instrumental variables models in health-services research and medicine.

    Science.gov (United States)

    Cawley, John

    2015-01-01

    The method of instrumental variables (IV) is useful for estimating causal effects. Intuitively, it exploits exogenous variation in the treatment, sometimes called natural experiments or instruments. This study reviews the literature in health-services research and medical research that applies the method of instrumental variables, documents trends in its use, and offers examples of various types of instruments. A literature search of the PubMed and EconLit research databases for English-language journal articles published after 1990 yielded a total of 522 original research articles. Citations counts for each article were derived from the Web of Science. A selective review was conducted, with articles prioritized based on number of citations, validity and power of the instrument, and type of instrument. The average annual number of papers in health services research and medical research that apply the method of instrumental variables rose from 1.2 in 1991-1995 to 41.8 in 2006-2010. Commonly-used instruments (natural experiments) in health and medicine are relative distance to a medical care provider offering the treatment and the medical care provider's historic tendency to administer the treatment. Less common but still noteworthy instruments include randomization of treatment for reasons other than research, randomized encouragement to undertake the treatment, day of week of admission as an instrument for waiting time for surgery, and genes as an instrument for whether the respondent has a heritable condition. The use of the method of IV has increased dramatically in the past 20 years, and a wide range of instruments have been used. Applications of the method of IV have in several cases upended conventional wisdom that was based on correlations and led to important insights about health and healthcare. Future research should pursue new applications of existing instruments and search for new instruments that are powerful and valid.
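
    As a hedged illustration of the core idea behind instrumental variables, the sketch below implements two-stage least squares (2SLS) by hand on simulated data, using distance to a provider as the instrument for a confounded treatment; the variable names and the data-generating process are illustrative assumptions, not taken from any study in the review.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS estimate of beta in y = X @ beta + e, with instrument matrix Z
    (Z must contain the exogenous columns of X, here just the constant)."""
    Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)     # first stage: fitted X
    return np.linalg.solve(Xhat.T @ X, Xhat.T @ y)   # second stage

# Simulated example: distance to a provider instruments a confounded treatment.
rng = np.random.default_rng(0)
n = 5000
u = rng.normal(size=n)                               # unobserved confounder
dist = rng.exponential(10.0, size=n)                 # instrument
treat = (1.5 - 0.1 * dist + u + rng.normal(size=n) > 0).astype(float)
y = 2.0 * treat + u + rng.normal(size=n)             # true treatment effect = 2
X = np.column_stack([np.ones(n), treat])
Z = np.column_stack([np.ones(n), dist])
print("naive OLS slope:", np.linalg.lstsq(X, y, rcond=None)[0][1])  # biased upward
print("2SLS slope:     ", two_stage_least_squares(y, X, Z)[1])      # close to 2
```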

  13. Attempts to determine the time-variable radon entry rate and air exchange rate from the time course of indoor radon concentration

    International Nuclear Information System (INIS)

    Thomas, J.

    1996-01-01

    For the study and explanation of the diurnal variability of the indoor radon concentration a(t) [Bq/m^3], which is proportional to the ratio of the radon entry rate A [Bq/h] to the air exchange rate k [1/h], it would be advantageous to know the diurnal variability of the two determining quantities A(t) and k(t) separately. Direct, continuous measurement of the radon entry rate A(t) is possible only in special studies (mostly in experimental rooms), and continuous measurement of the air exchange rate k(t) is likewise possible only in special studies and only for a short time. Continuously recording radon monitors, however, are now common and do not disturb occupants in their normal living regime during day and night. The goal of this endeavour is therefore to evaluate the time courses of both determining quantities from the time course of the indoor radon concentration directly, without additional experimental work, and so make better use of such measurements. (author)
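
    A minimal sketch of the balance that links the three quantities, assuming the standard single-compartment (well-mixed room) model with room volume V; the abstract does not state this model explicitly, and recovering both A(t) and k(t) from a(t) alone remains under-determined without additional assumptions (e.g., quasi-steady intervals):

```latex
% Standard single-compartment indoor radon balance (well-mixed room of volume V):
\frac{\mathrm{d}a(t)}{\mathrm{d}t} \;=\; \frac{A(t)}{V} \;-\; k(t)\,a(t),
\qquad\text{steady state: } a \;=\; \frac{A}{k\,V}.
```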

  14. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    Directory of Open Access Journals (Sweden)

    C. Fernandez-Lozano

    2013-01-01

    Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). A hybrid model that combines genetic algorithms and support vector machines is therefore suggested, in such a way that, by using SVM performance as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.
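
    A minimal sketch of the GA-wrapper idea described above, assuming binary chromosomes that encode variable subsets and cross-validated SVM accuracy as the fitness; the function names, population size and genetic operators are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(mask, X, y):
    """Cross-validated SVM accuracy on the variables flagged by the binary mask."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf", C=1.0), X[:, mask.astype(bool)], y, cv=5).mean()

def ga_select(X, y, pop_size=30, generations=20, p_mut=0.05, seed=0):
    """Tiny GA wrapper: binary chromosomes encode variable subsets,
    SVM cross-validation accuracy acts as the fitness function."""
    rng = np.random.default_rng(seed)
    n_var = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_var))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_var)                           # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_var) < p_mut                       # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()].astype(bool)

# Toy usage on synthetic data with 5 informative variables out of 30.
X, y = make_classification(n_samples=200, n_features=30, n_informative=5, random_state=0)
print(np.flatnonzero(ga_select(X, y)))
```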

  15. Test-retest reliability of jump execution variables using mechanography: a comparison of jump protocols.

    Science.gov (United States)

    Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N

    2018-05-01

    Mechanography during the vertical jump may enhance screening and help determine mechanistic causes underlying changes in physical performance. Utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump with squat depth self-selected and controlled using a goniometer to 80° knee flexion. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.
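
    A small sketch of how the three reported reliability quantities could be computed for one force-time variable measured in two sessions; the ICC form (two-way random effects, absolute agreement, single measurement) and the within-subject CV definition are assumptions, since the abstract does not spell out the exact formulas used.

```python
import numpy as np

def test_retest_summary(session1, session2):
    """Test-retest summary for one variable measured twice per participant:
    Cohen's d between sessions (systematic error), mean within-subject CV in %
    (random error), and ICC(2,1) (test-retest correlation)."""
    X = np.column_stack([session1, session2])        # n participants x k sessions
    n, k = X.shape
    # Systematic error: standardized mean difference between the two sessions.
    diff = X[:, 1] - X[:, 0]
    pooled_sd = np.sqrt((X[:, 0].var(ddof=1) + X[:, 1].var(ddof=1)) / 2)
    cohens_d = diff.mean() / pooled_sd
    # Random error: mean within-subject coefficient of variation (percent).
    cv = 100 * (X.std(axis=1, ddof=1) / X.mean(axis=1)).mean()
    # ICC(2,1): two-way random effects, absolute agreement, single measurement.
    grand = X.mean()
    ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((X - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    icc21 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    return cohens_d, cv, icc21

# Toy example: jump heights (cm) from two sessions for six participants.
s1 = np.array([30.1, 28.4, 35.2, 27.9, 33.0, 29.5])
s2 = np.array([30.8, 27.9, 34.6, 28.5, 33.4, 30.1])
print(test_retest_summary(s1, s2))
```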

  16. On market timing and portfolio selectivity: modifying the Henriksson-Merton model

    OpenAIRE

    Goś, Krzysztof

    2011-01-01

    This paper evaluates selected functionalities of the parametric Henriksson-Merton test, a tool designed for measuring market timing and portfolio selectivity capabilities. It also provides a solution to two significant disadvantages of the model: its relatively indirect interpretation and its vulnerability to parameter insignificance. The model has been put to the test on a group of Polish mutual funds over a period of 63 months (January 2004 – March 2009), providing unsatisfa...

  17. Concurrent variable-interval variable-ratio schedules in a dynamic choice environment.

    Science.gov (United States)

    Bell, Matthew C; Baum, William M

    2017-11-01

    Most studies of operant choice have focused on presenting subjects with a fixed pair of schedules across many experimental sessions. Using these methods, studies of concurrent variable-interval variable-ratio schedules helped to evaluate theories of choice. More recently, a growing literature has focused on dynamic choice behavior. Those dynamic choice studies have analyzed behavior on a number of different time scales using concurrent variable-interval schedules. Following the dynamic choice approach, the present experiment examined performance on concurrent variable-interval variable-ratio schedules in a rapidly changing environment. Our objectives were to compare performance on concurrent variable-interval variable-ratio schedules with extant data on concurrent variable-interval variable-interval schedules using a dynamic choice procedure and to extend earlier work on concurrent variable-interval variable-ratio schedules. We analyzed performances at different time scales, finding strong similarities between concurrent variable-interval variable-interval and concurrent variable-interval variable-ratio performance within dynamic choice procedures. Time-based measures revealed almost identical performance in the two procedures compared with response-based measures, supporting the view that choice is best understood as time allocation. Performance at the smaller time scale of visits accorded with the tendency seen in earlier research toward developing a pattern of strong preference for and long visits to the richer alternative paired with brief "samples" at the leaner alternative ("fix and sample"). © 2017 Society for the Experimental Analysis of Behavior.

  18. Neuronal Intra-Individual Variability Masks Response Selection Differences between ADHD Subtypes—A Need to Change Perspectives

    Directory of Open Access Journals (Sweden)

    Annet Bluschke

    2017-06-01

    Due to the high intra-individual variability in attention deficit/hyperactivity disorder (ADHD), there may be considerable bias in knowledge about altered neurophysiological processes underlying executive dysfunctions in patients with different ADHD subtypes. When aiming to establish dimensional cognitive-neurophysiological constructs representing symptoms of ADHD, as suggested by the Research Domain Criteria initiative, it is crucial to consider such processes independent of variability. We examined patients with the predominantly inattentive subtype (attention deficit disorder, ADD) and the combined subtype of ADHD (ADHD-C) in a flanker task measuring conflict control. Groups were matched for task performance. Besides using classic event-related potential (ERP) techniques and source localization, the neurophysiological data were also analyzed using residue iteration decomposition (RIDE) to statistically account for intra-individual variability, with S-LORETA used to estimate the sources of the activations. The analysis of classic ERPs related to conflict monitoring revealed no differences between patients with ADD and ADHD-C. When individual variability was accounted for, clear differences became apparent in the RIDE C-cluster (analogous to the P3 ERP component). While patients with ADD distinguished between compatible and incompatible flanker trials early on, patients with ADHD-C seemed to employ more cognitive resources overall. These differences are reflected in inferior parietal areas. The study demonstrates differences in neuronal mechanisms related to response selection processes between ADD and ADHD-C which, according to source localization, arise from the inferior parietal cortex. Importantly, these differences could only be detected when accounting for intra-individual variability. The results imply that differences in neurophysiological processes between ADHD subtypes are very likely underestimated and have gone unrecognized because intra-individual variability has not been taken into account.

  19. Time-variable gravity potential components for optical clock comparisons and the definition of international time scales

    International Nuclear Information System (INIS)

    Voigt, C.; Denker, H.; Timmen, L.

    2016-01-01

    The latest generation of optical atomic clocks is approaching the level of one part in 10^18 in terms of frequency stability and uncertainty. For clock comparisons and the definition of international time scales, a relativistic redshift effect of the clock frequencies has to be taken into account at a corresponding uncertainty level of about 0.1 m^2 s^-2 and 0.01 m in terms of gravity potential and height, respectively. Besides the predominant static part of the gravity potential, temporal variations must be considered in order to avoid systematic frequency shifts. Time-variable gravity potential components induced by tides and non-tidal mass redistributions are investigated with regard to the level of one part in 10^18. The magnitudes and dominant time periods of the individual gravity potential contributions are investigated globally and for specific laboratory sites together with the related uncertainty estimates. The basics of the computation methods are presented along with the applied models, data sets and software. Solid Earth tides contribute by far the most dominant signal with a global maximum amplitude of 4.2 m^2 s^-2 for the potential and a range (maximum-to-minimum) of up to 1.3 and 10.0 m^2 s^-2 in terms of potential differences between specific laboratories over continental and intercontinental scales, respectively. Amplitudes of the ocean tidal loading potential can amount up to 1.25 m^2 s^-2, while the range of the potential between specific laboratories is 0.3 and 1.1 m^2 s^-2 over continental and intercontinental scales, respectively. These are the only two contributors being relevant at a 10^-17 level. However, several other time-variable potential effects can particularly affect clock comparisons at the 10^-18 level. Besides solid Earth pole tides, these are non-tidal mass redistributions in the atmosphere, the oceans and the continental water storage. (authors)
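
    The uncertainty figures quoted above follow from the first-order relation between a gravity potential difference and the fractional frequency shift seen in a clock comparison; a short worked check (sign conventions differ between references):

```latex
\frac{\Delta f}{f} \;\simeq\; \frac{\Delta W}{c^{2}},
\qquad
\frac{0.1\ \mathrm{m^{2}\,s^{-2}}}{\left(2.998\times 10^{8}\ \mathrm{m\,s^{-1}}\right)^{2}}
\;\approx\; 1.1\times 10^{-18}.
```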

  20. Protecting chips against hold time violations due to variability

    CERN Document Server

    Neuberger, Gustavo; Reis, Ricardo

    2013-01-01

    With the development of Very-Deep Sub-Micron technologies, process variability is becoming increasingly important and is a key issue in the design of complex circuits. Process variability is the statistical variation of process parameters, meaning that these parameters do not always have the same value but become random variables, with a given mean value and standard deviation. This effect can lead to several issues in digital circuit design. The logical consequence of this parameter variation is that circuit characteristics, such as delay and power, also become random variables. Because ...

  1. Adhesive bonding using variable frequency microwave energy

    Science.gov (United States)

    Lauf, Robert J.; McMillan, April D.; Paulauskas, Felix L.; Fathi, Zakaryae; Wei, Jianghua

    1998-01-01

    Methods of facilitating the adhesive bonding of various components with variable frequency microwave energy are disclosed. The time required to cure a polymeric adhesive is decreased by placing components to be bonded via the adhesive in a microwave heating apparatus having a multimode cavity and irradiated with microwaves of varying frequencies. Methods of uniformly heating various articles having conductive fibers disposed therein are provided. Microwave energy may be selectively oriented to enter an edge portion of an article having conductive fibers therein. An edge portion of an article having conductive fibers therein may be selectively shielded from microwave energy.

  2. Familial versus mass selection in small populations

    Directory of Open Access Journals (Sweden)

    Couvet Denis

    2003-07-01

    We used diffusion approximations and a Markov-chain approach to investigate the consequences of familial selection on the viability of small populations both in the short and in the long term. The outcome of familial selection was compared to the case of a random mating population under mass selection. In small populations, the higher effective size associated with familial selection resulted in higher fitness for slightly deleterious and/or highly recessive alleles. Conversely, because familial selection leads to a lower rate of directional selection, a lower fitness was observed for more detrimental genes that are not highly recessive, and with high population sizes. However, in the long term, the genetic load was almost identical under mass and familial selection for populations of up to 200 individuals. In terms of mean time to extinction, familial selection did not have any negative effect, at least for small populations (N ≤ 50). Overall, familial selection could be proposed for use in management programs of small populations since it increases genetic variability and short-term viability without impairing overall persistence times.

  3. Subset Selection by Local Convex Approximation

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Sadegh, Payman; Madsen, Henrik

    1999-01-01

    This paper concerns selection of the optimal subset of variables in a linear regression setting. The posed problem is combinatorial and the globally best subset can only be found in exponential time. We define a cost function for the subset selection problem by adding a penalty term to the usual … of the subset selection problem so as to guarantee positive definiteness of the Hessian term, hence avoiding numerical instability. The backward elimination type algorithm attempts to improve the results upon termination of the modified Newton-Raphson search by using the current solution as an initial guess...

  4. Antipersistent dynamics in short time scale variability of self-potential signals

    OpenAIRE

    Cuomo, V.; Lanfredi, M.; Lapenna, V.; Macchiato, M.; Ragosta, M.; Telesca, L.

    2000-01-01

    Time scale properties of self-potential signals are investigated through the analysis of the second order structure function (variogram), a powerful tool to investigate the spatial and temporal variability of observational data. In this work we analyse two sequences of self-potential values measured by means of a geophysical monitoring array located in a seismically active area of Southern Italy. The range of scales investigated goes from a few minutes to several days. It is shown that signal...

  5. THE CHANDRA VARIABLE GUIDE STAR CATALOG

    International Nuclear Information System (INIS)

    Nichols, Joy S.; Lauer, Jennifer L.; Morgan, Douglas L.; Sundheim, Beth A.; Henden, Arne A.; Huenemoerder, David P.; Martin, Eric

    2010-01-01

    Variable stars have been identified among the optical-wavelength light curves of guide stars used for pointing control of the Chandra X-ray Observatory. We present a catalog of these variable stars along with their light curves and ancillary data. Variability was detected to a lower limit of 0.02 mag amplitude in the 4000-10000 Å range using the photometrically stable Aspect Camera on board the Chandra spacecraft. The Chandra Variable Guide Star Catalog (VGUIDE) contains 827 stars, of which 586 are classified as definitely variable and 241 are identified as possibly variable. Of the 586 definite variable stars, we believe 319 are new variable star identifications. Types of variables in the catalog include eclipsing binaries, pulsating stars, and rotating stars. The variability was detected during the course of normal verification of each Chandra pointing and results from analysis of over 75,000 guide star light curves from the Chandra mission. The VGUIDE catalog represents data from only about 9 years of the Chandra mission. Future releases of VGUIDE will include newly identified variable guide stars as the mission proceeds. An important advantage of the use of space data to identify and analyze variable stars is the relatively long observations that are available. The Chandra orbit allows for observations up to 2 days in length. Also, guide stars were often used multiple times for Chandra observations, so many of the stars in the VGUIDE catalog have multiple light curves available from various times in the mission. The catalog is presented as both online data associated with this paper and as a public Web interface. Light curves with data at the instrumental time resolution of about 2 s, overplotted with the data binned at 1 ks, can be viewed on the public Web interface and downloaded for further analysis. VGUIDE is a unique project using data collected during the mission that would otherwise be ignored. The stars available for use as Chandra guide stars are …

  6. Fully automatic time-window selection using machine learning for global adjoint tomography

    Science.gov (United States)

    Chen, Y.; Hill, J.; Lei, W.; Lefebvre, M. P.; Bozdag, E.; Komatitsch, D.; Tromp, J.

    2017-12-01

    Selecting time windows from seismograms such that the synthetic measurements (from simulations) and measured observations are sufficiently close is indispensable in a global adjoint tomography framework. The increasing amount of seismic data collected everyday around the world demands "intelligent" algorithms for seismic window selection. While the traditional FLEXWIN algorithm can be "automatic" to some extent, it still requires both human input and human knowledge or experience, and thus is not deemed to be fully automatic. The goal of intelligent window selection is to automatically select windows based on a learnt engine that is built upon a huge number of existing windows generated through the adjoint tomography project. We have formulated the automatic window selection problem as a classification problem. All possible misfit calculation windows are classified as either usable or unusable. Given a large number of windows with a known selection mode (select or not select), we train a neural network to predict the selection mode of an arbitrary input window. Currently, the five features we extract from the windows are its cross-correlation value, cross-correlation time lag, amplitude ratio between observed and synthetic data, window length, and minimum STA/LTA value. More features can be included in the future. We use these features to characterize each window for training a multilayer perceptron neural network (MPNN). Training the MPNN is equivalent to solving a non-linear optimization problem. We use backward propagation to derive the gradient of the loss function with respect to the weighting matrices and bias vectors and use the mini-batch stochastic gradient method to iteratively optimize the MPNN. Numerical tests show that with a careful selection of the training data and a sufficient amount of training data, we are able to train a robust neural network that is capable of detecting the waveforms in arbitrary earthquake data with negligible detection error.
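
    A hedged sketch of the classification setup described above, using scikit-learn's multilayer perceptron trained with mini-batch stochastic gradient descent on the five window features; the synthetic data, labelling rule, network size and hyper-parameters are illustrative assumptions rather than the project's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Five features per candidate window (order assumed for illustration):
# cross-correlation value, cross-correlation time lag, amplitude ratio,
# window length, minimum STA/LTA.  Labels: 1 = usable window, 0 = unusable.
rng = np.random.default_rng(0)
n = 20000
X = np.column_stack([
    rng.uniform(0, 1, n),        # cross-correlation value
    rng.normal(0, 5, n),         # cross-correlation time lag (s)
    rng.lognormal(0, 0.5, n),    # observed/synthetic amplitude ratio
    rng.uniform(20, 200, n),     # window length (s)
    rng.uniform(1, 10, n),       # minimum STA/LTA
])
# Synthetic labelling rule standing in for the human-picked training windows.
y = ((X[:, 0] > 0.7) & (np.abs(X[:, 1]) < 5) & (np.abs(np.log(X[:, 2])) < 0.7)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16),
                  solver="sgd",            # mini-batch stochastic gradient descent
                  batch_size=256,
                  learning_rate_init=0.01,
                  max_iter=500,
                  random_state=0),
)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```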

  7. Test-retest reliability of stride time variability while dual tasking in healthy and demented adults with frontotemporal degeneration

    Directory of Open Access Journals (Sweden)

    Herrmann Francois R

    2011-07-01

    Background: Although test-retest reliability of mean values of spatio-temporal gait parameters has been assessed while walking alone (i.e., single tasking), little is known about the test-retest reliability of stride time variability (STV) while performing an attention-demanding task (i.e., dual tasking). The objective of this study was to examine immediate test-retest reliability of STV while single and dual tasking in cognitively healthy older individuals (CHI) and in demented patients with frontotemporal degeneration (FTD). Methods: Based on a cross-sectional design, 69 community-dwelling CHI (mean age 75.5 ± 4.3 years; 43.5% women) and 14 demented patients with FTD (mean age 65.7 ± 9.8 years; 6.7% women) walked alone (without performing an additional task, i.e., single tasking) and while counting backward (CB) aloud starting from 50 (i.e., dual tasking). Each subject completed two trials for all the testing conditions. The mean value and the coefficient of variation (CoV) of stride time while walking alone and while CB at self-selected walking speed were measured using GAITRite® and SMTEC® footswitch systems. Results: ICCs of the mean value in CHI under both walking conditions were higher than those of demented patients with FTD and indicated perfect reliability (ICC > 0.80). Reliability of the mean value was better while single tasking than dual tasking in CHI (ICC = 0.96 under single-task and ICC = 0.86 under dual-task), whereas it was the opposite in demented patients (ICC = 0.65 under single-task and ICC = 0.81 under dual-task). ICC of CoV was slight to poor whatever the group of participants and the walking condition (ICC … ). Conclusions: The immediate test-retest reliability of the mean value of stride time in single and dual tasking was good in older CHI as well as in demented patients with FTD. In contrast, the test-retest reliability of stride time variability was low in both groups of participants.

  8. A flow system for generation of concentration perturbation in two-dimensional correlation near-infrared spectroscopy: application to variable selection in multivariate calibration.

    Science.gov (United States)

    Pereira, Claudete Fernandes; Pasquini, Celio

    2010-05-01

    A flow system is proposed to produce a concentration perturbation in liquid samples, aiming at the generation of two-dimensional correlation near-infrared spectra. The system presents advantages in relation to batch systems employed for the same purpose: the experiments are accomplished in a closed system; application of perturbation is rapid and easy; and the experiments can be carried out with micro-scale volumes. The perturbation system has been evaluated in the investigation and selection of relevant variables for multivariate calibration models for the determination of quality parameters of gasoline, including ethanol content, MON (motor octane number), and RON (research octane number). The main advantage of this variable selection approach is the direct association between spectral features and chemical composition, allowing easy interpretation of the regression models.

  9. Assessment of acute pesticide toxicity with selected biochemical variables in suicide attempting subjects

    International Nuclear Information System (INIS)

    Soomro, A.M.; Seehar, G.M.; Bhanger, M.I.

    2003-01-01

    Pesticide-induced changes were assessed in thirty-two subjects of attempted suicide cases. Farmers and their family members were recorded as the most frequent suicide attempters. The values of seven biochemical variables obtained from the hospitalized subjects (average age 29 years) were compared to those of an equal number of age-matched normal volunteers. The results revealed major differences in the mean values of the selected parameters. The calculated mean differences, namely alkaline phosphatase (178.7 mu/l), bilirubin (7.5 mg/dl), GPT (59.2 mu/l) and glucose (38.6 mg/dl), were higher than in the controls, indicating pesticide-induced hepatotoxicity in the suicide-attempting individuals. Increases in serum creatinine and urea indicated renal malfunction that could be linked with pesticide-induced nephrotoxicity among them. (author)

  10. Gait variability and basal ganglia disorders: stride-to-stride variations of gait cycle timing in Parkinson's disease and Huntington's disease

    Science.gov (United States)

    Hausdorff, J. M.; Cudkowicz, M. E.; Firtion, R.; Wei, J. Y.; Goldberger, A. L.

    1998-01-01

    The basal ganglia are thought to play an important role in regulating motor programs involved in gait and in the fluidity and sequencing of movement. We postulated that the ability to maintain a steady gait, with low stride-to-stride variability of gait cycle timing and its subphases, would be diminished with both Parkinson's disease (PD) and Huntington's disease (HD). To test this hypothesis, we obtained quantitative measures of stride-to-stride variability of gait cycle timing in subjects with PD (n = 15), HD (n = 20), and disease-free controls (n = 16). All measures of gait variability were significantly increased in PD and HD. In subjects with PD and HD, gait variability measures were two and three times that observed in control subjects, respectively. The degree of gait variability correlated with disease severity. In contrast, gait speed was significantly lower in PD, but not in HD, and average gait cycle duration and the time spent in many subphases of the gait cycle were similar in control subjects, HD subjects, and PD subjects. These findings are consistent with a differential control of gait variability, speed, and average gait cycle timing that may have implications for understanding the role of the basal ganglia in locomotor control and for quantitatively assessing gait in clinical settings.

  11. Using StorAge Selection Functions to Improve Simulation of Groundwater Nitrate Lag Times in the SWAT Modeling Framework.

    Science.gov (United States)

    Wilusz, D. C.; Fuka, D.; Cho, C.; Ball, W. P.; Easton, Z. M.; Harman, C. J.

    2017-12-01

    Intensive agriculture and atmospheric deposition have dramatically increased the input of reactive nitrogen into many watersheds worldwide. Reactive nitrogen can leach as nitrate into groundwater, which is stored and eventually released over years to decades into surface waters, potentially degrading water quality. To simulate the fate and transport of groundwater nitrate, many researchers and practitioners use the Soil and Water Assessment Tool (SWAT) or an enhanced version of SWAT that accounts for topographically-driven variable source areas (TopoSWAT). Both SWAT and TopoSWAT effectively assume that nitrate in the groundwater reservoir is well-mixed, which is known to be a poor assumption at many sites. In this study, we describe modifications to TopoSWAT that (1) relax the assumption of groundwater well-mixedness, (2) more flexibly parameterize groundwater transport as a time-varying distribution of travel times using the recently developed theory of rank StorAge Selection (rSAS) functions, and (3) allow for groundwater age to be represented by position on the hillslope or hydrological distance from the stream. The approach conceptualizes the groundwater aquifer as a population of water parcels entering as recharge with a particular nitrate concentration, aging as they move through storage, and eventually exiting as baseflow. The rSAS function selects the distribution of parcel ages that exit as baseflow based on a parameterized probability distribution; this distribution can be adjusted to preferentially select different distributions of young and old parcels in storage so as to reproduce (in principle) any form of transport. The modified TopoSWAT model (TopoSWAT+rSAS) is tested at a small agricultural catchment in the Eastern Shore, MD with an extensive hydrologic and hydrochemical data record for calibration and evaluation. The results examine (1) the sensitivity of TopoSWAT+rSAS modeling of nitrate transport to assumptions about the distribution of travel times …

  12. Time-adjusted variable resistor

    Science.gov (United States)

    Heyser, R. C.

    1972-01-01

    A timing mechanism was developed that turns an extremely precise, high-resistance fixed resistor into an effectively variable resistor. Switches shunt all or a portion of the resistor; the effective resistance is varied over a time interval by adjusting the switch closure rate.

  13. Impact of perennial energy crops income variability on the crop selection of risk averse farmers

    International Nuclear Information System (INIS)

    Alexander, Peter; Moran, Dominic

    2013-01-01

    UK Government policy is for the area of perennial energy crops in the UK to expand significantly. For this to be achievable, farmers need to choose these crops in preference to conventional rotations. This paper looks at the potential level and variability of perennial energy crop incomes and their relation to incomes from conventional arable crops. Assuming energy crop prices are correlated with oil prices, the results suggest that incomes from them are not well correlated with conventional arable crop incomes. A farm-scale mathematical programming model is then used to understand the effect on the crop selection of risk-averse farmers. The inclusion of risk reduces the energy crop price required for these crops to be selected. However, yields towards the highest of those predicted in the UK are still required to make them an optimal choice, suggesting that only a small area of energy crops would be expected to be grown in the UK. This must be regarded as a tentative conclusion, primarily because of the high sensitivity found to crop yields, which motivates further work applying the model to spatially disaggregated data. - Highlights: ► Energy crop and conventional crop incomes suggested as uncorrelated. ► Diversification effect of energy crops investigated for a risk averse farmer. ► Energy crops indicated as optimal selection only on highest yielding UK sites. ► Large establishment grant rates needed to substantially alter crop selections.

  14. Time-dependent ion selectivity in capacitive charging of porous electrodes

    NARCIS (Netherlands)

    Zhao, R.; Soestbergen, M.; Rijnaarts, H.H.M.; Wal, van der A.F.; Bazant, M.Z.; Biesheuvel, P.M.

    2012-01-01

    In a combined experimental and theoretical study, we show that capacitive charging of porous electrodes in multicomponent electrolytes may lead to the phenomenon of time-dependent ion selectivity of the electrical double layers (EDLs) in the electrodes. This effect is found in experiments on …

  15. Selection of internal control genes for quantitative real-time RT-PCR studies during tomato development process

    Directory of Open Access Journals (Sweden)

    Borges-Pérez Andrés

    2008-12-01

    Background: The elucidation of gene expression patterns leads to a better understanding of biological processes. Real-time quantitative RT-PCR has become the standard method for in-depth studies of gene expression. A biologically meaningful reporting of target mRNA quantities requires accurate and reliable normalization in order to identify real gene-specific variation. The purpose of normalization is to control for several variables such as different amounts and quality of starting material, variable enzymatic efficiencies of retrotranscription from RNA to cDNA, or differences between tissues or cells in overall transcriptional activity. The validity of a housekeeping gene as an endogenous control relies on the stability of its expression level across the sample panel being analysed. In the present report we describe the first systematic evaluation of potential internal controls during the tomato development process to identify which are the most reliable for transcript quantification by real-time RT-PCR. Results: In this study, we assess the expression stability of 7 traditional and 4 novel housekeeping genes in a set of 27 samples representing different tissues and organs of tomato plants at different developmental stages. First, we designed, tested and optimized amplification primers for real-time RT-PCR. Then, expression data from each candidate gene were evaluated with three complementary approaches based on different statistical procedures. Our analysis suggests that the SGN-U314153 (CAC), SGN-U321250 (TIP41), SGN-U346908 ("Expressed") and SGN-U316474 (SAND) genes provide superior transcript normalization in tomato development studies. We recommend different combinations of these exceptionally stable housekeeping genes for suitable normalization of different developmental series, including the complete tomato development process. Conclusion: This work constitutes the first effort to select optimal endogenous controls for quantitative real-time RT-PCR studies of tomato development.

  16. Knowledge acquisition with domain experts on the aspects of use of visual variables in the Space Time Cube

    DEFF Research Database (Denmark)

    Kveladze, Irma; Kraak, Menno-Jan

    2013-01-01

    The Space-Time Cube (STC) is a visual representation developed at the end of the 20th century for understanding the spatio-temporal aspects of humans' everyday life (Hägerstrand, 1970). Since its introduction, it has been widely used in various disciplines (Kraak, 2003; Demšar and Virrantaus …) … compared to other visual representations. However, the usability metrics of cartographic design theory for STC content still remain unexplored. Therefore, this study focused particularly on the evaluation of cartographic design aspects in the STC. The study was conducted in two different … Participants were selected purposefully, based on specific criteria, in order to say something about the topic to be discussed (Nielsen, 1993). Accordingly, the main objective of the focus group interview was to discuss the use of the visual variables based on cartographic design theory (Bertin, 1983) …

  17. Global exponential stability for discrete-time neural networks with variable delays

    International Nuclear Information System (INIS)

    Chen Wuhua; Lu Xiaomei; Liang Dongying

    2006-01-01

    This Letter provides new exponential stability criteria for discrete-time neural networks with variable delays. The main technique is to reduce the exponential convergence estimation of the neural network solution to that of one component of the corresponding solution, by constructing a Lyapunov function based on an M-matrix. By introducing a diagonal matrix of tuning parameters, the delay-independent and delay-dependent exponential stability conditions are unified in the same mathematical formula. The effectiveness of the new results is illustrated by three examples.

  18. Solving the Response Time Variability Problem using tabu search

    OpenAIRE

    Corominas Subias, Albert; García Villoria, Alberto; Pastor Moreno, Rafael

    2009-01-01

    The Response Time Variability Problem (RTVP) is a combinatorial scheduling problem recently introduced in the literature. This combinatorial optimization problem is very easy to formulate but very hard to solve exactly (it is NP-hard). The RTVP arises when products, clients or jobs have to be sequenced so as to minimize the variability of the time instants at which they receive the resources they need. This problem has a large number of applications...
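
    For context, the objective minimized in the RTVP is usually written as the sum of squared deviations of the (cyclic) distances between consecutive copies of each product from that product's ideal distance; the small function below computes that metric under this commonly used definition, which may differ in detail from the formulation used in the paper. A tabu search such as the one studied would then repeatedly apply the best non-tabu swap or insertion move to reduce this objective.

```python
def rtv(sequence):
    """Response time variability of a cyclic sequence (list of product labels):
    RTV = sum over products i and copies k of (t_ik - D/d_i)^2, where D is the
    sequence length, d_i the number of copies of product i, and t_ik the cyclic
    distance between consecutive copies of product i."""
    D = len(sequence)
    positions = {}
    for pos, prod in enumerate(sequence):
        positions.setdefault(prod, []).append(pos)
    total = 0.0
    for pos_list in positions.values():
        d = len(pos_list)
        ideal = D / d
        for k in range(d):
            # cyclic distance from copy k to the next copy of the same product
            gap = D if d == 1 else (pos_list[(k + 1) % d] - pos_list[k]) % D
            total += (gap - ideal) ** 2
    return total

print(rtv(list("ABABAC")))   # toy sequence: A and C contribute 0, B contributes 2
```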

  19. Bounds of Double Integral Dynamic Inequalities in Two Independent Variables on Time Scales

    Directory of Open Access Journals (Sweden)

    S. H. Saker

    2011-01-01

    Our aim in this paper is to establish some explicit bounds of the unknown function in a certain class of nonlinear dynamic inequalities in two independent variables on time scales which are unbounded above. These on the one hand generalize and on the other hand furnish a handy tool for the study of qualitative as well as quantitative properties of solutions of partial dynamic equations on time scales. Some examples are considered to demonstrate the applications of the results.

  20. Primary gamma ray selection in a hybrid timing/imaging Cherenkov array

    Directory of Open Access Journals (Sweden)

    Postnikov E.B.

    2017-01-01

    This work is a methodical study of hybrid reconstruction techniques for hybrid imaging/timing Cherenkov observations. This type of hybrid array is to be realized at the gamma-observatory TAIGA, intended for very high energy gamma-ray astronomy (> 30 TeV). It aims at combining the cost-effective timing-array technique with imaging telescopes. Hybrid operation of both of these techniques can lead to a relatively cheap way of developing a large-area array. The joint approach to gamma event selection was investigated on both types of simulated data: the image parameters from the telescopes, and the shower parameters reconstructed from the timing array. The optimal set of imaging parameters and shower parameters to be combined is identified. The cosmic ray background suppression factor is calculated as a function of distance and energy. The optimal selection technique leads to cosmic ray background suppression of about two orders of magnitude at distances up to 450 m for energies greater than 50 TeV.

  1. Space-time variability of hydrological drought and wetness in Iran using NCEP/NCAR and GPCC datasets

    Directory of Open Access Journals (Sweden)

    T. Raziei

    2010-10-01

    Space-time variability of hydrological drought and wetness over Iran is investigated using the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis and the Global Precipitation Climatology Centre (GPCC) dataset for the common period 1948–2007. The aim is to complement previous studies on the detection of long-term trends in drought/wetness time series and on the applicability of reanalysis data for drought monitoring in Iran. Climate conditions of the area are assessed through the Standardized Precipitation Index (SPI) on a 24-month time scale, while Principal Component Analysis (PCA) and Varimax rotation are used for investigating drought/wetness variability and for drought regionalization, respectively. Singular Spectrum Analysis (SSA) is applied to the time series of interest to extract the leading nonlinear components and compare them with linear fittings.

    Differences in drought and wetness area coverage resulting from the two datasets are discussed, also in relation to the changes that have occurred in recent years. NCEP/NCAR and GPCC are in good agreement in identifying four sub-regions as the principal spatial modes of drought variability. However, the climate variability in each area is not univocally represented by the two datasets: a good agreement is found for the south-eastern and north-western regions, while noticeable discrepancies occur for the central and Caspian Sea regions. A comparison with NCEP Reanalysis II for the period 1979–2007 seems to rule out that the discrepancies are merely due to the introduction of satellite data into the reanalysis assimilation scheme.

  2. Effects of time-variable exposure regimes of the insecticide chlorpyrifos on freshwater invertebrate communities in microcosms

    NARCIS (Netherlands)

    Zafar, M.I.; Wijngaarden, van R.; Roessink, I.; Brink, van den P.J.

    2011-01-01

    The present study compared the effects of different time-variable exposure regimes having the same time-weighted average (TWA) concentration of the organophosphate insecticide chlorpyrifos on freshwater invertebrate communities to enable extrapolation of effects across exposure regimes. The …
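
    For orientation, the time-weighted average concentration that equates the exposure regimes is simply the concentration profile integrated over the exposure window and divided by the window length; a short worked form with a purely illustrative pulse example (not the study's actual exposure profiles):

```latex
\mathrm{TWA} \;=\; \frac{1}{T}\int_{0}^{T} c(t)\,\mathrm{d}t,
\qquad\text{e.g. } \frac{1\ \text{day}\times 7\ \mu\mathrm{g\,L^{-1}} + 6\ \text{days}\times 0\ \mu\mathrm{g\,L^{-1}}}{7\ \text{days}} \;=\; 1\ \mu\mathrm{g\,L^{-1}}.
```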

  3. Selection of teaching content in times of changing

    DEFF Research Database (Denmark)

    Petersen, Benedikte Vilslev

    The educational system in Denmark is currently affected by changes, which can be generally characterized as a development going from input-control to output-control. Increasing research in the classroom focuses on conditions for effective teaching, pupil learning outcome and classroom management techniques (e.g. Hattie 2013, Grøterud and Nielsen 1997, Nordenbo 2008, Meyer 2006, Hermansen 2007). However … in the time between the tests? Based on the overall development towards output-control in the school system, this project focuses on investigating whether content choices today have changed in general: What criteria are underlying the teacher's choice of teaching content? Methodically, the study will work …

  4. Climate variability from isotope records in precipitation

    International Nuclear Information System (INIS)

    Grassl, H.; Latif, M.; Schotterer, U.; Gourcy, L.

    2002-01-01

    Selected time series from the Global Network for Isotopes in Precipitation (GNIP) revealed a close relationship to climate variability phenomena like El Nino - Southern Oscillation (ENSO) or the North Atlantic Oscillation (NAO) although the precipitation anomaly in the case studies of Manaus (Brazil) and Groningen (The Netherlands) is rather weak. For a sound understanding of this relationship especially in the case of Manaus, the data should include major events like the 1997/98 El Nino, however, the time series are interrupted frequently or important stations are even closed. Improvements are only possible if existing key stations and new ones (placed at 'hot spots' derived from model experiments) are supported continuously. A close link of GNIP to important scientific programmes like CLIVAR, the Climate Variability and Predictability Programme seems to be indispensable for a successful continuation. (author)

  5. Effects of the lateral amplitude and regularity of upper body fluctuation on step time variability evaluated using return map analysis.

    Science.gov (United States)

    Chidori, Kazuhiro; Yamamoto, Yuji

    2017-01-01

    The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebrae (L3) and the seventh cervical vertebrae (C7). The normalized average magnitude of acceleration, the coefficient of determination (R^2) of the return map, and the step time variabilities were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.

  6. A simulation study on estimating biomarker-treatment interaction effects in randomized trials with prognostic variables.

    Science.gov (United States)

    Haller, Bernhard; Ulm, Kurt

    2018-02-20

    To individualize treatment decisions based on patient characteristics, identification of an interaction between a biomarker and treatment is necessary. Often such potential interactions are analysed using data from randomized clinical trials intended for the comparison of two treatments. Tests of interactions often lack statistical power, and we investigated if and how the consideration of further prognostic variables can improve power and decrease the bias of estimated biomarker-treatment interactions in randomized clinical trials with time-to-event outcomes. A simulation study was performed to assess how prognostic factors affect the estimate of the biomarker-treatment interaction for a time-to-event outcome when different approaches, such as ignoring other prognostic factors, including all available covariates, or using variable selection strategies, are applied. Different scenarios regarding the proportion of censored observations, the correlation structure between the covariate of interest and further potential prognostic variables, and the strength of the interaction were considered. The simulation study revealed that, in a regression model for estimating a biomarker-treatment interaction, the probability of detecting a biomarker-treatment interaction can be increased by including prognostic variables that are associated with the outcome, and that the interaction estimate is biased when relevant prognostic variables are not considered. However, the probability of a false-positive finding increases if too many potential predictors are included or if variable selection is performed inadequately. We recommend undertaking an adequate literature search before data analysis to derive information about potential prognostic variables, in order to gain power for detecting true interaction effects, and pre-specifying analyses to avoid selective reporting and increased false-positive rates.
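
    A minimal simulation sketch of the setting described above, assuming a Cox proportional hazards model fitted with the lifelines package; the data-generating process, effect sizes and column names are illustrative assumptions, not the authors' simulation design. Comparing the two fits illustrates the abstract's main point: adding an outcome-associated prognostic covariate sharpens the estimate of the interaction coefficient.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated trial: randomized treatment, a biomarker, and one prognostic covariate.
rng = np.random.default_rng(1)
n = 1000
treat = rng.integers(0, 2, n)
biomarker = rng.normal(size=n)
prognostic = rng.normal(size=n)
# True hazard depends on a biomarker-treatment interaction and the prognostic variable.
lin = 0.5 * prognostic - 0.3 * treat - 0.4 * treat * biomarker
event_time = rng.exponential(np.exp(-lin))       # hazard proportional to exp(lin)
censor_time = rng.exponential(2.0, n)
df = pd.DataFrame({
    "T": np.minimum(event_time, censor_time),
    "E": (event_time <= censor_time).astype(int),
    "treat": treat,
    "biomarker": biomarker,
    "prognostic": prognostic,
})
df["treat_x_biomarker"] = df["treat"] * df["biomarker"]

# Fit with and without the prognostic covariate and compare the interaction estimate.
for cols in (["treat", "biomarker", "treat_x_biomarker"],
             ["treat", "biomarker", "treat_x_biomarker", "prognostic"]):
    cph = CoxPHFitter().fit(df[["T", "E"] + cols], duration_col="T", event_col="E")
    print(cols, round(cph.params_["treat_x_biomarker"], 3))
```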

  7. Time variability of X-ray binaries: observations with INTEGRAL. Modeling

    International Nuclear Information System (INIS)

    Cabanac, Clement

    2007-01-01

    The exact origin of the observed X-ray and gamma-ray variability in X-ray binaries is still an open debate in high energy astrophysics. Among other behaviours, these objects show aperiodic and quasi-periodic luminosity variations on timescales as small as a millisecond. This erratic behavior must put constraints on the proposed emission processes occurring in the vicinity of the neutron star or the stellar-mass black hole hosted by these objects. We propose here to study their behavior in three different ways: first, we examine the evolution of a particular X-ray source discovered by INTEGRAL, IGR J19140+0951. Using timing and spectral data from different instruments, we show that the source is plausibly consistent with a High Mass X-ray Binary hosting a neutron star. Subsequently, we propose a new method dedicated to the study of timing data coming from coded-mask aperture instruments. Using it on real INTEGRAL/ISGRI data, we detect the presence of periodic and quasi-periodic features in some pulsars and micro-quasars at energies as high as a hundred keV. Finally, we suggest a model designed to describe the low-frequency variability of X-ray binaries in their hardest state. This model is based on thermal Comptonization of soft photons by a warm corona in which a pressure wave propagates in cylindrical geometry. By means of both numerical simulations and an analytical solution, we show that this model is suitable to describe some of the typical features observed in the power spectra of X-ray binaries in their hard state and their evolution, such as aperiodic noise and low-frequency quasi-periodic oscillations. (author)

  8. Variable selection based on clustering analysis for improvement of polyphenols prediction in green tea using synchronous fluorescence spectra

    Science.gov (United States)

    Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi

    2018-04-01

    Synchronous fluorescence spectra, combined with multivariate analysis, were used to predict flavonoid content in green tea rapidly and nondestructively. This paper presents a new and efficient spectral interval selection method called clustering-based partial least squares (CL-PLS), which selects informative wavelengths by combining a clustering concept with partial least squares (PLS) methods to improve model performance on synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, and k-means and Kohonen self-organizing map clustering algorithms were used to cluster the full spectra into several clusters, with a sub-PLS regression model developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the prediction performance of the PLS models. In addition, variable influence on projection PLS (VIP-PLS), selectivity ratio PLS (SR-PLS) and interval PLS (iPLS) models, as well as a full-spectrum PLS model, were investigated and the results compared. The results showed that CL-PLS gave the best performance for flavonoid prediction using synchronous fluorescence spectra.
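
    A rough sketch of the clustering-then-PLS idea, assuming k-means on standardized wavelength profiles and cross-validated R² to rank and accumulate clusters; the exact clustering inputs, the Kohonen-map variant and the cluster-accumulation rule in the paper may differ from this simplified version.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def cl_pls(X, y, n_clusters=6, max_components=5):
    """Cluster the wavelength variables, score each cluster with a sub-PLS model,
    then greedily accumulate the best-scoring clusters."""
    # Cluster wavelengths by the similarity of their standardized intensity profiles.
    profiles = (X - X.mean(0)) / (X.std(0) + 1e-12)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(profiles.T)

    def cv_r2(cols):
        n_comp = min(max_components, len(cols), X.shape[0] - 1)
        return cross_val_score(PLSRegression(n_components=n_comp), X[:, cols], y,
                               cv=5, scoring="r2").mean()

    # Rank clusters by their own sub-model performance, then add them one by one.
    ranked = sorted(range(n_clusters), key=lambda c: cv_r2(np.where(labels == c)[0]), reverse=True)
    selected, best_cols, best_score = [], None, -np.inf
    for c in ranked:
        cols = np.where(np.isin(labels, selected + [c]))[0]
        score = cv_r2(cols)
        if score > best_score:
            selected, best_cols, best_score = selected + [c], cols, score
    return best_cols, best_score

# Toy usage: 60 samples x 200 "wavelengths" with signal in one spectral region.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))
y = X[:, 80:100].mean(axis=1) + 0.1 * rng.normal(size=60)
cols, score = cl_pls(X, y)
print(len(cols), round(score, 2))
```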

  9. Automated optimization and construction of chemometric models based on highly variable raw chromatographic data.

    Science.gov (United States)

    Sinkov, Nikolai A; Johnston, Brandon M; Sandercock, P Mark L; Harynuk, James J

    2011-07-04

    Direct chemometric interpretation of raw chromatographic data (as opposed to integrated peak tables) has been shown to be advantageous in many circumstances. However, this approach presents two significant challenges: data alignment and feature selection. In order to interpret the data, the time axes must be precisely aligned so that the signal from each analyte is recorded at the same coordinates in the data matrix for each and every analyzed sample. Several alignment approaches exist in the literature and they work well when the samples being aligned are reasonably similar. In cases where the background matrix for a series of samples to be modeled is highly variable, the performance of these approaches suffers. Considering the challenge of feature selection, when the raw data are used each signal at each time is viewed as an individual, independent variable; with the data rates of modern chromatographic systems, this generates hundreds of thousands of candidate variables, or tens of millions of candidate variables if multivariate detectors such as mass spectrometers are utilized. Consequently, an automated approach to identify and select appropriate variables for inclusion in a model is desirable. In this research we present an alignment approach that relies on a series of deuterated alkanes which act as retention anchors for an alignment signal, and couple this with an automated feature selection routine based on our novel cluster resolution metric for the construction of a chemometric model. The model system that we use to demonstrate these approaches is a series of simulated arson debris samples analyzed by passive headspace extraction, GC-MS, and interpreted using partial least squares discriminant analysis (PLS-DA). Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Extracting the relevant delays in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    In this contribution, we suggest a convenient way to use generalisation error to extract the relevant delays from a time-varying process, i.e. the delays that lead to the best prediction performance. We design a generalisation-based algorithm that takes its inspiration from traditional variable selection, and more precisely stepwise forward selection. The method is compared to other forward selection schemes, as well as to a nonparametric test aimed at estimating the embedding dimension of time series. The final application extends these results to the efficient estimation of FIR filters on some …
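
    A compact sketch of stepwise forward selection of delays driven by out-of-sample (generalisation) error, here with a plain linear predictor and a single hold-out block standing in for the paper's generalisation-error estimate; the lag construction and stopping rule are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def forward_delay_selection(x, max_lag=20, horizon=1, val_fraction=0.3):
    """Greedily add the delay that most reduces prediction error on a held-out
    validation block; stop when the validation error no longer improves."""
    # Column j holds the value delayed by j samples from the prediction origin;
    # the target lies `horizon` steps after the origin.
    n = len(x) - max_lag - horizon + 1
    X = np.column_stack([x[max_lag - j - 1: max_lag - j - 1 + n] for j in range(max_lag)])
    y = x[max_lag + horizon - 1: max_lag + horizon - 1 + n]
    split = int(n * (1 - val_fraction))
    chosen, best_err = [], np.inf
    while True:
        candidates = set(range(max_lag)) - set(chosen)
        if not candidates:
            return chosen, best_err
        errs = {}
        for j in candidates:
            cols = chosen + [j]
            model = LinearRegression().fit(X[:split][:, cols], y[:split])
            errs[j] = np.mean((model.predict(X[split:][:, cols]) - y[split:]) ** 2)
        j_best = min(errs, key=errs.get)
        if errs[j_best] >= best_err:          # validation error stopped improving
            return chosen, best_err
        chosen.append(j_best)
        best_err = errs[j_best]

# Toy usage: AR(2)-like signal where only lags 1 and 2 matter.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.1)
print(forward_delay_selection(x, max_lag=10))
```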

  11. Time-Sequential Working Wavelength-Selective Filter for Flat Autostereoscopic Displays

    Directory of Open Access Journals (Sweden)

    René de la Barré

    2017-02-01

    A time-sequentially working, spatially multiplexed autostereoscopic 3D display design consisting of a fast switchable RGB color filter array and a fast color display is presented. The newly introduced 3D display design is usable as a multi-user display as well as a single-user system. The wavelength-selective filter barrier emits light from a larger aperture than common autostereoscopic barrier displays with similar barrier pitch and ascent. Measurements on a demonstrator with commercial display components, simulations and computational evaluations have been carried out to describe the proposed wavelength-selective display design in static states and to show the weak spots of display filters in commercial displays. Optical modelling of wavelength-selective barriers has been used, for instance, to calculate the light-ray distribution properties of the arrangement. In the time-sequential implementation, it is important to ensure that quick eye or eyelid movement does not lead to visible color artifacts. Therefore, color filter cells, switching faster than conventional LC display cells, must distribute directed light from different primaries at the same time to create a 3D presentation. For that purpose, electrically tunable liquid crystal Fabry–Pérot color filters are presented. They switch the colors red, green and blue on and off in the millisecond regime. Their active areas consist of a sub-micrometer-thick nematic layer sandwiched between dielectric mirrors and indium tin oxide (ITO) electrodes. These cells are intended to switch narrowband red, green or blue light. A barrier filter array for a high-resolution, glasses-free 3D display has to be equipped with several thousand switchable filter elements having different color apertures.

  12. The reliable solution and computation time of variable parameters logistic model

    Science.gov (United States)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed T_c) obtained by applying double-precision computation to a variable-parameters logistic map (VPLM). Firstly, using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent, non-stationary-parameters VPLM and then calculate the mean T_c. The results indicate that, for each different initial value, the T_c values of the VPLM are generally different. However, the mean T_c tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of T_c are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting with the VPLM output. In addition, the T_c of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this T_c matches the value predicted by the theoretical formula.
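    The sketch below illustrates one way such a reliable computation time can be estimated: double-precision iterates of a logistic map are compared against a high-precision reference computed with mpmath, and the iteration at which they diverge beyond a tolerance is reported. The fixed parameter value and the tolerance are illustrative assumptions, not the paper's exact procedure.

      # Estimate a reliable computation time for the logistic map by comparing
      # double-precision iterates with a high-precision reference computed with
      # mpmath. The divergence tolerance and the parameter value are assumptions.
      from mpmath import mp, mpf

      mp.dps = 100                     # 100 significant digits for the reference orbit
      r, x0, tol, n_max = 3.9, 0.2, 1e-3, 2000

      x_double = x0
      x_ref = mpf(x0)
      for step in range(1, n_max + 1):
          x_double = r * x_double * (1.0 - x_double)
          x_ref = mpf(r) * x_ref * (1 - x_ref)
          if abs(x_double - float(x_ref)) > tol:
              print("double precision diverges from the reference after", step, "iterations")
              break
      else:
          print("no divergence within", n_max, "iterations")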

  13. Intraindividual variability in reaction time before and after neoadjuvant chemotherapy in women diagnosed with breast cancer.

    Science.gov (United States)

    Yao, Christie; Rich, Jill B; Tirona, Kattleya; Bernstein, Lori J

    2017-12-01

    Women treated with chemotherapy for breast cancer experience subtle cognitive deficits. Research has focused on mean performance level, yet recent work suggests that within-person variability in reaction time performance may underlie cognitive symptoms. We examined intraindividual variability (IIV) in women diagnosed with breast cancer and treated with neoadjuvant chemotherapy. Patients (n = 28) were assessed at baseline before chemotherapy (T1), approximately 1 month after chemotherapy but prior to surgery (T2), and after surgery about 9 months post chemotherapy (T3). Healthy women of similar age and education (n = 20) were assessed at comparable time intervals. Using a standardized regression-based approach, we examined changes in mean performance level and IIV (e.g., intraindividual standard deviation) on a Stroop task and self-report measures of cognitive function from T1 to T2 and T1 to T3. At T1, women with breast cancer were more variable than controls as task complexity increased. Change scores from T1 to T2 were similar between groups on all Stroop performance measures. From T1 to T3, controls improved more than women with breast cancer. IIV was more sensitive than mean reaction time in capturing group differences. Additional analyses showed increased cognitive symptoms reported by women with breast cancer from T1 to T3. Specifically, change in language symptoms was positively correlated with change in variability. Women with breast cancer declined in attention and inhibitory control relative to pretreatment performance. Future studies should include measures of variability, because they are an important and sensitive indicator of change in cognitive function. Copyright © 2016 John Wiley & Sons, Ltd.
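    For readers unfamiliar with the IIV measures mentioned above, the sketch below computes per-participant mean reaction time and the intraindividual standard deviation (and coefficient of variation) from trial-level data; the values are synthetic, and the standardized regression-based change scores used in the study are not reproduced.

      # Compute each participant's mean reaction time and intraindividual
      # standard deviation (ISD) from trial-level data. Synthetic values stand
      # in for Stroop trials.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(2)
      trials = pd.DataFrame({
          "participant": np.repeat(["p01", "p02", "p03"], 50),
          "rt_ms": np.concatenate([
              rng.normal(650, 60, 50),   # p01: moderate variability
              rng.normal(700, 120, 50),  # p02: high variability
              rng.normal(620, 40, 50),   # p03: low variability
          ]),
      })

      iiv = trials.groupby("participant")["rt_ms"].agg(mean_rt="mean", isd="std")
      iiv["cov"] = iiv["isd"] / iiv["mean_rt"]   # coefficient of variation, another common IIV index
      print(iiv.round(2))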

  14. Intraindividual Stepping Reaction Time Variability Predicts Falls in Older Adults With Mild Cognitive Impairment

    OpenAIRE

    Bunce, D; Haynes, BI; Lord, SR; Gschwind, YJ; Kochan, NA; Reppermund, S; Brodaty, H; Sachdev, PS; Delbaere, K

    2017-01-01

    Background: Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI)...

  15. [Silvicultural treatments and their selection effects].

    Science.gov (United States)

    Vincent, G

    1973-01-01

    Selection can be defined in terms of its observable consequences as the non-random differential reproduction of genotypes (Lerner 1958). In forest stands, during improvement fellings and regeneration treatments, we select the individuals that excel in growth or in the production of first-class timber. However, silvicultural treatments guarantee a permanent increase of forest production only if they are carried out according to the principles of directional (dynamic) selection. These principles require that the trees retained for further growth and for forest regeneration be selected by their hereditary properties, i.e. by their genotypes. To make this selection feasible, our study deals with the genetic parameters and gives some examples of the application of the response to selection, the selection differential, the heritability in the narrow and in the broad sense, and the genetic and genotypic gain. On the strength of these parameters, the economic success of various silvicultural treatments in forest stands can be estimated. The examples demonstrate that selection of higher intensity is manifested in a higher selection differential and in higher genetic and genotypic gain, and that such measures have more distinct effects in variable populations, such as natural forests, than in populations characterized by smaller variability, e.g. many uniform, artificially established stands. The examples of the influence of different selection regimes on the genotypic composition of populations show that genetics teaches us to differentiate the genotypes of the same species and at the same time provides new criteria for evaluating selection treatments. From an economic point of view, these criteria are worth considering in silviculture because they allow the genetic composition of forest stands to be judged.

  16. Noncontextuality with Marginal Selectivity in Reconstructing Mental Architectures

    Directory of Open Access Journals (Sweden)

    Ru Zhang

    2015-06-01

    Full Text Available We present a general theory of series-parallel mental architectures with selectively influenced stochastically non-independent components. A mental architecture is a hypothetical network of processes aimed at performing a task, of which we only observe the overall time it takes under variable parameters of the task. It is usually assumed that the network contains several processes selectively influenced by different experimental factors, and then the question is asked as to how these processes are arranged within the network, e.g., whether they are concurrent or sequential. One way of doing this is to consider the distribution functions for the overall processing time and compute certain linear combinations thereof (interaction contrasts). The theory of selective influences in psychology can be viewed as a special application of the interdisciplinary theory of (non)contextuality having its origins and main applications in quantum theory. In particular, lack of contextuality is equivalent to the existence of a hidden random entity of which all the random variables in play are functions. Consequently, for any given value of this common random entity, the processing times and their compositions (minima, maxima, or sums) become deterministic quantities. These quantities, in turn, can be treated as random variables with (shifted) Heaviside distribution functions, for which one can easily compute various linear combinations across different treatments, including interaction contrasts. This mathematical fact leads to a simple method, more general than the previously used ones, to investigate and characterize the interaction contrast for different types of series-parallel architectures.

  17. Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology

    Science.gov (United States)

    Forkel, Matthias; Carvalhais, Nuno; Verbesselt, Jan; Mahecha, Miguel D.; Neigh, Christopher S.R.; Reichstein, Markus

    2013-01-01

    Changing trends in ecosystem productivity can be quantified using satellite observations of the Normalized Difference Vegetation Index (NDVI). However, the estimation of trends from NDVI time series differs substantially depending on the analyzed satellite dataset, the corresponding spatiotemporal resolution, and the applied statistical method. Here we compare the performance of a wide range of trend estimation methods and demonstrate that performance decreases with increasing inter-annual variability in the NDVI time series. Trend slope estimates based on annual aggregated time series or on a seasonal-trend model show better performance than methods that remove the seasonal cycle of the time series. A breakpoint detection analysis reveals that an overestimation of breakpoints in NDVI trends can result in wrong or even opposite trend estimates. Based on our results, we give practical recommendations for the application of trend methods to long-term NDVI time series. In particular, we apply and compare different methods on NDVI time series in Alaska, where both greening and browning trends have been previously observed. Here, the multi-method uncertainty of NDVI trends is quantified through the application of the different trend estimation methods. Our results indicate that greening NDVI trends in Alaska are more spatially and temporally prevalent than browning trends. We also show that detected breakpoints in NDVI trends tend to coincide with large fires. Overall, our analyses demonstrate that seasonal trend methods need to be made more robust against inter-annual variability to quantify changing trends in ecosystem productivity with higher accuracy.

  18. Harmonize input selection for sediment transport prediction

    Science.gov (United States)

    Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed

    2017-09-01

    In this paper, three modeling approaches, a Neural Network (NN), the Response Surface Method (RSM), and a response surface method based on the Global Harmony Search (GHS), are applied to predict the daily time series of suspended sediment load. Generally, the input variables for forecasting the suspended sediment load are selected manually, based on the maximum correlations of the input variables, in the modeling approaches based on the NN and RSM. Here the RSM is improved by selecting the input variables using the error terms of the training data based on the GHS, giving the response surface method with global harmony search (RSM-GHS) modeling method. The second-order polynomial function with cross terms is applied to calibrate the time series of suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables of antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, including both accuracy and simplicity, are compared through several comparative prediction and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with fewer errors and better correlation (R = 0.95, MAE = 18.09 ton/day, RMSE = 25.16 ton/day) compared to the ANN (R = 0.91, MAE = 20.17 ton/day, RMSE = 33.09 ton/day) and RSM (R = 0.91, MAE = 20.06 ton/day, RMSE = 31.92 ton/day) for all types of input variables.
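    A minimal sketch of the second-order response-surface calibration with cross terms is shown below, using scikit-learn's PolynomialFeatures and ordinary least squares in place of the harmony-search-based input selection; the antecedent discharge and sediment values are synthetic.

      # Second-order response-surface calibration with cross terms: predict
      # suspended sediment load from antecedent load and discharge values.
      # Ordinary least squares replaces the harmony-search input selection
      # described in the abstract, and the data are synthetic.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_absolute_error
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(3)
      n = 300
      q_t1 = rng.gamma(2.0, 50.0, n)             # antecedent discharge
      s_t1 = rng.gamma(2.0, 20.0, n)             # antecedent sediment load
      s_t = 0.4 * s_t1 + 0.02 * q_t1 * s_t1 ** 0.5 + rng.normal(0, 5, n)

      X = np.column_stack([q_t1, s_t1])
      model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                            LinearRegression())
      model.fit(X[:200], s_t[:200])
      pred = model.predict(X[200:])
      print("R =", round(np.corrcoef(pred, s_t[200:])[0, 1], 3),
            "MAE =", round(mean_absolute_error(s_t[200:], pred), 2))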

  19. Attempt to determine radon entry rate and air exchange rate variable in time from the time course of indoor radon concentration

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, J [State Office for Nuclear Protection, Prague (Czech Republic)

    1996-12-31

    For radon diagnosis in houses the 'ventilation experiment' was used as a standard method. After removal of indoor radon by draught, the build-up of the radon concentration a(t) [Bq/m³] was measured continuously, and from the time course the constant radon entry rate A [Bq/h] and the air exchange rate k [1/h] were calculated by regression analysis using the model relation a(t) = A(1 - e^(-kt))/(kV), with V [m³] the volume of the room. The conditions have to be stable for several hours so that the assumption of constant A and k is justified. During the day both quantities were changing independently (?), therefore a method to determine a variable entry rate A(t) and exchange rate k(t) is needed for a better understanding of the variability of the indoor radon concentration. Two approaches are given for the determination of time-variable radon entry rates and air exchange rates from continuously measured indoor radon concentration: numerical solution of the equivalent difference equations in deterministic or statistical form. The approaches are not always successful. Failures that yield the correct ratio of the two rates, but not the rates themselves, could not be explained.
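    Assuming the constant-rate case, the build-up model above can be fitted by non-linear least squares; the sketch below does this with scipy's curve_fit on a synthetic concentration time course. The room volume, entry rate, and exchange rate are illustrative assumptions, and the time-variable estimation discussed in the abstract is not attempted.

      # Fit the constant-rate radon build-up model a(t) = A*(1 - exp(-k*t))/(k*V)
      # to a synthetic concentration time course with non-linear least squares.
      import numpy as np
      from scipy.optimize import curve_fit

      V = 50.0                                    # room volume in m^3 (assumed)

      def buildup(t, A, k):
          return A * (1.0 - np.exp(-k * t)) / (k * V)

      rng = np.random.default_rng(4)
      t = np.linspace(0.5, 12.0, 24)              # hours after airing the room
      a_true = buildup(t, A=2000.0, k=0.5)        # assumed entry rate [Bq/h] and exchange rate [1/h]
      a_obs = a_true + rng.normal(0, 3, t.size)   # measurement noise in Bq/m^3

      (A_fit, k_fit), _ = curve_fit(buildup, t, a_obs, p0=(1000.0, 0.3))
      print(f"entry rate A ≈ {A_fit:.0f} Bq/h, exchange rate k ≈ {k_fit:.2f} 1/h")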

  20. Time-variant coherence between heart rate variability and EEG activity in epileptic patients: an advanced coupling analysis between physiological networks

    International Nuclear Information System (INIS)

    Piper, D; Schiecke, K; Pester, B; Witte, H; Benninger, F; Feucht, M

    2014-01-01

    Time-variant coherence analysis between the heart rate variability (HRV) and the channel-related envelopes of adaptively selected EEG components was used as an indicator for the occurrence of (correlative) couplings between the central autonomic network (CAN) and the epileptic network before, during and after epileptic seizures. Two groups of patients were investigated, a group with left and a group with right hemispheric temporal lobe epilepsy. The individual EEG components were extracted by a signal-adaptive approach, the multivariate empirical mode decomposition, and the envelopes of each resulting intrinsic mode function (IMF) were computed by using Hilbert transform. Two IMFs, whose envelopes were strongly correlated with the HRV’s low-frequency oscillation (HRV-LF; ≈0.1 Hz) before and after the seizure were identified. The frequency ranges of these IMFs correspond to the EEG delta-band. The time-variant coherence was statistically quantified and tensor decomposition of the time-frequency coherence maps was applied to explore the topography-time-frequency characteristics of the coherence analysis. Results allow the hypothesis that couplings between the CAN, which controls the cardiovascular-cardiorespiratory system, and the ‘epileptic neural network’ exist. Additionally, our results confirm the hypothesis of a right hemispheric lateralization of sympathetic cardiac control of the HRV-LF. (paper)

  1. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is possibly larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
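    The sketch below shows the basic two-step adaptive LASSO idea on a small synthetic regression with more candidate predictors than relevant ones: a first-stage ridge fit supplies coefficient-based weights, and a LASSO is then run on features rescaled by those weights. This is a simplified illustration of the idea, not the estimator analysed in the paper.

      # Two-step adaptive LASSO: ridge weights, then a weighted LASSO via
      # rescaled features. Synthetic data with three relevant predictors.
      import numpy as np
      from sklearn.linear_model import Lasso, Ridge

      rng = np.random.default_rng(5)
      n, p = 200, 50                               # more candidate lags than truly relevant ones
      X = rng.normal(size=(n, p))
      beta = np.zeros(p)
      beta[[0, 3, 7]] = [0.8, -0.5, 0.4]           # only three relevant predictors
      y = X @ beta + rng.normal(scale=0.5, size=n)

      ridge = Ridge(alpha=1.0).fit(X, y)
      w = 1.0 / (np.abs(ridge.coef_) + 1e-6)       # adaptive weights
      lasso = Lasso(alpha=0.05).fit(X / w, y)      # weighted LASSO via rescaled features
      coef = lasso.coef_ / w
      print("selected predictors:", np.flatnonzero(np.abs(coef) > 1e-8))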

  2. SELECTION OF BURST-LIKE TRANSIENTS AND STOCHASTIC VARIABLES USING MULTI-BAND IMAGE DIFFERENCING IN THE PAN-STARRS1 MEDIUM-DEEP SURVEY

    International Nuclear Information System (INIS)

    Kumar, S.; Gezari, S.; Heinis, S.; Chornock, R.; Berger, E.; Soderberg, A.; Stubbs, C. W.; Kirshner, R. P.; Rest, A.; Huber, M. E.; Narayan, G.; Marion, G. H.; Burgett, W. S.; Foley, R. J.; Scolnic, D.; Riess, A. G.; Lawrence, A.; Smartt, S. J.; Smith, K.; Wood-Vasey, W. M.

    2015-01-01

    We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands g_P1, r_P1, i_P1, and z_P1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model, and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics, to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to

  3. Influence of variable heat transfer coefficient of fireworks and crackers on thermal explosion critical ambient temperature and time to ignition

    Directory of Open Access Journals (Sweden)

    Guo Zerong

    2016-01-01

    Full Text Available To study the effect of a variable heat transfer coefficient of fireworks and crackers on the thermal explosion critical ambient temperature and time to ignition, the heat transfer coefficient is treated as a power function of temperature, and mathematical steady-state and unsteady-state thermal explosion models of finite cylindrical fireworks and crackers with complex shell structures are established based on two-dimensional steady-state thermal explosion theory. The influence of the variable heat transfer coefficient on the thermal explosion critical ambient temperature and time to ignition is analyzed. When the heat transfer coefficient changes with temperature under natural convection heat transfer, the critical ambient temperature decreases and the thermal explosion time to ignition shortens. If the ambient temperature is close to the critical ambient temperature, the influence of the variable heat transfer coefficient on the time to ignition becomes large. For the firework with an inner barrel in the example analysis, the critical ambient temperature of the propellant is 463.88 K and the time to ignition is 4054.9 s at 466 K, which are 0.26 K and 450.8 s less, respectively, than the values obtained without considering the change of heat transfer coefficient. The calculation results show that the influence of the variable heat transfer coefficient on the thermal explosion time to ignition is considerable in this example. Therefore, the effect of a variable heat transfer coefficient should be considered in the thermal safety evaluation of fireworks to reduce potential safety hazards.

  4. A novel peak-hopping stepwise feature selection method with application to Raman spectroscopy

    International Nuclear Information System (INIS)

    McShane, M.J.; Cameron, B.D.; Cote, G.L.; Motamedi, M.; Spiegelman, C.H.

    1999-01-01

    A new stepwise approach to variable selection for spectroscopy that includes chemical information and attempts to test several spectral regions producing high ranking coefficients has been developed to improve on currently available methods. Existing selection techniques can, in general, be placed into two groups: the first, time-consuming optimization approaches that ignore available information about sample chemistry and require considerable expertise to arrive at appropriate solutions (e.g. genetic algorithms), and the second, stepwise procedures that tend to select many variables in the same area containing redundant information. The algorithm described here is a fast stepwise procedure that uses multiple ranking chains to identify several spectral regions correlated with known sample properties. The multiple-chain approach allows the generation of a final ranking vector that moves quickly away from the initial selection point, testing several areas exhibiting correlation between spectra and composition early in the stepping procedure. Quantitative evidence of the success of this approach as applied to Raman spectroscopy is given in terms of processing speed, number of selected variables, and prediction error in comparison with other selection methods. In this respect, the procedure described here may be considered as a significant evolutionary step in variable selection algorithms. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  5. Fisher Information Based Meteorological Factors Introduction and Features Selection for Short-Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Shuping Cai

    2018-03-01

    Full Text Available Weather information is an important factor in short-term load forecasting (STLF. However, for a long time, more importance has always been attached to forecasting models instead of other processes such as the introduction of weather factors or feature selection for STLF. The main aim of this paper is to develop a novel methodology based on Fisher information for meteorological variables introduction and variable selection in STLF. Fisher information computation for one-dimensional and multidimensional weather variables is first described, and then the introduction of meteorological factors and variables selection for STLF models are discussed in detail. On this basis, different forecasting models with the proposed methodology are established. The proposed methodology is implemented on real data obtained from Electric Power Utility of Zhenjiang, Jiangsu Province, in southeast China. The results show the advantages of the proposed methodology in comparison with other traditional ones regarding prediction accuracy, and it has very good practical significance. Therefore, it can be used as a unified method for introducing weather variables into STLF models, and selecting their features.

  6. Time-grated energy-selected cold neutron radiography

    International Nuclear Information System (INIS)

    McDonald, T.E. Jr.; Brun, T.O.; Claytor, T.N.; Farnum, E.H.; Greene, G.L.; Morris, C.

    1998-01-01

    A technique is under development at the Los Alamos Neutron Science Center (LANSCE), Manuel Lujan Jr. Neutron Scattering Center (Lujan Center) for producing neutron radiography using only a narrow energy range of cold neutrons. The technique, referred to as Time-Gated Energy-Selected (TGES) neutron radiography, employs the pulsed neutron source at the Lujan Center with time of flight to obtain a neutron pulse having an energy distribution that is a function of the arrival time at the imager. The radiograph is formed on a short persistence scintillator and a gated, intensified, cooled CCD camera is employed to record the images, which are produced at the specific neutron energy range determined by the camera gate. The technique has been used to achieve a degree of material discrimination in radiographic images. For some materials, such as beryllium and carbon, at energies above the Bragg cutoff the neutron scattering cross section is relatively high while at energies below the Bragg cutoff the scattering cross section drops significantly. This difference in scattering characteristics can be recorded in the TGES radiography and, because the Bragg cutoff occurs at different energy levels for various materials, the approach can be used to differentiate among these materials. This paper outlines the TGES radiography technique and shows an example of radiography using the approach

  7. The joint effects of background selection and genetic recombination on local gene genealogies.

    Science.gov (United States)

    Zeng, Kai; Charlesworth, Brian

    2011-09-01

    Background selection, the effects of the continual removal of deleterious mutations by natural selection on variability at linked sites, is potentially a major determinant of DNA sequence variability. However, the joint effects of background selection and genetic recombination on the shape of the neutral gene genealogy have proved hard to study analytically. The only existing formula concerns the mean coalescent time for a pair of alleles, making it difficult to assess the importance of background selection from genome-wide data on sequence polymorphism. Here we develop a structured coalescent model of background selection with recombination and implement it in a computer program that efficiently generates neutral gene genealogies for an arbitrary sample size. We check the validity of the structured coalescent model against forward-in-time simulations and show that it accurately captures the effects of background selection. The model produces more accurate predictions of the mean coalescent time than the existing formula and supports the conclusion that the effect of background selection is greater in the interior of a deleterious region than at its boundaries. The level of linkage disequilibrium between sites is elevated by background selection, to an extent that is well summarized by a change in effective population size. The structured coalescent model is readily extendable to more realistic situations and should prove useful for analyzing genome-wide polymorphism data.

  8. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in prediction of time series is proposed. Apart from the conventional criterion of minimizing RMS error, the method also minimizes the error on the distribution of singularities, evaluated through the local Hoelder estimates and its probability density spectrum. Predictions of two simulated and one real time series have been done using kernel principal component regression (KPCR) and model parameters of KPCR have been selected employing the proposed as well as the conventional method. Results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of the unseen test data. (letter to the editor)

  9. Natural selection and inheritance of breeding time and clutch size in the collared flycatcher.

    Science.gov (United States)

    Sheldon, B C; Kruuk, L E B; Merilä, J

    2003-02-01

    Many characteristics of organisms in free-living populations appear to be under directional selection, possess additive genetic variance, and yet show no evolutionary response to selection. Avian breeding time and clutch size are often-cited examples of such characters. We report analyses of inheritance of, and selection on, these traits in a long-term study of a wild population of the collared flycatcher Ficedula albicollis. We used mixed model analysis with REML estimation ("animal models") to make full use of the information in complex multigenerational pedigrees. Heritability of laying date, but not clutch size, was lower than that estimated previously using parent-offspring regressions, although for both traits there was evidence of substantial additive genetic variance (h2 = 0.19 and 0.29, respectively). Laying date and clutch size were negatively genetically correlated (rA = -0.41 +/- 0.09), implying that selection on one of the traits would cause a correlated response in the other, but there was little evidence to suggest that evolution of either trait would be constrained by correlations with other phenotypic characters. Analysis of selection on these traits in females revealed consistent strong directional fecundity selection for earlier breeding at the level of the phenotype (beta = -0.28 +/- 0.03), but little evidence for stabilising selection on breeding time. We found no evidence that clutch size was independently under selection. Analysis of fecundity selection on breeding values for laying date, estimated from an animal model, indicated that selection acts directly on additive genetic variance underlying breeding time (beta = -0.20 +/- 0.04), but not on clutch size (beta = 0.03 +/- 0.05). In contrast, selection on laying date via adult female survival fluctuated in sign between years, and was opposite in sign for selection on phenotypes (negative) and breeding values (positive). Our data thus suggest that any evolutionary response to selection on

  10. Time-varying surrogate data to assess nonlinearity in nonstationary time series: application to heart rate variability.

    Science.gov (United States)

    Faes, Luca; Zhao, He; Chon, Ki H; Nollo, Giandomenico

    2009-03-01

    We propose a method to extend to time-varying (TV) systems the procedure for generating typical surrogate time series, in order to test the presence of nonlinear dynamics in potentially nonstationary signals. The method is based on fitting a TV autoregressive (AR) model to the original series and then regressing the model coefficients with random replacements of the model residuals to generate TV AR surrogate series. The proposed surrogate series were used in combination with a TV sample entropy (SE) discriminating statistic to assess nonlinearity in both simulated and experimental time series, in comparison with traditional time-invariant (TIV) surrogates combined with the TIV SE discriminating statistic. Analysis of simulated time series showed that using TIV surrogates, linear nonstationary time series may be erroneously regarded as nonlinear and weak TV nonlinearities may remain unrevealed, while the use of TV AR surrogates markedly increases the probability of a correct interpretation. Application to short (500 beats) heart rate variability (HRV) time series recorded at rest (R), after head-up tilt (T), and during paced breathing (PB) showed: 1) modifications of the SE statistic that were well interpretable with the known cardiovascular physiology; 2) significant contribution of nonlinear dynamics to HRV in all conditions, with significant increase during PB at 0.2 Hz respiration rate; and 3) a disagreement between TV AR surrogates and TIV surrogates in about a quarter of the series, suggesting that nonstationarity may affect HRV recordings and bias the outcome of the traditional surrogate-based nonlinearity test.
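    A minimal sketch of the conventional (time-invariant) AR surrogate construction is given below: an AR(p) model is fitted with statsmodels and the series is re-simulated with resampled residuals, which preserves the linear structure while destroying any nonlinear dynamics. The time-varying AR surrogates proposed in the paper would additionally require time-varying coefficient estimates, which are not shown here.

      # Generate a time-invariant AR surrogate: fit an AR(p) model, then
      # re-simulate the series using the fitted coefficients with randomly
      # resampled residuals.
      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      rng = np.random.default_rng(6)
      n, p = 500, 5
      x = np.zeros(n)
      for t in range(2, n):                        # toy "HRV-like" series
          x[t] = 0.7 * x[t - 1] - 0.2 * x[t - 2] + rng.normal(scale=0.3)

      fit = AutoReg(x, lags=p).fit()
      const, ar_coefs = fit.params[0], fit.params[1:]
      resid = fit.resid

      surrogate = x[:p].copy().tolist()
      for t in range(p, n):
          eps = rng.choice(resid)                  # residual resampling destroys nonlinear structure
          past = surrogate[-1:-p - 1:-1]           # last p values, most recent first
          surrogate.append(const + float(np.dot(ar_coefs, past)) + eps)

      surrogate = np.asarray(surrogate)
      print("original std:", round(x.std(), 3), "surrogate std:", round(surrogate.std(), 3))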

  11. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    Science.gov (United States)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.

  12. BLAZAR OPTICAL VARIABILITY IN THE PALOMAR-QUEST SURVEY

    International Nuclear Information System (INIS)

    Bauer, Anne; Baltay, Charles; Coppi, Paolo; Ellman, Nancy; Jerke, Jonathan; Rabinowitz, David; Scalzo, Richard

    2009-01-01

    We study the ensemble optical variability of 276 flat-spectrum radio quasars (FSRQs) and 86 BL Lacs in the Palomar-QUEST Survey with the goal of searching for common fluctuation properties, examining the range of behavior across the sample, and characterizing the appearance of blazars in such a survey so that future work can more easily identify such objects. The survey, which covers 15,000 deg² multiple times over 3.5 years, allows for the first ensemble blazar study of this scale. Variability amplitude distributions are shown for the FSRQ and BL Lac samples for numerous time lags, and also studied through structure function analyses. Individual blazars show a wide range of variability amplitudes, timescales, and duty cycles. Of the best-sampled objects, 35% are seen to vary by more than 0.4 mag; for these, the fraction of measurements contributing to the high-amplitude variability ranges from about 5% to 80%. Blazar variability has some similarities to that of type I quasi-stellar objects (QSOs) but includes larger amplitude fluctuations on all timescales. FSRQ variability amplitudes are particularly similar to those of QSOs on timescales of several months, suggesting significant contributions from the accretion disk to the variable flux at these timescales. Optical variability amplitudes are correlated with the maximum apparent velocities of the radio jet for the subset of FSRQs with MOJAVE Very Long Baseline Array measurements, implying that the optically variable flux's strength is typically related to that of the radio emission. We also study CRATES radio-selected FSRQ candidates, which show similar variability characteristics to known FSRQs; this suggests a high purity for the CRATES sample.

  13. Statistical analysis of nuclear power plant pump failure rate variability: some preliminary results

    International Nuclear Information System (INIS)

    Martz, H.F.; Whiteman, D.E.

    1984-02-01

    In-Plant Reliability Data System (IPRDS) pump failure data on over 60 selected pumps in four nuclear power plants are statistically analyzed using the Failure Rate Analysis Code (FRAC). A major purpose of the analysis is to determine which environmental, system, and operating factors adequately explain the variability in the failure data. Catastrophic, degraded, and incipient failure severity categories are considered for both demand-related and time-dependent failures. For catastrophic demand-related pump failures, the variability is explained by the following factors listed in their order of importance: system application, pump driver, operating mode, reactor type, pump type, and unidentified plant-specific influences. Quantitative failure rate adjustments are provided for the effects of these factors. In the case of catastrophic time-dependent pump failures, the failure rate variability is explained by three factors: reactor type, pump driver, and unidentified plant-specific influences. Finally, point and confidence interval failure rate estimates are provided for each selected pump by considering the influential factors. Both types of estimates represent an improvement over the estimates computed exclusively from the data on each pump

  14. Effects of Variable Production Rate and Time-Dependent Holding Cost for Complementary Products in Supply Chain Model

    Directory of Open Access Journals (Sweden)

    Mitali Sarkar

    2017-01-01

    Full Text Available Recently, a major trend has been to redesign production systems by controlling or varying the production rate within some fixed interval to maintain the optimal level. This strategy is more effective when the holding cost is time-dependent, as it is interrelated with the holding duration of products and the rate of production. An effort is made to build a supply chain model (SCM) to show the joint effect of a variable production rate and time-varying holding cost for a specific type of complementary products, where those products are made by two different manufacturers and a common retailer bundles them and sells the bundles to end customers. Demand for each product is specified by stochastic reservation prices with a known potential market size. The players of the SCM are considered to have unequal power. A Stackelberg game approach is employed to obtain the global optimum solution of the model. An illustrative numerical example, graphical representation, and managerial insights are given to illustrate the model. Results show that the variable production rate and time-dependent holding cost yield greater savings than models in the existing literature.

  15. Error Analysis of a Fractional Time-Stepping Technique for Incompressible Flows with Variable Density

    KAUST Repository

    Guermond, J.-L.; Salgado, Abner J.

    2011-01-01

    In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.

  16. Variable selectivity and the role of nutritional quality in food selection by a planktonic rotifer

    International Nuclear Information System (INIS)

    Sierszen, M.E.

    1990-01-01

    To investigate the potential for selective feeding to enhance fitness, I test the hypothesis that an herbivorous zooplankter selects those food items that best support its reproduction. Under this hypothesis, growth and reproduction on selected food items should be higher than on less preferred items. The hypothesis is not supported. In situ selectivity by the rotifer Keratella taurocephala for Cryptomonas relative to Chlamydomonas goes through a seasonal cycle, in apparent response to fluctuating Cryptomonas populations. However, reproduction on a unialgal diet of Cryptomonas is consistently high and similar to that on Chlamydomonas. Oocystis, which also supports reproduction equivalent to that supported by Chlamydomonas, is sometimes rejected by K. taurocephala. In addition, K. taurocephala does not discriminate between Merismopedia and Chlamydomonas even though Merismopedia supports virtually no reproduction by the rotifer. Selection by K. taurocephala does not simply maximize the intake of food items that yield high reproduction. Selectivity is a complex, dynamic process, one function of which may be the exploitation of locally or seasonally abundant foods. (author)

  17. Stress, Time Pressure, Strategy Selection and Math Anxiety in Mathematics: A Review of the Literature.

    Science.gov (United States)

    Caviola, Sara; Carey, Emma; Mammarella, Irene C; Szucs, Denes

    2017-01-01

    We review how stress induction, time pressure manipulations and math anxiety can interfere with or modulate selection of problem-solving strategies (henceforth "strategy selection") in arithmetical tasks. Nineteen relevant articles were identified, which contain references to strategy selection and time limit (or time manipulations), with some also discussing emotional aspects in mathematical outcomes. Few of these take cognitive processes such as working memory or executive functions into consideration. We conclude that due to the sparsity of available literature our questions can only be partially answered and currently there is not much evidence of clear associations. We identify major gaps in knowledge and raise a series of open questions to guide further research.

  18. Time series pCO2 at a coastal mooring: Internal consistency, seasonal cycles, and interannual variability

    Science.gov (United States)

    Reimer, Janet J.; Cai, Wei-Jun; Xue, Liang; Vargas, Rodrigo; Noakes, Scott; Hu, Xinping; Signorini, Sergio R.; Mathis, Jeremy T.; Feely, Richard A.; Sutton, Adrienne J.; Sabine, Christopher; Musielewicz, Sylvia; Chen, Baoshan; Wanninkhof, Rik

    2017-08-01

    Marine carbonate system monitoring programs often consist of multiple observational methods that include underway cruise data, moored autonomous time series, and discrete water bottle samples. Monitored parameters include all, or some of the following: partial pressure of CO2 of the water (pCO2w) and air, dissolved inorganic carbon (DIC), total alkalinity (TA), and pH. Any combination of at least two of the aforementioned parameters can be used to calculate the others. In this study at the Gray's Reef (GR) mooring in the South Atlantic Bight (SAB) we: examine the internal consistency of pCO2w from underway cruise, moored autonomous time series, and calculated from bottle samples (DIC-TA pairing); describe the seasonal to interannual pCO2w time series variability and air-sea flux (FCO2), as well as describe the potential sources of pCO2w variability; and determine the source/sink for atmospheric pCO2. Over the 8.5 years of GR mooring time series, mooring-underway and mooring-bottle calculated-pCO2w strongly correlate with r-values > 0.90. pCO2w and FCO2 time series follow seasonal thermal patterns; however, seasonal non-thermal processes, such as terrestrial export, net biological production, and air-sea exchange also influence variability. The linear slope of time series pCO2w increases by 5.2 ± 1.4 μatm y-1 with FCO2 increasing 51-70 mmol m-2 y-1. The net FCO2 sign can switch interannually with the magnitude varying greatly. Non-thermal pCO2w is also increasing over the time series, likely indicating that terrestrial export and net biological processes drive the long term pCO2w increase.

  19. A formal method for identifying distinct states of variability in time-varying sources: SGR A* as an example

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, L.; Witzel, G.; Ghez, A. M. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1547 (United States); Longstaff, F. A. [UCLA Anderson School of Management, University of California, Los Angeles, CA 90095-1481 (United States)

    2014-08-10

    Continuously time variable sources are often characterized by their power spectral density and flux distribution. These quantities can undergo dramatic changes over time if the underlying physical processes change. However, some changes can be subtle and not distinguishable using standard statistical approaches. Here, we report a methodology that aims to identify distinct but similar states of time variability. We apply this method to the Galactic supermassive black hole, where 2.2 μm flux is observed from a source associated with Sgr A* and where two distinct states have recently been suggested. Our approach is taken from mathematical finance and works with conditional flux density distributions that depend on the previous flux value. The discrete, unobserved (hidden) state variable is modeled as a stochastic process and the transition probabilities are inferred from the flux density time series. Using the most comprehensive data set to date, in which all Keck and a majority of the publicly available Very Large Telescope data have been merged, we show that Sgr A* is sufficiently described by a single intrinsic state. However, the observed flux densities exhibit two states: noise dominated and source dominated. Our methodology reported here will prove extremely useful to assess the effects of the putative gas cloud G2 that is on its way toward the black hole and might create a new state of variability.
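    As a simpler illustration of inferring a discrete hidden state from a flux time series, the sketch below fits a generic two-state Gaussian hidden Markov model with the hmmlearn package to a synthetic light curve; this is not the finance-derived conditional-distribution method used in the paper, and hmmlearn is an external dependency assumed to be installed.

      # Infer a discrete hidden state from a flux-density time series with a
      # generic two-state Gaussian HMM. The "light curve" is synthetic.
      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      rng = np.random.default_rng(7)
      quiet = rng.normal(0.5, 0.1, 300)            # noise-dominated stretch
      bright = rng.normal(2.0, 0.6, 120)           # source-dominated stretch
      flux = np.concatenate([quiet, bright, quiet]).reshape(-1, 1)

      hmm = GaussianHMM(n_components=2, covariance_type="full",
                        n_iter=200, random_state=0).fit(flux)
      states = hmm.predict(flux)
      for s in range(2):
          print(f"state {s}: mean flux {hmm.means_[s, 0]:.2f}, "
                f"fraction of time {np.mean(states == s):.2f}")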

  20. Synthesis, Characterization, and Variable-Temperature NMR Studies of Silver(I) Complexes for Selective Nitrene Transfer.

    Science.gov (United States)

    Huang, Minxue; Corbin, Joshua R; Dolan, Nicholas S; Fry, Charles G; Vinokur, Anastasiya I; Guzei, Ilia A; Schomaker, Jennifer M

    2017-06-05

    An array of silver complexes supported by nitrogen-donor ligands catalyze the transformation of C═C and C-H bonds to valuable C-N bonds via nitrene transfer. The ability to achieve high chemoselectivity and site selectivity in an amination event requires an understanding of both the solid- and solution-state behavior of these catalysts. X-ray structural characterizations were helpful in determining ligand features that promote the formation of monomeric versus dimeric complexes. Variable-temperature ¹H and DOSY NMR experiments were especially useful for understanding how the ligand identity influences the nuclearity, coordination number, and fluxional behavior of silver(I) complexes in solution. These insights are valuable for developing improved ligand designs.

  1. Moving attention - Evidence for time-invariant shifts of visual selective attention

    Science.gov (United States)

    Remington, R.; Pierce, L.

    1984-01-01

    Two experiments measured the time to shift spatial selective attention across the visual field to targets 2 or 10 deg from central fixation. A central arrow cued the most likely target location. The direction of attention was inferred from reaction times to expected, unexpected, and neutral locations. The development of a spatial attentional set with time was examined by presenting target probes at varying times after the cue. There were no effects of distance on the time course of the attentional set. Reaction times for far locations were slower than for near, but the effects of attention were evident by 150 msec in both cases. Spatial attention does not shift with a characteristic, fixed velocity. Rather, velocity is proportional to distance, resulting in a movement time that is invariant over the distances tested.

  2. Bayesian dynamic modeling of time series of dengue disease case counts.

    Science.gov (United States)

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model, at several time points within the study period, using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful
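    As a much-simplified, non-Bayesian stand-in for the dynamic models described above, the sketch below fits a static Poisson regression of weekly case counts on a lagged temperature covariate with statsmodels and reports the mean absolute percentage error of an out-of-sample prediction; all data are synthetic.

      # Static Poisson regression of weekly case counts on lagged temperature,
      # plus the mean absolute percentage error (MAPE) of an 8-week
      # out-of-sample prediction. All data are synthetic.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(8)
      weeks = 200
      temp = 25 + 3 * np.sin(2 * np.pi * np.arange(weeks) / 52) + rng.normal(0, 1, weeks)
      lam = np.exp(1.5 + 0.08 * np.roll(temp, 4))     # counts respond to temperature 4 weeks earlier
      cases = rng.poisson(lam)

      X = sm.add_constant(np.roll(temp, 4)[10:])      # drop early weeks affected by the roll
      y = cases[10:]
      fit = sm.GLM(y[:-8], X[:-8], family=sm.families.Poisson()).fit()

      pred = fit.predict(X[-8:])                      # 8-week out-of-sample prediction
      mape = np.mean(np.abs((y[-8:] - pred) / y[-8:])) * 100
      print("coefficients:", fit.params.round(3), " MAPE = %.1f%%" % mape)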

  3. Pathogen-mediated selection for MHC variability in wild zebrafish

    Czech Academy of Sciences Publication Activity Database

    Smith, C.; Ondračková, Markéta; Spence, R.; Adams, S.; Betts, D. S.; Mallon, E.

    2011-01-01

    Roč. 13, č. 6 (2011), s. 589-605 ISSN 1522-0613 Institutional support: RVO:68081766 Keywords : digenean * frequency-dependent selection * heterozygote advantage * major histocompatibility complex * metazoan parasite * pathogen-driven selection Subject RIV: EG - Zoology Impact factor: 1.029, year: 2011

  4. Study on a new type variable valve lift timing mechanism with a three dimensional cam; Sanjigen cam ni yoru shinkahen valve lift timing kiko ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Ogura, M; Song, C [Nippon Institute of Technology, Saitama (Japan)

    1997-10-01

    The variable valve timing mechanism was invented to obtain a wide torque band across engine speeds, and to reduce compression work and pumping loss through a Miller cycle at partial load. In this paper, a new type of variable valve timing mechanism using a three-dimensional cam is proposed, and the features of the mechanism and its control system are described. Further, a thermodynamic consideration of the Miller cycle is given. 5 refs., 8 figs.

  5. Hong-Ou-Mandel effect in terms of the temporal biphoton wave function with two arrival-time variables

    Science.gov (United States)

    Fedorov, M. V.; Sysoeva, A. A.; Vintskevich, S. V.; Grigoriev, D. A.

    2018-03-01

    The well-known Hong-Ou-Mandel effect is revisited. Two physical reasons are discussed for the effect to be less pronounced or even to disappear: differing polarizations of photons coming to the beamsplitter and delay time of photons in one of two channels. For the latter we use the concepts of biphoton frequency and temporal wave functions depending, correspondingly, on two frequency continuous variables of photons and on two time variables t 1 and t 2 interpreted as the arrival times of photons to the beamsplitter. Explicit expressions are found for the probability densities and total probabilities for photon pairs to be split between two channels after the beamsplitter and to be unsplit, when two photons appear together in one of two channels.

  6. A variable-order time-dependent neutron transport method for nuclear reactor kinetics using analytically-integrated space-time characteristics

    International Nuclear Information System (INIS)

    Hoffman, A. J.; Lee, J. C.

    2013-01-01

    A new time-dependent neutron transport method based on the method of characteristics (MOC) has been developed. Whereas most spatial kinetics methods treat time dependence through temporal discretization, this new method treats time dependence by defining the characteristics to span space and time. In this implementation regions are defined in space-time where the thickness of the region in time fulfills an analogous role to the time step in discretized methods. The time dependence of the local source is approximated using a truncated Taylor series expansion with high order derivatives approximated using backward differences, permitting the solution of the resulting space-time characteristic equation. To avoid a drastic increase in computational expense and memory requirements due to solving many discrete characteristics in the space-time planes, the temporal variation of the boundary source is similarly approximated. This allows the characteristics in the space-time plane to be represented analytically rather than discretely, resulting in an algorithm comparable in implementation and expense to one that arises from conventional time integration techniques. Furthermore, by defining the boundary flux time derivative in terms of the preceding local source time derivative and boundary flux time derivative, the need to store angularly-dependent data is avoided without approximating the angular dependence of the angular flux time derivative. The accuracy of this method is assessed through implementation in the neutron transport code DeCART. The method is employed with variable-order local source representation to model a TWIGL transient. The results demonstrate that this method is accurate and more efficient than the discretized method. (authors)

  7. Bidecadal North Atlantic ocean circulation variability controlled by timing of volcanic eruptions.

    Science.gov (United States)

    Swingedouw, Didier; Ortega, Pablo; Mignot, Juliette; Guilyardi, Eric; Masson-Delmotte, Valérie; Butler, Paul G; Khodri, Myriam; Séférian, Roland

    2015-03-30

    While bidecadal climate variability has been evidenced in several North Atlantic paleoclimate records, its drivers remain poorly understood. Here we show that the subset of CMIP5 historical climate simulations that produce such bidecadal variability exhibits a robust synchronization, with a maximum in Atlantic Meridional Overturning Circulation (AMOC) 15 years after the 1963 Agung eruption. The mechanisms at play involve salinity advection from the Arctic and explain the timing of Great Salinity Anomalies observed in the 1970s and the 1990s. Simulations, as well as Greenland and Iceland paleoclimate records, indicate that coherent bidecadal cycles were excited following five Agung-like volcanic eruptions of the last millennium. Climate simulations and a conceptual model reveal that destructive interference caused by the Pinatubo 1991 eruption may have damped the observed decreasing trend of the AMOC in the 2000s. Our results imply a long-lasting climatic impact and predictability following the next Agung-like eruption.

  8. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2017-08-22

    The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is the Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing. The set of variables in this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted as GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model, built with the remaining variable set, is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with similar or improved predictability compared to those from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method
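    A single elimination pass in the spirit of UVE-PLS is sketched below: random noise variables are appended to the predictor matrix, jackknifed PLS fits give a reliability statistic for every coefficient, and real variables are kept only if their reliability exceeds the largest value observed among the noise variables. The iterated search for a global RMSECV minimum that defines GME-UVE is not reproduced.

      # One pass of UVE-style variable elimination for PLS on synthetic data.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(9)
      n, p = 60, 100
      X = rng.normal(size=(n, p))
      y = X[:, :5] @ np.array([1.0, -0.8, 0.6, 0.5, -0.4]) + rng.normal(0, 0.3, n)

      noise = rng.normal(size=(n, p))              # artificial uninformative variables
      Xa = np.hstack([X, noise])

      coefs = []
      for i in range(n):                           # leave-one-out jackknife of PLS coefficients
          idx = np.delete(np.arange(n), i)
          pls = PLSRegression(n_components=3).fit(Xa[idx], y[idx])
          coefs.append(pls.coef_.ravel())
      coefs = np.array(coefs)

      reliability = np.abs(coefs.mean(axis=0)) / coefs.std(axis=0)
      cutoff = reliability[p:].max()               # worst case among the noise variables
      kept = np.flatnonzero(reliability[:p] > cutoff)
      print("variables retained:", kept)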

  9. A model for estimating pathogen variability in shellfish and predicting minimum depuration times.

    Science.gov (United States)

    McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick

    2018-01-01

    Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist
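
    The two-stage structure of the model (an initial distribution of loads across the population, followed by decay during depuration) can be illustrated with a small simulation. The lognormal load distribution, first-order decay, upper-percentile "worst case" criterion and all parameter values below are assumptions chosen for illustration; they are not the distributions or U.K. harvest-site parameters fitted in the study.

```python
# Illustrative sketch of a two-stage depuration model: (i) a lognormal
# distribution of initial norovirus loads across a shellfish population and
# (ii) first-order (exponential) decay of each animal's load during depuration.
import numpy as np

def min_depuration_time(mean_log10_load, sd_log10_load, decay_rate_per_hour,
                        limit_copies_per_g, percentile=95, max_hours=500):
    """Smallest depuration time (hours) at which the chosen upper percentile of
    the population's load distribution falls below the management limit."""
    rng = np.random.default_rng(1)
    initial = 10 ** rng.normal(mean_log10_load, sd_log10_load, size=100_000)
    for t in range(max_hours + 1):
        loads = initial * np.exp(-decay_rate_per_hour * t)
        if np.percentile(loads, percentile) <= limit_copies_per_g:
            return t
    return None  # limit not reached within max_hours

# Example with hypothetical parameters: mean 2.5 log10 copies/g, sd 0.5,
# 1% per hour decay, 200 copies/g management limit.
print(min_depuration_time(2.5, 0.5, 0.01, 200))
```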

  10. Drivers of time-activity budget variability during breeding in a pelagic seabird.

    Directory of Open Access Journals (Sweden)

    Gavin M Rishworth

    Full Text Available During breeding, animal behaviour is particularly sensitive to environmental and food resource availability. Additionally, factors such as sex, body condition, and offspring developmental stage can influence behaviour. Amongst seabirds, behaviour is generally predictably affected by local foraging conditions and has therefore been suggested as a potentially useful proxy to indicate prey state. However, besides prey availability and distribution, a range of other variables also influence seabird behaviour, and these need to be accounted for to increase the signal-to-noise ratio when assessing specific characteristics of the environment based on behavioural attributes. The aim of this study was to use continuous, fine-scale time-activity budget data from a pelagic seabird (Cape gannet, Morus capensis) to determine the influence of intrinsic (sex and body condition) and extrinsic (offspring and time) variables on parent behaviour during breeding. Foraging trip duration and chick provisioning rates were clearly sex-specific and associated with chick developmental stage. Females made fewer, longer foraging trips and spent less time at the nest during chick provisioning. These sex-specific differences became increasingly apparent with chick development. Additionally, parents in better body condition spent longer periods at their nests, and those which returned later in the day had longer overall nest attendance bouts. Using recent technological advances, this study provides new insights into the foraging behaviour of breeding seabirds, particularly during the post-guarding phase. The biparental strategy of chick provisioning revealed in this study appears to be an example where the costs of egg development to the female are balanced by paternal-dominated chick provisioning, particularly as the chick nears fledging.

  11. Variable selection in multiple linear regression: The influence of ...

    African Journals Online (AJOL)

    provide an indication of whether the fit of the selected model improves or ... and calculate M(−i); quantify the influence of case i in terms of a function, f(•), of M and ..... [21] Venter JH & Snyman JLJ, 1997, Linear model selection based on risk ...

  12. Spatial and temporal variation in selection of genes associated with pearl millet varietal quantitative traits in situ

    Directory of Open Access Journals (Sweden)

    Cedric Mariac

    2016-07-01

    Full Text Available Ongoing global climate changes imply new challenges for agriculture. Whether plants and crops can adapt to such rapid changes is still a widely debated question. We previously showed adaptation in the form of earlier flowering in pearl millet at the scale of a whole country over three decades. However, this analysis did not deal with the variability of year-to-year selection. To understand and possibly manage plant and crop adaptation, we need more knowledge of how selection acts in situ. Is selection gradual or abrupt, and does it vary in space and over time? In the present study, we tracked the evolution of allele frequency in two genes associated with pearl millet phenotypic variation in situ. We sampled 17 populations of cultivated pearl millet over a period of two years. We tracked changes in allele frequencies in these populations by genotyping more than seven thousand individuals. We demonstrate that several allele frequency changes are compatible with selection, after correcting for the allele frequency changes expected under genetic drift. We found marked variation in allele frequencies from year to year, suggesting a selection effect that varies in space and over time. We estimated the strength of selection associated with the variations in allele frequency. Our results suggest that the polymorphism maintained at the genes we studied is partially explained by the spatial and temporal variability of selection. In response to environmental changes, traditional pearl millet varieties could rapidly adapt thanks to this available functional variability.
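
    The drift correction mentioned above can be illustrated with a simple null-model simulation: under pure genetic drift, how large a year-to-year change in allele frequency would be expected by chance? The Wright-Fisher binomial model, the single-generation time step and all parameter values (sample sizes, effective population size, frequencies) in the sketch below are hypothetical and do not reproduce the statistical procedure used in the study.

```python
# Minimal sketch of "correcting for drift": simulate the null distribution of a
# one-generation allele-frequency change under Wright-Fisher drift plus binomial
# sampling of genotyped individuals, then ask how often the null change is at
# least as large as the observed change.
import numpy as np

def drift_p_value(p0, p1_obs, n0, n1, ne, n_sims=100_000, seed=0):
    rng = np.random.default_rng(seed)
    p0_hat = rng.binomial(2 * n0, p0, n_sims) / (2 * n0)       # sampling noise, year 1
    p_next = rng.binomial(2 * ne, p0_hat, n_sims) / (2 * ne)   # one generation of drift
    p1_hat = rng.binomial(2 * n1, p_next, n_sims) / (2 * n1)   # sampling noise, year 2
    null_change = np.abs(p1_hat - p0_hat)
    return float(np.mean(null_change >= abs(p1_obs - p0)))

# Hypothetical example: frequency moves from 0.30 to 0.42 between years,
# 200 diploid individuals genotyped each year, assumed effective size Ne = 1000.
print(drift_p_value(0.30, 0.42, 200, 200, 1000))
```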

  13. The Performance of Variable Annuities

    OpenAIRE

    Michael J. McNamara; Henry R. Oppenheimer

    1991-01-01

    Variable annuities have become increasingly important in retirement plans. This paper provides an examination of the investment performance of variable annuities for the period year-end 1973 to year-end 1988. Returns, risk, and selectivity measures are analyzed for the sample of annuities, for individual variable annuities, and for subsamples of annuities with similar portfolio size and turnover. While the investment returns of variable annuities were greater than inflation over the period, t...

  14. Short timescale variability in the faint sky variability survey

    NARCIS (Netherlands)

    Morales-Rueda, L.; Groot, P.J.; Augusteijn, T.; Nelemans, G.A.; Vreeswijk, P.M.; Besselaar, E.J.M. van den

    2006-01-01

    We present the V-band variability analysis of the Faint Sky Variability Survey (FSVS). The FSVS combines colour and time variability information, from timescales of 24 minutes to tens of days, down to V = 24. We find that ~1% of all point sources are variable along the main sequence, reaching ~3.5%

  15. Increasing work-time influence: consequences for flexibility, variability, regularity and predictability.

    Science.gov (United States)

    Nabe-Nielsen, Kirsten; Garde, Anne Helene; Aust, Birgit; Diderichsen, Finn

    2012-01-01

    This quasi-experimental study investigated how an intervention aiming at increasing eldercare workers' influence on their working hours affected the flexibility, variability, regularity and predictability of the working hours. We used baseline (n = 296) and follow-up (n = 274) questionnaire data and interviews with intervention-group participants (n = 32). The work units in the intervention group designed their own intervention comprising either implementation of computerised self-scheduling (subgroup A), collection of information about the employees' work-time preferences by questionnaires (subgroup B), or discussion of working hours (subgroup C). Only computerised self-scheduling changed the working hours and the way they were planned. These changes implied more flexible but less regular working hours and an experience of less predictability and less continuity in the care of clients and in the co-operation with colleagues. In subgroup B and C, the participants ended up discussing the potential consequences of more work-time influence without actually implementing any changes. Employee work-time influence may buffer the adverse effects of shift work. However, our intervention study suggested that while increasing the individual flexibility, increasing work-time influence may also result in decreased regularity of the working hours and less continuity in the care of clients and co-operation with colleagues.

  16. Resolving the Conflict Between Associative Overdominance and Background Selection

    Science.gov (United States)

    Zhao, Lei; Charlesworth, Brian

    2016-01-01

    In small populations, genetic linkage between a polymorphic neutral locus and loci subject to selection, either against partially recessive mutations or in favor of heterozygotes, may result in an apparent selective advantage to heterozygotes at the neutral locus (associative overdominance) and a retardation of the rate of loss of variability by genetic drift at this locus. In large populations, selection against deleterious mutations has previously been shown to reduce variability at linked neutral loci (background selection). We describe analytical, numerical, and simulation studies that shed light on the conditions under which retardation vs. acceleration of loss of variability occurs at a neutral locus linked to a locus under selection. We consider a finite, randomly mating population initiated from an infinite population in equilibrium at a locus under selection. With mutation and selection, retardation occurs only when S, the product of twice the effective population size and the selection coefficient, is of order 1. With S >> 1, background selection always causes an acceleration of loss of variability. Apparent heterozygote advantage at the neutral locus is, however, always observed when mutations are partially recessive, even if there is an accelerated rate of loss of variability. With heterozygote advantage at the selected locus, loss of variability is nearly always retarded. The results shed light on experiments on the loss of variability at marker loci in laboratory populations and on the results of computer simulations of the effects of multiple selected loci on neutral variability. PMID:27182952
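
    A small forward simulation can make the role of the compound parameter S (twice the effective population size times the selection coefficient) concrete. The sketch below is an exploratory two-locus Wright-Fisher simulation with recurrent, partially recessive deleterious mutation and recombination, tracking heterozygosity at a linked neutral locus; all parameter values are illustrative, and the code is not the analytical or simulation machinery used in the paper.

```python
# Exploratory two-locus Wright-Fisher sketch: heterozygosity at a neutral locus
# linked (recombination rate r) to a locus receiving recurrent partially
# recessive deleterious mutations. The regime is summarized by S = 2 * N * s.
import numpy as np

rng = np.random.default_rng(0)

def simulate(N=100, s=0.02, h=0.25, u=1e-3, r=0.001, generations=200):
    # Haplotype columns: [deleterious allele (0/1), neutral allele (0/1)];
    # rows 2i and 2i+1 are the two haplotypes of diploid individual i.
    hap = np.zeros((2 * N, 2), dtype=np.int64)
    hap[:, 1] = rng.integers(0, 2, 2 * N)           # neutral locus starts near 0.5
    het = []
    for _ in range(generations):
        h1, h2 = hap[0::2], hap[1::2]
        # Viability selection: fitness 1, 1-hs, 1-s for 0, 1, 2 deleterious copies.
        n_del = h1[:, 0] + h2[:, 0]
        w = np.where(n_del == 0, 1.0, np.where(n_del == 1, 1 - h * s, 1 - s))
        new = np.empty_like(hap)
        for slot in range(2):                        # one gamete from each chosen parent
            pa = rng.choice(N, size=N, p=w / w.sum())
            lead = rng.integers(0, 2, N)             # which parental haplotype leads
            g = np.where(lead[:, None] == 0, h1[pa], h2[pa])
            rec = rng.random(N) < r                  # recombination swaps the neutral allele
            g[rec, 1] = np.where(lead[rec] == 0, h2[pa[rec], 1], h1[pa[rec], 1])
            g[:, 0] |= rng.random(N) < u             # recurrent deleterious mutation
            new[slot::2] = g
        hap = new
        p = hap[:, 1].mean()
        het.append(2 * p * (1 - p))
    return np.array(het)

S = 2 * 100 * 0.02                                   # 2 * N_e * s for the parameters above
print(f"S = {S:.0f}; mean heterozygosity over the last 50 generations = "
      f"{simulate()[-50:].mean():.3f}")
```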

  17. Time-Consistent Strategies for a Multiperiod Mean-Variance Portfolio Selection Problem

    Directory of Open Access Journals (Sweden)

    Huiling Wu

    2013-01-01

    Full Text Available In recent years it has been common to derive precommitment strategies for Markowitz's mean-variance portfolio optimization problems, but much less is known about their time-consistent strategies. This paper takes a step toward investigating the time-consistent Nash equilibrium strategies for a multiperiod mean-variance portfolio selection problem. Under the assumption that the risk aversion is, respectively, a constant and a function of the current wealth level, we obtain explicit expressions for the time-consistent Nash equilibrium strategy and the equilibrium value function. Many interesting properties of the time-consistent results are identified through numerical sensitivity analysis and by comparing them with the classical precommitment solutions.
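
    For intuition about what a time-consistent (Nash equilibrium) strategy looks like, the sketch below solves a standard textbook version of the problem: one risky asset with i.i.d. excess returns, a constant risk-aversion coefficient, wealth-independent dollar amounts, and the mean-variance objective applied afresh at every date. This is an assumed simplified setting, not necessarily the exact model or notation of the paper; in this setting each date's first-order condition does not involve the other dates' amounts, so the equilibrium has a simple closed form.

```python
# Time-consistent dollar amounts for a simple multiperiod mean-variance problem:
# objective E_t[W_T] - (gamma/2) Var_t[W_T] at every date, i.i.d. excess returns
# with mean mu and variance sigma2, risk-free gross return Rf per period.
import numpy as np

def equilibrium_amounts(mu, sigma2, gamma, Rf, T):
    # Best response at date t, given wealth-independent future amounts:
    # u_t = mu / (gamma * sigma2 * Rf**(T-1-t)); independent of the other dates.
    return np.array([mu / (gamma * sigma2 * Rf ** (T - 1 - t)) for t in range(T)])

# Hypothetical parameters: 5% mean excess return, 20% volatility, gamma = 2,
# 2% per-period risk-free rate, 5 periods. The amount rises toward the final
# date because less risk-free compounding remains.
print(np.round(equilibrium_amounts(0.05, 0.2 ** 2, 2.0, 1.02, 5), 4))
```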

  18. The Propagation of Movement Variability in Time: A Methodological Approach for Discrete Movements with Multiple Degrees of Freedom

    Science.gov (United States)

    Krüger, Melanie; Straube, Andreas; Eggert, Thomas

    2017-01-01

    In recent years, theory-building in motor neuroscience and our understanding of the synergistic control of the redundant human motor system have significantly profited from the emergence of a range of different mathematical approaches to analyze the structure of movement variability. Approaches such as the Uncontrolled Manifold method or the Noise-Tolerance-Covariance decomposition method make it possible to detect and interpret changes in movement coordination due to, e.g., learning, external task constraints or disease, by analyzing the structure of within-subject, inter-trial movement variability. Whereas for cyclical movements (e.g., locomotion) mathematical approaches exist to investigate the propagation of movement variability in time (e.g., time series analysis), similar approaches are missing for discrete, goal-directed movements, such as reaching. Here, we propose canonical correlation analysis as a suitable method to analyze the propagation of within-subject variability across different time points during the execution of discrete movements. While similar analyses have already been applied for discrete movements with only one degree of freedom (DoF; e.g., Pearson's product-moment correlation), canonical correlation analysis makes it possible to evaluate the coupling of inter-trial variability across different time points along the movement trajectory for multiple-DoF effector systems, such as the arm. The theoretical analysis is illustrated by empirical data from a study on reaching movements under normal and disturbed proprioception. The results show increased movement duration, decreased movement amplitude, as well as altered movement coordination under ischemia, which results in a reduced complexity of movement control. Movement endpoint variability is not increased under ischemia. This suggests that healthy adults are able to immediately and efficiently adjust the control of complex reaching movements to compensate for the loss of proprioceptive information. Further, it is
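
    The proposed use of canonical correlation analysis can be sketched as follows: for each trial, take the joint configuration (multiple DoF) at two time points along the trajectory, remove the mean trajectory so only inter-trial deviations remain, and compute the canonical correlations between the two sets. The data below are synthetic, and the use of scikit-learn's CCA is an assumption of this sketch, not the authors' analysis pipeline.

```python
# Sketch: quantify how inter-trial variability at one time point of a reaching
# movement relates to variability at a later time point, with each time point
# described by several degrees of freedom (e.g., joint angles).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_trials, n_dof = 40, 4

# Synthetic joint-angle configurations at two time points, with shared
# trial-to-trial structure so that variability "propagates" between them.
shared = rng.normal(size=(n_trials, 2))
X_t1 = shared @ rng.normal(size=(2, n_dof)) + 0.5 * rng.normal(size=(n_trials, n_dof))
X_t2 = shared @ rng.normal(size=(2, n_dof)) + 0.5 * rng.normal(size=(n_trials, n_dof))

# Remove the mean trajectory so only inter-trial deviations remain.
X_t1 -= X_t1.mean(axis=0)
X_t2 -= X_t2.mean(axis=0)

cca = CCA(n_components=2).fit(X_t1, X_t2)
U, V = cca.transform(X_t1, X_t2)
canonical_r = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(2)]
print(np.round(canonical_r, 3))   # strength of variability coupling across time
```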

  19. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Directory of Open Access Journals (Sweden)

    Yeqing Zhang

    2018-02-01

    Full Text Available For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully.
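
    The acquisition metric defined above (the ratio of the highest to the second-highest correlation value in the carrier-frequency and code-phase search space) and the resampling idea can be sketched with FFT-based circular correlation. The signal model, decimation step and parameter values below are simplified assumptions for illustration, not the paper's receiver implementation or its GPS L2C processing chain.

```python
# Sketch of GNSS-style acquisition: FFT-based circular correlation over code
# phase for each carrier-frequency bin, with the detection statistic defined as
# highest-over-second-highest correlation in the whole search space.
import numpy as np

def acquisition_ratio(signal, code, fs, doppler_bins, decimate=1):
    s = signal[::decimate]                     # crude resampling/decimation step
    c = code[::decimate]
    t = np.arange(len(s)) / (fs / decimate)
    code_fft = np.conj(np.fft.fft(c))
    peaks = []
    for fd in doppler_bins:
        wiped = s * np.exp(-2j * np.pi * fd * t)                   # carrier wipe-off
        corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft))   # circular correlation
        peaks.append(corr)
    flat = np.sort(np.concatenate(peaks))
    return flat[-1] / flat[-2]

# Toy example: a +/-1 spreading code, circularly shifted, with a 1 kHz Doppler
# offset and additive complex noise.
rng = np.random.default_rng(0)
fs, n = 100_000, 10_000
code = rng.choice([-1.0, 1.0], n)
t = np.arange(n) / fs
rx = np.roll(code, 1234) * np.exp(2j * np.pi * 1000.0 * t) \
     + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))
print(round(acquisition_ratio(rx, code, fs, np.arange(-2000.0, 2001.0, 250.0)), 2))
```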

  20. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Science.gov (United States)

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-01-01

    For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301